
Showing papers in "IEEE Signal Processing Magazine in 2014"


Journal ArticleDOI
TL;DR: This article considers product graphs as a graph model that helps extend the application of DSPG methods to large data sets through efficient implementation based on parallelization and vectorization, and relates the presented framework to existing methods for large-scale data processing.
Abstract: Analysis and processing of very large data sets, or big data, poses a significant challenge. Massive data sets are collected and studied in numerous domains, from engineering sciences to social networks, biomolecular research, commerce, and security. Extracting valuable information from big data requires innovative approaches that efficiently process large amounts of data as well as handle and, moreover, utilize their structure. This article discusses a paradigm for large-scale data analysis based on the discrete signal processing (DSP) on graphs (DSPG). DSPG extends signal processing concepts and methodologies from the classical signal processing theory to data indexed by general graphs. Big data analysis presents several challenges to DSPG, in particular, in filtering and frequency analysis of very large data sets. We review fundamental concepts of DSPG, including graph signals and graph filters, graph Fourier transform, graph frequency, and spectrum ordering, and compare them with their counterparts from the classical signal processing theory. We then consider product graphs as a graph model that helps extend the application of DSPG methods to large data sets through efficient implementation based on parallelization and vectorization. We relate the presented framework to existing methods for large-scale data processing and illustrate it with an application to data compression.
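
To make the graph signal, graph filter, and graph Fourier transform notions concrete, here is a minimal numpy sketch in the spirit of the DSPG framework (the toy graph, the signal, and the filter coefficients are illustrative assumptions, not taken from the article):

    import numpy as np

    # Toy directed cycle graph on six nodes; its adjacency matrix A acts as the graph shift.
    N = 6
    A = np.roll(np.eye(N), 1, axis=1)

    # A graph signal assigns one value to each node.
    s = np.array([1.0, 2.0, 0.5, -1.0, 0.0, 3.0])

    # Graph Fourier transform (DSPG convention): expand s in the eigenbasis of A;
    # the eigenvalues play the role of graph frequencies.
    eigvals, V = np.linalg.eig(A)
    s_hat = np.linalg.solve(V, s)

    # A graph filter is a polynomial in the shift, here h(A) = 0.5*I + 0.5*A
    # (each node mixes its own value with its neighbor's).
    y = 0.5 * s + 0.5 * (A @ s)

For this cyclic graph the construction reduces to the classical DFT and circular filtering, which is exactly the correspondence with classical signal processing that the article builds on.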

713 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the challenging problem of hyperspectral image classification, which has recently gained in popularity and attracted the interest of other scientific disciplines such as machine learning, image processing, and computer vision.
Abstract: The technological evolution of optical sensors over the last few decades has provided remote sensing analysts with rich spatial, spectral, and temporal information. In particular, the increase in spectral resolution of hyperspectral images (HSIs) and infrared sounders opens the doors to new application domains and poses new methodological challenges in data analysis. HSIs allow the characterization of objects of interest (e.g., land-cover classes) with unprecedented accuracy and help keep inventories up to date. Improvements in spectral resolution have called for advances in signal processing and exploitation algorithms. This article focuses on the challenging problem of hyperspectral image classification, which has recently gained in popularity and attracted the interest of other scientific disciplines such as machine learning, image processing, and computer vision. In the remote sensing community, the term classification is used to denote the process that assigns single pixels to a set of classes, while the term segmentation is used for methods that aggregate pixels into objects, which are then assigned to a class.

599 citations


Journal ArticleDOI
TL;DR: Image inpainting restores missing or damaged image areas, with applications ranging from object removal in a context of editing to disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras.
Abstract: Image inpainting refers to the process of restoring missing or damaged areas in an image. This field of research has been very active over recent years, boosted by numerous applications: restoring images from scratches or text overlays, loss concealment in a context of impaired image transmission, object removal in a context of editing, or disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras. Although earlier work dealing with disocclusion has been published in [1], the term inpainting first appeared in [2] by analogy with a process used in art restoration.

518 citations


Journal ArticleDOI
TL;DR: The remarkable advantage of CASSI is that the entire data cube is sensed with just a few FPA measurements and, in some cases, with as little as a single FPA shot.
Abstract: Imaging spectroscopy involves the sensing of a large amount of spatial information across a multitude of wavelengths. Conventional approaches to hyperspectral sensing scan adjacent zones of the underlying spectral scene and merge the results to construct a spectral data cube. Push broom spectral imaging sensors, for instance, capture a spectral cube with one focal plane array (FPA) measurement per spatial line of the scene [1], [2]. Spectrometers based on optical bandpass filters sequentially scan the scene by tuning the bandpass filters in steps. The disadvantage of these techniques is that they require scanning a number of zones linearly in proportion to the desired spatial and spectral resolution. This article surveys compressive coded aperture spectral imagers, also known as coded aperture snapshot spectral imagers (CASSI) [1], [3], [4], which naturally embody the principles of compressive sensing (CS) [5], [6]. The remarkable advantage of CASSI is that the entire data cube is sensed with just a few FPA measurements and, in some cases, with as little as a single FPA shot.
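
The CS principle behind such imagers can be illustrated with a generic sketch; the random sensing matrix below is a stand-in assumption for the actual CASSI forward model (coded aperture plus dispersion), and the reconstruction uses plain orthogonal matching pursuit rather than the solvers used in practice:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 64, 24, 3                      # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the imager's sensing operator
    y = Phi @ x                              # m << n compressive measurements

    # Orthogonal matching pursuit: greedily add the atom most correlated with the residual.
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print(np.linalg.norm(x - x_hat))         # near zero: the sparse signal is recovered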

487 citations


Journal ArticleDOI
TL;DR: Virtualization makes it possible to run multiple operating systems and multiple applications over the same machine (or set of machines) while guaranteeing isolation and protection of the programs and their data; scaling the number of virtual machines on demand improves the overall system computational efficiency.
Abstract: Current estimates of mobile data traffic in the years to come foresee a 1,000-fold increase of mobile data traffic in 2020 with respect to 2010, or, equivalently, a doubling of mobile data traffic every year. This unprecedented growth demands a significant increase of wireless network capacity. Even if the current evolution of fourth-generation (4G) systems and, in particular, the advancements of the long-term evolution (LTE) standardization process foresee a significant capacity improvement with respect to third-generation (3G) systems, the European Telecommunications Standards Institute (ETSI) has established a roadmap toward the fifth-generation (5G) system, with the aim of deploying a commercial system by the year 2020 [1]. The European project named "Mobile and Wireless Communications Enablers for the 2020 Information Society" (METIS), launched in 2012, represents one of the first international and large-scale research projects on fifth generation (5G) [2]. In parallel with this unparalleled growth of data traffic, our everyday life experience shows an increasing habit of running a plethora of applications specifically devised for mobile devices (smartphones, tablets, laptops) for entertainment, health care, business, social networking, traveling, news, etc. However, the spectacular growth in wireless traffic generated by this lifestyle is not matched by a parallel improvement in mobile handsets' batteries, whose lifetime is not improving at the same pace [3]. This creates a widening gap between the energy required to run sophisticated applications and the energy available on the mobile handset. A possible way to overcome this obstacle is to enable mobile devices, whenever possible and convenient, to offload their most energy-consuming tasks to nearby fixed servers. This strategy has been studied for a long time and is reported in the literature under different names, such as cyberforaging [4] or computation offloading [5], [6]. In recent years, a strong impulse to computation offloading has come through cloud computing (CC), which enables users to utilize resources on demand. The resources made available by a cloud service provider are: 1) infrastructure, such as network devices, storage, servers, etc.; 2) platforms, such as operating systems, offering an integrated environment for developing and testing custom applications; and 3) software, in the form of application programs. These three kinds of services are labeled, respectively, infrastructure as a service, platform as a service, and software as a service. In particular, one of the key features of CC is virtualization, which makes it possible to run multiple operating systems and multiple applications over the same machine (or set of machines), while guaranteeing isolation and protection of the programs and their data. Through virtualization, the number of virtual machines (VMs) can scale on demand, thus improving the overall system computational efficiency. Mobile CC (MCC) is a specific case of CC where the user accesses the cloud services through a mobile handset [5]. The major limitations of today's MCC are the energy consumption associated with radio access and the latency experienced in reaching the cloud provider through a wide area network (WAN). Mobile users located at the edge of macrocellular networks are particularly disadvantaged in terms of power consumption, and, furthermore, it is very difficult to control latency over a WAN.
As pointed out in [7]-[9], humans are acutely sensitive to delay and jitter: as latency increases, interactive response suffers. Since the interaction times foreseen in 5G systems, in particular in the so-called tactile Internet [10], are quite small (in the order of milliseconds), a strict latency control must be somehow incorporated in near future MCC. Meeting this constraint requires a deep rethinking of the overall service chain, from the physical layer up to virtualization.

458 citations


Journal ArticleDOI
TL;DR: This article provides a review of some modulation formats suited for 5G, enriched by a comparative analysis of their performance in a cellular environment, and by a discussion on their interactions with specific 5G ingredients.
Abstract: Fifth-generation (5G) cellular communications promise to deliver the gigabit experience to mobile users, with a capacity increase of up to three orders of magnitude with respect to current long-term evolution (LTE) systems. There is widespread agreement that such an ambitious goal will be realized through a combination of innovative techniques involving different network layers. At the physical layer, the orthogonal frequency division multiplexing (OFDM) modulation format, along with its multiple-access strategy orthogonal frequency division multiple access (OFDMA), is not taken for granted, and several alternatives promising larger values of spectral efficiency are being considered. This article provides a review of some modulation formats suited for 5G, enriched by a comparative analysis of their performance in a cellular environment, and by a discussion on their interactions with specific 5G ingredients. The interaction with a massive multiple-input, multiple-output (MIMO) system is also discussed by employing real channel measurements.

446 citations


Journal ArticleDOI
TL;DR: Hyperspectral sensors measure the reflective (or emissive) properties of objects in the visible and short-wave infrared regions (or the mid-wave and long-wave IR regions) of the spectrum; processing these data allows algorithms to detect and identify targets of interest in a hyperspectral scene by exploiting the spectral signatures of the materials.
Abstract: Over the last decade, hyperspectral imagery (HSI) obtained by remote sensing systems has provided significant information about the spectral characteristics of the materials in the scene. Typically, a hyperspectral spectrometer provides hundreds of narrow contiguous bands over a wide range of the electromagnetic spectrum. Hyperspectral sensors measure the reflective (or emissive) properties of objects in the visible and short-wave infrared (IR) regions (or the mid-wave and long-wave IR regions) of the spectrum. Processing of these data allows algorithms to detect and identify targets of interest in a hyperspectral scene by exploiting the spectral signatures of the materials [1], [2].

444 citations


Journal ArticleDOI
TL;DR: It is argued that location information can aid in addressing several of the key challenges in 5G, complementary to existing and planned technological developments.
Abstract: Fifth-generation (5G) networks will be the first generation to benefit from location information that is sufficiently precise to be leveraged in wireless network design and optimization. We argue that location information can aid in addressing several of the key challenges in 5G, complementary to existing and planned technological developments. These challenges include an increase in traffic and number of devices, robustness for mission-critical services, and a reduction in total energy consumption and latency. This article gives a broad overview of the growing research area of location-aware communications across different layers of the protocol stack. We highlight several promising trends, tradeoffs, and pitfalls.

424 citations


Journal ArticleDOI
TL;DR: The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides would be used to establish better methods.
Abstract: Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying materials present in a captured scene, as well as their compositions, by using the high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from an SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing - blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts - discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research - are elegant and very different from statistical independence-based BSS approaches established in the SP field. Moreover, the latest research on blind HU is rapidly adopting advanced techniques, such as those in sparse SP and optimization. The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides would be used to establish better methods.

419 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose to design beamforming vectors (that describe the amplitudes and phases) to have large inner products with the vectors describing the intended channels and small inner products with nonintended user channels.
Abstract: Transmit beamforming is a versatile technique for signal transmission from an array of antennas to one or multiple users [1]. In wireless communications, the goal is to increase the signal power at the intended user and reduce interference to nonintended users. A high signal power is achieved by transmitting the same data signal from all antennas but with different amplitudes and phases, such that the signal components add coherently at the user. Low interference is accomplished by making the signal components add destructively at nonintended users. This corresponds mathematically to designing beamforming vectors (that describe the amplitudes and phases) to have large inner products with the vectors describing the intended channels and small inner products with nonintended user channels.
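
A small numpy sketch of this inner-product view (an illustration under an assumed i.i.d. Rayleigh channel, not a result from the article): maximum ratio transmission aligns each beam with its intended channel, while zero-forcing additionally makes each beam orthogonal to the nonintended users' channels.

    import numpy as np

    rng = np.random.default_rng(1)
    M, K = 8, 3                               # transmit antennas, single-antenna users
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    # Maximum ratio transmission: each beamforming vector is the normalized intended
    # channel, maximizing the inner product with that user's channel.
    W_mrt = H.conj().T / np.linalg.norm(H, axis=1)

    # Zero-forcing: the pseudoinverse makes each beam (column) have near-zero inner
    # product with the channels of the nonintended users.
    W_zf = np.linalg.pinv(H)
    W_zf = W_zf / np.linalg.norm(W_zf, axis=0)

    print(np.round(np.abs(H @ W_zf), 3))      # approximately diagonal: little inter-user interference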

417 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an overview of recent advances in nonlinear unmixing modeling, which is needed when the linear mixing model (LMM) breaks down, for instance, in the presence of multiscattering effects or intimate interactions.
Abstract: When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may be not valid, and other nonlinear models need to be considered, for instance, when there are multiscattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
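
For reference, the linear baseline that these nonlinear models generalize can be sketched in a few lines; the synthetic endmembers and abundances below are assumptions, and the abundance estimate uses scipy's nonnegative least squares with a simple renormalization rather than a fully constrained (sum-to-one) solver:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    bands, p, pixels = 50, 3, 100
    E = rng.random((bands, p))                                     # endmember spectra (columns)
    A_true = rng.dirichlet(np.ones(p), size=pixels).T              # abundances on the simplex
    Y = E @ A_true + 0.001 * rng.standard_normal((bands, pixels))  # LMM: each pixel is E @ a + noise

    # Per-pixel abundance estimation: nonnegative least squares, then renormalize.
    A_hat = np.array([nnls(E, Y[:, j])[0] for j in range(pixels)]).T
    A_hat = A_hat / A_hat.sum(axis=0, keepdims=True)

    print(np.abs(A_hat - A_true).max())       # small here, because the data were generated linearly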

Journal ArticleDOI
TL;DR: Tensor decompositions are at the core of many blind source separation (BSS) algorithms, either explicitly or implicitly; in particular, the canonical polyadic (CP) decomposition plays a central role in the identification of underdetermined mixtures.
Abstract: Tensor decompositions are at the core of many blind source separation (BSS) algorithms, either explicitly or implicitly. In particular, the canonical polyadic (CP) tensor decomposition plays a central role in the identification of underdetermined mixtures. Despite some similarities, CP and singular value decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief introduction.
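
A compact numpy sketch of the CP model and its recovery by alternating least squares (an illustrative assumption, not the article's derivations): a rank-R tensor is built as a sum of R rank-1 terms, and the factor matrices are then re-estimated from the tensor alone.

    import numpy as np

    rng = np.random.default_rng(0)
    I, J, K, R = 6, 5, 4, 2                      # tensor dimensions and CP rank

    def khatri_rao(B, C):
        """Column-wise Khatri-Rao product, shape (J*K, R)."""
        return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

    # CP model: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r] (a sum of R rank-1 tensors).
    A, B, C = [rng.standard_normal((d, R)) for d in (I, J, K)]
    T = np.einsum('ir,jr,kr->ijk', A, B, C)

    # Alternating least squares: fix two factors, solve a linear system for the third.
    Ah, Bh, Ch = [rng.standard_normal((d, R)) for d in (I, J, K)]
    for _ in range(50):
        Ah = np.linalg.lstsq(khatri_rao(Bh, Ch), T.reshape(I, -1).T, rcond=None)[0].T
        Bh = np.linalg.lstsq(khatri_rao(Ah, Ch), np.moveaxis(T, 1, 0).reshape(J, -1).T, rcond=None)[0].T
        Ch = np.linalg.lstsq(khatri_rao(Ah, Bh), np.moveaxis(T, 2, 0).reshape(K, -1).T, rcond=None)[0].T

    T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
    print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))   # near zero: factors recovered up to scaling/permutation

Unlike the SVD, the factor matrices here are not constrained to be orthogonal, which is one of the differences between tensors and matrices that the article points out.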

Journal ArticleDOI
TL;DR: This article motivates and provides a review of methods that account for spectral variability during hyperspectral unmixing and endmember estimation, along with a discussion of topics for future work in this area.
Abstract: Variable illumination and environmental, atmospheric, and temporal conditions cause the measured spectral signature for a material to vary within hyperspectral imagery. By ignoring these variations, errors are introduced and propagated throughout hyperspectral image analysis. To develop accurate spectral unmixing and endmember estimation methods, a number of approaches that account for spectral variability have been developed. This article motivates and provides a review for methods that account for spectral variability during hyperspectral unmixing and endmember estimation and a discussion on topics for future work in this area.

Journal ArticleDOI
TL;DR: This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks.
Abstract: This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new Big Data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.
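
As one hedged illustration of the first-order methods the article surveys, here is a proximal-gradient (ISTA) sketch for the lasso; the problem sizes, noise level, and regularization weight are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 100, 300
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    # Proximal gradient (ISTA) for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    lam = 0.05
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient of the smooth part
    x = np.zeros(n)
    for _ in range(500):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the least-squares term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding: prox of the l1 term

    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small: the sparse vector is recovered

Each iteration costs only matrix-vector products, which is why methods of this type (and their accelerated, randomized, and distributed variants) scale to the regimes the article targets.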

Journal ArticleDOI
TL;DR: The benefits that cloud computing offers for fifth-generation (5G) mobile networks are explored and the implications on the signal processing algorithms are investigated.
Abstract: Cloud computing draws significant attention in the information technology (IT) community as it provides ubiquitous on-demand access to a shared pool of configurable computing resources with minimum management effort. It is also gaining impact in the communication technology (CT) community and is currently discussed as an enabler for flexible, cost-efficient, and more powerful mobile network implementations. Although centralized baseband pools are already investigated for the radio access network (RAN) to allow for efficient resource usage and advanced multicell algorithms, these technologies still require dedicated hardware and do not offer the same characteristics as cloud-computing platforms, i.e., on-demand provisioning, virtualization, resource pooling, elasticity, service metering, and multitenancy. However, these properties of cloud computing are key enablers for future mobile communication systems characterized by an ultradense deployment of radio access points (RAPs) leading to severe multicell interference in combination with a significant increase of the number of access nodes and huge fluctuations of the rate requirements over time. In this article, we will explore the benefits that cloud computing offers for fifth-generation (5G) mobile networks and investigate the implications on the signal processing algorithms.

Journal ArticleDOI
TL;DR: This article demonstrates the opportunities provided by manifold learning for classification of remotely sensed data, while noting that limitations and opportunities remain for both research and applications.
Abstract: Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. HSI data often lie on sparse, nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques. In this article, we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data. However, limitations and opportunities remain both for research and applications. Although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target, nonlinearities are not always exhibited in the data, particularly at lower spatial resolutions, so users should always evaluate the inherent nonlinearity in the data. Manifold learning is data driven, and as such, results are strongly dependent on the characteristics of the data, and one method will not consistently provide the best results. Nonlinear manifold-learning methods require parameter tuning, although experimental results are typically stable over a range of values, and have higher computational overhead than linear methods, which is particularly relevant for large-scale remote sensing data sets. Opportunities for advancing manifold learning also exist for analysis of hyperspectral and multisource remotely sensed data. Manifolds are assumed to be inherently smooth, an assumption that some data sets may violate, and data often contain classes whose spectra are distinctly different, resulting in multiple manifolds or submanifolds that cannot be readily integrated with a single manifold representation. Developing appropriate characterizations that exploit the unique characteristics of these submanifolds for a particular data set is an open research problem for which hierarchical manifold structures appear to have merit. To date, most work in manifold learning has focused on feature extraction from single images, assuming stationarity across the scene. Research is also needed in joint exploitation of global and local embedding methods in dynamic, multitemporal environments and integration with semisupervised and active learning.

Journal ArticleDOI
TL;DR: A survey of the work in this area with emphasis on advanced signal processing solutions based on network information theoretic concepts is provided, illustrating the considerable performance gains to be expected for standard cellular models.
Abstract: Cloud radio access networks (C-RANs) provide a novel architecture for next-generation wireless cellular systems whereby the baseband processing is migrated from the base stations (BSs) to a control unit (CU) in the "cloud." The BSs, which operate as radio units (RUs), are connected via fronthaul links to the managing CU. The fronthaul links carry information about the baseband signals, in the uplink from the RUs to the CU and vice versa in the downlink, in the form of quantized in-phase and quadrature (IQ) samples. Due to the large bit rate produced by the quantized IQ signals, compression prior to transmission on the fronthaul links is deemed to be of critical importance and is receiving considerable attention. This article provides a survey of the work in this area with emphasis on advanced signal processing solutions based on network information theoretic concepts. Analysis and numerical results illustrate the considerable performance gains to be expected for standard cellular models.

Journal ArticleDOI
TL;DR: A survey of recent research on sparsity-driven synthetic aperture radar (SAR) imaging, including the analysis and synthesis-based sparse signal representation formulations for SAR image formation, and recent work on compressed sensing (CS)-based analysis and design of SAR sensing missions.
Abstract: This article presents a survey of recent research on sparsity-driven synthetic aperture radar (SAR) imaging. In particular, it reviews 1) the analysis and synthesis-based sparse signal representation formulations for SAR image formation together with the associated imaging results, 2) sparsity-based methods for wide-angle SAR imaging and anisotropy characterization, 3) sparsity-based methods for joint imaging and autofocusing from data with phase errors, 4) techniques for exploiting sparsity for SAR imaging of scenes containing moving objects, and 5) recent work on compressed sensing (CS)-based analysis and design of SAR sensing missions.

Journal ArticleDOI
TL;DR: This article contributes to the ongoing cross-disciplinary efforts in data science by putting forth encompassing models capturing a wide range of SP-relevant data analytic tasks, such as principal component analysis (PCA), dictionary learning (DL), compressive sampling (CS), and subspace clustering.
Abstract: With pervasive sensors continuously collecting and storing massive amounts of information, there is no doubt this is an era of data deluge. Learning from these large volumes of data is expected to bring significant science and engineering advances along with improvements in quality of life. However, with such a big blessing come big challenges. Running analytics on voluminous data sets by central processors and storage units seems infeasible, and with the advent of streaming data sources, learning must often be performed in real time, typically without a chance to revisit past entries. Workhorse signal processing (SP) and statistical learning tools have to be re-examined in today's high-dimensional data regimes. This article contributes to the ongoing cross-disciplinary efforts in data science by putting forth encompassing models capturing a wide range of SP-relevant data analytic tasks, such as principal component analysis (PCA), dictionary learning (DL), compressive sampling (CS), and subspace clustering. It offers scalable architectures and optimization algorithms for decentralized and online learning problems, while revealing fundamental insights into the various analytic and implementation tradeoffs involved. Extensions of the encompassing models to timely data-sketching, tensor- and kernel-based learning tasks are also provided. Finally, the close connections of the presented framework with several big data tasks, such as network visualization, decentralized and dynamic estimation, prediction, and imputation of network link load traffic, as well as imputation in tensor-based medical imaging are highlighted.
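
One concrete instance of the streaming theme, processing each datum once without revisiting past entries, is online estimation of the principal component with Oja's rule; the data model, step size, and single-component setting below are illustrative assumptions, and the article's framework is considerably broader:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20
    u = rng.standard_normal(d)
    u = u / np.linalg.norm(u)                 # dominant direction of the streaming data

    w = rng.standard_normal(d)
    w = w / np.linalg.norm(w)
    eta = 0.01
    for _ in range(20000):
        x = 3.0 * rng.standard_normal() * u + 0.3 * rng.standard_normal(d)   # one streaming sample
        y = w @ x
        w = w + eta * y * (x - y * w)         # Oja's rule: online principal component update

    print(abs(w @ u))                         # close to 1: w aligns with the top principal direction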

Journal ArticleDOI
TL;DR: The objective of this article is to provide a tutorial overview of detection algorithms used in current hyperspectral imaging systems that operate in the reflective part of the spectrum (0.4-2.5 μm).
Abstract: Hyperspectral imaging applications are many and span civil, environmental, and military needs. Typical examples include the detection of specific terrain features and vegetation, mineral, or soil types for resource management; detecting and characterizing materials, surfaces, or paints; the detection of man-made materials in natural backgrounds for the purpose of search and rescue; the detection of specific plant species for the purposes of counter narcotics; and the detection of military vehicles for the purpose of defense and intelligence. The objective of this article is to provide a tutorial overview of detection algorithms used in current hyperspectral imaging systems that operate in the reflective part of the spectrum (0.4-2.5 μm). The same algorithms might be used in the long-wave infrared spectrum; however, the phenomenology is quite different. The covered topics and the presentation style have been chosen to illustrate the strong couplings among the underlying phenomenology, the theoretical framework for algorithm development and analysis, and the requirements of practical applications.
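
As a minimal sketch of one classical member of this family, the spectral matched filter is shown below on synthetic data; the background statistics, target signature, and dimensions are assumptions (in practice they are estimated from the imagery and a spectral library):

    import numpy as np

    rng = np.random.default_rng(0)
    bands, pixels = 50, 5000
    mixing = rng.standard_normal((bands, bands))
    background = rng.standard_normal((pixels, bands)) @ mixing.T * 0.1 + 1.0   # correlated clutter
    target = rng.random(bands)                                                 # target signature

    # Spectral matched filter: whiten by the background covariance and project
    # onto the mean-removed target signature.
    mu = background.mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(background, rowvar=False))
    d = Sigma_inv @ (target - mu)
    norm = np.sqrt((target - mu) @ d)

    scores_bg = (background - mu) @ d / norm                  # background scores: ~zero mean, unit variance
    score_tgt = (0.5 * (mu + target) - mu) @ d / norm         # a pixel with 50% target fill
    print(scores_bg.std(), score_tgt)                         # the target-bearing pixel scores far above the clutter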

Journal ArticleDOI
TL;DR: The design of future networks calls for new optimization tools that properly handle the existence of multiple objectives and the tradeoffs between them, since these objectives are often conflicting, such that improvements in one objective lead to degradation in the others.
Abstract: The evolution of cellular networks is driven by the dream of ubiquitous wireless connectivity: any data service is instantly accessible everywhere. With each generation of cellular networks, we have moved closer to this wireless dream; first by delivering wireless access to voice communications, then by providing wireless data services, and recently by delivering a Wi-Fi-like experience with wide-area coverage and user mobility management. The support for high data rates has been the main objective in recent years [1], as seen from the academic focus on sum-rate optimization and the efforts from standardization bodies to meet the peak rate requirements specified in IMT-Advanced. In contrast, a variety of metrics/objectives are put forward in the technological preparations for fifth-generation (5G) networks: higher peak rates, improved coverage with uniform user experience, higher reliability and lower latency, better energy efficiency (EE), lower-cost user devices and services, better scalability with number of devices, etc. These multiple objectives are coupled, often in a conflicting manner such that improvements in one objective lead to degradation in the other objectives. Hence, the design of future networks calls for new optimization tools that properly handle the existence of multiple objectives and tradeoffs between them.

Journal ArticleDOI
TL;DR: A case study that interprets the output of a melody extraction algorithm for specific excerpts and a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign are provided.
Abstract: Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade, melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of "melody" from both musical and signal processing perspectives and provide a case study that interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation, and applications that build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.

Journal ArticleDOI
TL;DR: Hyperspectral imaging is a powerful technology for remotely inferring the material properties of the objects in a scene of interest with much more accuracy than is possible with conventional three-color images.
Abstract: Hyperspectral imaging is a powerful technology for remotely inferring the material properties of the objects in a scene of interest. Hyperspectral images consist of spatial maps of light intensity variation across a large number of spectral bands or wavelengths; alternatively, they can be thought of as a measurement of the spectrum of light transmitted or reflected from each spatial location in a scene. Because chemical elements have unique spectral signatures, observing the spectra at a high spatial and spectral resolution provides information about the material properties of the scene with much more accuracy than is possible with conventional three-color images. As a result, hyperspectral imaging is used in a variety of important applications, including remote sensing, astronomical imaging, and fluorescence microscopy.

Journal ArticleDOI
TL;DR: This overview article presents ICA, and then its generalization to multiple data sets, IVA, both using mutual information rate, and presents conditions for the identifiability of the given linear mixing model and derives the performance bounds.
Abstract: Starting with a simple generative model and the assumption of statistical independence of the underlying components, independent component analysis (ICA) decomposes a given set of observations by making use of the diversity in the data, typically in terms of statistical properties of the signal. Most of the ICA algorithms introduced to date have considered one of two types of diversity: non-Gaussianity, i.e., higher-order statistics (HOS), or sample dependence. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, dependence across multiple data sets, for achieving an independent decomposition jointly across multiple data sets. Finally, both ICA and IVA, when implemented in the complex domain, enjoy the addition of yet another type of diversity, noncircularity of the sources (the underlying components). Mutual information rate provides a unifying framework such that all these statistical properties (types of diversity) can be jointly taken into account for achieving the independent decomposition. Most of the ICA methods developed to date can be cast as special cases under this umbrella, as well as the more recently developed IVA methods. In addition, this formulation allows us to make use of maximum likelihood theory to study large sample properties of the estimator, derive the Cramér-Rao lower bound (CRLB), and determine the conditions for the identifiability of the ICA and IVA models. In this overview article, we first present ICA, and then its generalization to multiple data sets, IVA, both using mutual information rate, present conditions for the identifiability of the given linear mixing model, and derive the performance bounds. We address how various methods fall under this umbrella and give examples of performance for a few sample algorithms compared with the performance bound. We then discuss the importance of approaching the performance bound depending on the goal, and use medical image analysis as the motivating example.
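
For readers who want a running starting point, here is a plain ICA separation using scikit-learn's FastICA on two synthetic mixed sources (a generic illustration only; it does not implement the mutual-information-rate framework or the IVA extension developed in the article):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]   # two (approximately) independent sources
    A = np.array([[1.0, 0.5],
                  [0.4, 1.0]])                          # unknown mixing matrix
    X = S @ A.T                                         # observed mixtures, one per column

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)     # estimated sources, recovered up to scaling and permutation
    A_hat = ica.mixing_              # estimated mixing matrix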

Journal ArticleDOI
TL;DR: Approximate low-rank matrix and tensor factorizations play fundamental roles in enhancing the data and extracting latent (hidden) components in model reduction, clustering, feature extraction, classification, and blind source separation applications.
Abstract: A common thread in various approaches for model reduction, clustering, feature extraction, classification, and blind source separation (BSS) is to represent the original data by a lower-dimensional approximation obtained via matrix or tensor (multiway array) factorizations or decompositions. The notion of matrix/tensor factorizations arises in a wide range of important applications and each matrix/tensor factorization makes different assumptions regarding component (factor) matrices and their underlying structures. So choosing the appropriate one is critical in each application domain. Approximate low-rank matrix and tensor factorizations play fundamental roles in enhancing the data and extracting latent (hidden) components.
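
For the matrix case, the canonical example is the truncated SVD, which gives the best rank-r approximation in the least-squares sense; the synthetic data below are an assumption, and each factorization discussed in the article imposes its own, different structure on the factors:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))   # data with 8 latent components
    X = X + 0.01 * rng.standard_normal(X.shape)                        # small perturbation

    r = 8
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :r] * s[:r]) @ Vt[:r]          # best rank-r approximation (Eckart-Young)

    print(np.linalg.norm(X - X_r) / np.linalg.norm(X))   # small: the latent structure is captured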

Journal ArticleDOI
TL;DR: This article examines the various methodologies and extensions that make up the family of source separation approaches exploiting nonnegativity in their parameters and presents them under a unified framework.
Abstract: Source separation models that make use of nonnegativity in their parameters have been gaining increasing popularity in the last few years, spawning a significant number of publications on the topic. Although these techniques are conceptually similar to other matrix decompositions, they are surprisingly more effective in extracting perceptually meaningful sources from complex mixtures. In this article, we will examine the various methodologies and extensions that make up this family of approaches and present them under a unified framework. We will begin with a short description of the basic concepts and in the subsequent sections we will delve in more details and explore some of the latest extensions.
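
A minimal sketch of the basic model with the classical multiplicative updates for the Euclidean cost (the synthetic nonnegative data and the chosen rank are assumptions; the article's extensions cover other divergences, constraints, and variants):

    import numpy as np

    rng = np.random.default_rng(0)
    F, T, K = 40, 200, 3                           # e.g., frequency bins, time frames, components
    V = rng.random((F, K)) @ rng.random((K, T))    # nonnegative data, e.g., a magnitude spectrogram

    # Nonnegative matrix factorization V ~ W @ H with W, H >= 0 (multiplicative updates).
    W, H = rng.random((F, K)), rng.random((K, T))
    for _ in range(200):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small reconstruction error; W, H stay nonnegative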

Journal ArticleDOI
TL;DR: A comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity and estimating the parameters of interest from a collection of similar patches.
Abstract: Most current synthetic aperture radar (SAR) systems offer high-resolution images featuring polarimetric, interferometric, multifrequency, multiangle, or multidate information. SAR images, however, suffer from strong fluctuations due to the speckle phenomenon inherent to coherent imagery. Hence, all derived parameters display strong signal-dependent variance, preventing the full exploitation of such a wealth of information. Even with the abundance of despeckling techniques proposed over the last three decades, there is still a pressing need for new methods that can handle this variety of SAR products and efficiently eliminate speckle without sacrificing the spatial resolution. Recently, patch-based filtering has emerged as a highly successful concept in image processing. By exploiting the redundancy between similar patches, it succeeds in suppressing most of the noise with good preservation of texture and thin structures. Extensions of patch-based methods to speckle reduction and joint exploitation of multichannel SAR images (interferometric, polarimetric, or PolInSAR data) have led to the best denoising performance in radar imaging to date. We give a comprehensive survey of patch-based nonlocal filtering of SAR images, focusing on the two main ingredients of the methods: measuring patch similarity and estimating the parameters of interest from a collection of similar patches.
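
To illustrate only the patch-based principle, here is a plain nonlocal-means sketch for additive noise (an assumption made for simplicity; SAR-oriented methods, as the article details, replace the Euclidean patch distance with similarity criteria matched to speckle statistics and estimate radar-specific parameters from the selected patches):

    import numpy as np

    def nl_means(noisy, patch=3, search=9, h=0.15):
        """Plain nonlocal means: each pixel becomes a weighted average of pixels
        whose surrounding patches resemble its own patch."""
        p, s = patch // 2, search // 2
        pad = np.pad(noisy, p + s, mode='reflect')
        out = np.zeros_like(noisy)
        for i in range(noisy.shape[0]):
            for j in range(noisy.shape[1]):
                ic, jc = i + p + s, j + p + s
                ref = pad[ic - p:ic + p + 1, jc - p:jc + p + 1]
                w_sum = v_sum = 0.0
                for di in range(-s, s + 1):
                    for dj in range(-s, s + 1):
                        cand = pad[ic + di - p:ic + di + p + 1, jc + dj - p:jc + dj + p + 1]
                        w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)   # patch similarity
                        w_sum += w
                        v_sum += w * pad[ic + di, jc + dj]
                out[i, j] = v_sum / w_sum
        return out

    rng = np.random.default_rng(0)
    clean = np.kron(rng.random((6, 6)), np.ones((8, 8)))        # piecewise-constant 48x48 image
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    denoised = nl_means(noisy)
    print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())   # the error drops after filtering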

Journal ArticleDOI
TL;DR: Audio is a domain where signal separation has long been considered as a fascinating objective, potentially offering a wide range of new possibilities and experiences in professional and personal contexts, by better taking advantage of audio material and finely analyzing complex acoustic scenes.
Abstract: Audio is a domain where signal separation has long been considered as a fascinating objective, potentially offering a wide range of new possibilities and experiences in professional and personal contexts, by better taking advantage of audio material and finely analyzing complex acoustic scenes. It has thus always been a major area for research in signal separation and an exciting challenge for industrial applications.

Journal ArticleDOI
TL;DR: The sparse Fourier transform (SFT) addresses the big data setting by computing a compressed Fouriertransform using only a subset of the input data, in time smaller than the data set size.
Abstract: The discrete Fourier transform (DFT) is a fundamental component of numerous computational techniques in signal processing and scientific computing. The most popular means of computing the DFT is the fast Fourier transform (FFT). However, with the emergence of big data problems, in which the size of the processed data sets can easily exceed terabytes, the "fast" in FFT is often no longer fast enough. In addition, in many big data applications it is hard to acquire a sufficient amount of data to compute the desired Fourier transform in the first place. The sparse Fourier transform (SFT) addresses the big data setting by computing a compressed Fourier transform using only a subset of the input data, in time smaller than the data set size. The goal of this article is to survey these recent developments, explain the basic techniques with examples and applications in big data, demonstrate tradeoffs in empirical performance of the algorithms, and discuss the connection between the SFT and other techniques for massive data analysis such as streaming algorithms and compressive sensing.
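
The bucketization idea at the heart of many SFT algorithms can be sketched in a few lines of numpy; this is a didactic illustration under strong assumptions (the active frequencies are chosen so that no two fall in the same bucket, whereas practical SFT algorithms use random permutations, filters, and repetitions to isolate coefficients with high probability):

    import numpy as np

    N, B = 1024, 32                              # signal length and number of buckets (B divides N)
    step = N // B
    freqs = np.array([3, 100, 407, 850])         # 4-sparse spectrum, chosen to avoid bucket collisions
    amps = np.array([1.0, 0.7, -0.5, 0.9])
    X = np.zeros(N, dtype=complex)
    X[freqs] = amps
    x = np.fft.ifft(X) * N                       # time-domain signal with a sparse spectrum

    # Bucketization: a B-point FFT of every step-th time sample aliases all frequencies
    # that are congruent mod B into one bucket, touching only B of the N samples.
    Y0 = np.fft.fft(x[0::step])                  # Y0[b] = B * sum of X[k] over k = b (mod B)
    Y1 = np.fft.fft(x[1::step])                  # same buckets, computed from samples shifted by one

    # In a bucket dominated by a single coefficient, the phase of Y1/Y0 identifies the
    # frequency and Y0/B gives its value.
    for b in np.argsort(-np.abs(Y0))[:len(freqs)]:
        k = int(round(np.angle(Y1[b] / Y0[b]) / (2 * np.pi) * N)) % N
        print(k, np.round(Y0[b] / B, 3))         # recovers the active (frequency, value) pairs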

Journal ArticleDOI
TL;DR: Despite the absence of official 5G standards, 5G mobile networks offering a data rate of 1 Gbit/s per user anywhere are expected to be deployed beyond 2020.
Abstract: It is anticipated that mobile data traffic will grow 1,000 times higher from 2010 to 2020, roughly doubling every year [1]. This increasing demand for data in next-generation mobile broadband networks will lead to many challenges for system engineers and service providers. To address these issues and meet the stringent demands in coming years, innovative and practical solutions should be identified that are able to provide higher spectral efficiency, better performance, and broader coverage. Next generations of wireless cellular networks, which are known as fifth generation (5G) or beyond fourth generation (B4G) wireless networks, are expected to produce higher data rates for mobile subscribers in the order of tens of gigabits per second (Gbit/s) and support a wide range of services. Despite the absence of official 5G standards, 5G mobile networks offering a data rate of 1 Gbit/s per user anywhere are expected to be deployed beyond 2020.