
Showing papers in "IEEE Journal of Selected Topics in Signal Processing in 2015"


Journal ArticleDOI
TL;DR: A dynamic mirror descent framework is described which addresses the challenge of adapting to nonstationary environments arising in real-world problems, yielding low theoretical regret bounds and accurate, adaptive, and computationally efficient algorithms which are applicable to broad classes of problems.
Abstract: High-velocity streams of high-dimensional data pose significant “big data” analysis challenges across a range of applications and settings. Online learning and online convex programming play a significant role in the rapid recovery of important or anomalous information from these large datastreams. While recent advances in online learning have led to novel and rapidly converging algorithms, these methods are unable to adapt to nonstationary environments arising in real-world problems. This paper describes a dynamic mirror descent framework which addresses this challenge, yielding low theoretical regret bounds and accurate, adaptive, and computationally efficient algorithms which are applicable to broad classes of problems. The methods are capable of learning and adapting to an underlying and possibly time-varying dynamical model. Empirical results in the context of dynamic texture analysis, solar flare detection, sequential compressed sensing of a dynamic scene, traffic surveillance, tracking self-exciting point processes and network behavior in the Enron email corpus support the core theoretical findings.
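As a rough illustration of the update the abstract describes, here is a minimal Python sketch of dynamic mirror descent with a Euclidean mirror map, so the mirror step reduces to a plain gradient step; the function names and the toy drifting-target setup are illustrative assumptions, not taken from the paper.

    import numpy as np

    def dynamic_mirror_descent(grad, Phi, x0, T, eta=0.1):
        """Sketch of dynamic mirror descent with a Euclidean mirror map.
        grad(t, x): gradient of the loss revealed at time t.
        Phi(t, x):  assumed dynamical model advancing the estimate."""
        x = x0.copy()
        trajectory = []
        for t in range(T):
            x_hat = x - eta * grad(t, x)   # mirror-descent (gradient) correction
            x = Phi(t, x_hat)              # advance with the dynamical model
            trajectory.append(x.copy())
        return np.array(trajectory)

    # Toy usage: track a drifting target with quadratic losses.
    rng = np.random.default_rng(0)
    drift = np.array([0.05, -0.02])
    obs = [t * drift + 0.1 * rng.standard_normal(2) for t in range(200)]
    grad = lambda t, x: 2 * (x - obs[t])   # gradient of ||x - y_t||^2
    Phi = lambda t, x: x + drift           # assumed (known) dynamics
    est = dynamic_mirror_descent(grad, Phi, np.zeros(2), 200)

When the dynamical model Phi is accurate, the regret bounds in the paper scale with how little the true sequence deviates from it, rather than with the total variation of the sequence itself.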

220 citations


Journal ArticleDOI
TL;DR: This paper develops a general cognitive radar framework for a radar system engaged in target tracking that includes the higher-level tracking processor and specifies the feedback mechanism and optimization criterion used to obtain the next set of sensor data.
Abstract: Most radar systems employ a feed-forward processing chain in which they first perform some low-level processing of received sensor data to obtain target detections and then pass the processed data on to some higher-level processor such as a tracker, which extracts information to achieve a system objective. System performance can be improved using adaptation between the information extracted from the sensor/processor and the design and transmission of subsequent illuminating waveforms. As such, cognitive radar systems offer much promise. In this paper, we develop a general cognitive radar framework for a radar system engaged in target tracking. The model includes the higher-level tracking processor and specifies the feedback mechanism and optimization criterion used to obtain the next set of sensor data. Both target detection (track initiation/termination) and tracking (state estimation) are addressed. By separating the general principles from the specific application and implementation details, our formulation provides a flexible framework applicable to the general tracking problem. We demonstrate how the general framework may be specialized for a particular problem using a distributed sensor model in which system resources (observation time on each sensor) are allocated to optimize tracking performance. The cognitive radar system is shown to offer significant performance gains over a standard feed-forward system.

208 citations


Journal ArticleDOI
TL;DR: A nonzero privacy rate is possible over additive white Gaussian noise channels and Rayleigh single input-single output (SISO) and multiple input-multiple output (MIMO) channels with infinite samples when an eavesdropper employs a radiometer detector and has uncertainty about his noise variance.

Abstract: In this paper we consider the problem of achieving a positive error-free communications rate without being detected by an eavesdropper—we coin this the privacy rate. Specifically, we analyze the privacy rate over additive white Gaussian noise (AWGN) channels with finite and infinite numbers of samples and Rayleigh single input-single output (SISO) and multiple input-multiple output (MIMO) channels with infinite samples when an eavesdropper employs a radiometer detector and has uncertainty about his noise variance. Leveraging recent results on the phenomenon of a signal-to-noise ratio (SNR) wall when there is eavesdropper noise power measurement uncertainty, we show that a nonzero privacy rate is possible. We also show that in this scenario, the detector should not necessarily take as many samples as possible.
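For intuition, the commonly cited SNR-wall expression for an energy detector whose noise power is known only within a factor rho can be evaluated in a few lines; the formula below follows the Tandra-Sahai robustness analysis that the paper leverages, and the exact convention should be checked against the paper.

    import numpy as np

    def radiometer_snr_wall(rho):
        # Noise power known only to lie in [sigma2/rho, rho*sigma2]:
        # below this SNR, no number of samples makes the radiometer robust.
        return (rho**2 - 1) / rho

    for db in (0.5, 1.0, 2.0):
        rho = 10 ** (db / 10)
        wall = radiometer_snr_wall(rho)
        print(f"{db} dB uncertainty -> SNR wall = {wall:.3f}"
              f" ({10 * np.log10(wall):.1f} dB)")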

189 citations


Journal ArticleDOI
TL;DR: The versatility of the SDF framework is demonstrated by means of four diverse applications, which are all solved entirely within Tensorlab's DSL.
Abstract: We present structured data fusion (SDF) as a framework for the rapid prototyping of knowledge discovery in one or more possibly incomplete data sets. In SDF, each data set—stored as a dense, sparse, or incomplete tensor—is factorized with a matrix or tensor decomposition. Factorizations can be coupled, or fused, with each other by indicating which factors should be shared between data sets. At the same time, factors may be imposed to have any type of structure that can be constructed as an explicit function of some underlying variables. With the right choice of decomposition type and factor structure, even well-known matrix factorizations such as the eigenvalue decomposition, singular value decomposition and QR factorization can be computed with SDF. A domain specific language (DSL) for SDF is implemented as part of the software package Tensorlab, with which we offer a library of tensor decompositions and factor structures to choose from. The versatility of the SDF framework is demonstrated by means of four diverse applications, which are all solved entirely within Tensorlab’s DSL.
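Tensorlab's DSL is MATLAB-based; purely as a language-neutral illustration of the factor-coupling idea (not Tensorlab's API), here is a small NumPy sketch that fuses two matrices by sharing one factor inside an alternating least-squares loop.

    import numpy as np

    rng = np.random.default_rng(1)
    A_true = rng.standard_normal((50, 3))
    B_true = rng.standard_normal((40, 3))
    C_true = rng.standard_normal((30, 3))
    X, Y = A_true @ B_true.T, A_true @ C_true.T   # two data sets sharing factor A

    A, B, C = (rng.standard_normal(s) for s in ((50, 3), (40, 3), (30, 3)))
    for _ in range(100):                           # alternating least squares
        B = np.linalg.lstsq(A, X, rcond=None)[0].T
        C = np.linalg.lstsq(A, Y, rcond=None)[0].T
        # The shared ("fused") factor A is updated against both data sets jointly.
        M = np.vstack([B, C])
        Z = np.hstack([X, Y])
        A = np.linalg.lstsq(M, Z.T, rcond=None)[0].T
    print(np.linalg.norm(X - A @ B.T), np.linalg.norm(Y - A @ C.T))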

185 citations


Journal ArticleDOI
Xin Yuan, Tsung-Han Tsai, Ruoyu Zhu, Patrick Llull, David J. Brady, Lawrence Carin
TL;DR: Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.
Abstract: A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.

170 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian fusion technique for remotely sensed multi-band images is presented, where the observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics.
Abstract: This paper presents a Bayesian fusion technique for remotely sensed multi-band images. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical considerations is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimensional distribution, a Hamiltonian Monte Carlo step is introduced within a Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques.
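The HMC-within-Gibbs ingredient can be illustrated with a generic single Hamiltonian Monte Carlo step; this is a textbook leapfrog sketch, not the paper's specific sampler for the fusion posterior, and all names are illustrative.

    import numpy as np

    def hmc_step(x, log_post, grad_log_post, eps=0.05, n_leapfrog=20, rng=None):
        """One HMC step targeting exp(log_post), via leapfrog integration."""
        rng = np.random.default_rng() if rng is None else rng
        p = rng.standard_normal(x.shape)                  # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_post(x_new)         # half kick
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new                          # drift
            p_new += eps * grad_log_post(x_new)           # full kick
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(x_new)         # final half kick
        log_accept = (log_post(x_new) - 0.5 * p_new @ p_new
                      - log_post(x) + 0.5 * p @ p)        # Metropolis correction
        return x_new if np.log(rng.uniform()) < log_accept else x

Within a Gibbs sweep, a step like this replaces the conditional draw for the block of variables whose conditional is hard to sample directly.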

162 citations


Journal ArticleDOI
TL;DR: This paper provides an overview of the MPEG-H 3D Audio project and technology and an assessment of the system capabilities and performance.
Abstract: The science and art of Spatial Audio is concerned with the capture, production, transmission, and reproduction of an immersive sound experience. Recently, a new generation of spatial audio technology has been introduced that employs elevated and lowered loudspeakers and thus surpasses previous ‘surround sound’ technology without such speakers in terms of listener immersion and potential for spatial realism. In this context, the ISO/MPEG standardization group has started the MPEG-H 3D Audio development effort to facilitate high-quality bitrate-efficient production, transmission and reproduction of such immersive audio material. The underlying format is designed to provide universal means for carriage of channel-based, object-based and Higher Order Ambisonics based input. High quality reproduction is provided for many output formats from 22.2 and beyond down to 5.1, stereo and binaural reproduction—independently of the original encoding format, thus overcoming the incompatibility between various 3D formats. This paper provides an overview of the MPEG-H 3D Audio project and technology and an assessment of the system capabilities and performance.

147 citations


Journal ArticleDOI
TL;DR: An optimization procedure is developed that monotonically improves the worst-case signal-to-interference-plus-noise ratio (SINR) at the output of the filter bank, taken as the figure of merit, under both a similarity and an energy constraint on the transmit signal.

Abstract: Assuming unknown target Doppler shift, we focus on robust joint design of the transmit radar waveform and receive Doppler filter bank in the presence of signal-dependent interference. We consider the worst-case signal-to-interference-plus-noise ratio (SINR) at the output of the filter bank as the figure of merit to optimize under both a similarity and an energy constraint on the transmit signal. Based on a suitable reformulation of the original non-convex max-min optimization problem, we develop an optimization procedure which monotonically improves the worst-case SINR and converges to a stationary point. Each iteration of the algorithm involves both a convex and a generalized fractional programming problem, which can be globally solved via the generalized Dinkelbach's procedure with polynomial computational complexity. Finally, at the analysis stage, we assess the performance of the new technique versus some counterparts available in the open literature.
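As a toy illustration of the Dinkelbach-type iteration invoked at each step, the following sketch solves a one-dimensional fractional program by grid search; in the paper the inner step is a convex subproblem over waveforms, and all names here are illustrative.

    import numpy as np

    def dinkelbach(N, D, candidates, tol=1e-9, max_iter=100):
        """Dinkelbach's procedure for max_x N(x)/D(x) with D > 0.
        The inner argmax is grid search purely for illustration."""
        lam = 0.0
        for _ in range(max_iter):
            vals = N(candidates) - lam * D(candidates)
            x = candidates[np.argmax(vals)]     # inner subproblem
            F = N(x) - lam * D(x)               # optimal auxiliary value
            lam = N(x) / D(x)                   # parametric update
            if abs(F) < tol:                    # F -> 0 at the optimum
                break
        return x, lam

    grid = np.linspace(0.0, 3.0, 10001)
    x_opt, ratio = dinkelbach(lambda x: -x**2 + 4*x + 1, lambda x: x + 1, grid)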

144 citations


Journal ArticleDOI
TL;DR: A novel space-time adaptive processing (STAP) radar framework is established with FDA as the transmit array, and a secondary range dependence compensation (SRDC) method is proposed, demonstrating the superiority of the approach in clutter suppression under range-ambiguous clutter scenarios.

Abstract: Airborne radar arrays oriented toward any direction other than sidelooking cause the range dependence of clutter. It is difficult to handle this range dependence problem in the presence of range ambiguity. Frequency diverse array (FDA) employs a small frequency increment across the array elements, which makes the spatial frequency vary with slant range. Thus, it provides extra degrees-of-freedom in the range domain. In this paper, a novel framework of space-time adaptive processing (STAP) radar is established with FDA as the transmit array. In the FDA-STAP radar, the range ambiguous clutter can be discriminated in the spatial frequency domain. This is due to the fact that the spatial frequencies of the clutter returns from different slant ranges are distinguishable even though the range ambiguous clutter is within the same range bin. At the same time, the FDA-STAP radar introduces a secondary range dependence of clutter. To alleviate this secondary range dependence, a secondary range dependence compensation (SRDC) method is proposed for two cases: 1) the target is assumed in the unambiguous range region and 2) the target is assumed in the ambiguous range region. After the range ambiguous clutter is separated in the spatial frequency domain by using the proposed SRDC method, the traditional clutter compensation approach is applied to further align the spectrum distribution of clutter. Our simulation results demonstrate the superiority of the proposed approach in clutter suppression under range ambiguous clutter scenarios.

135 citations


Journal ArticleDOI
TL;DR: It is shown that the staircase mechanism is the optimal noise adding mechanism in a universal context, subject to a conjectured technical lemma, which is also proved to be true for one- and two-dimensional data.

Abstract: Adding Laplacian noise is a standard approach in differential privacy to sanitize numerical data before releasing it. In this paper, we propose an alternative noise adding mechanism: the staircase mechanism, which is a geometric mixture of uniform random variables. The staircase mechanism can replace the Laplace mechanism in each instance in the literature and, for the same level of differential privacy, the performance in each instance improves; the improvement is particularly stark in medium-low privacy regimes. We show that the staircase mechanism is the optimal noise adding mechanism in a universal context, subject to a conjectured technical lemma (which we also prove to be true for one- and two-dimensional data).
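The staircase noise can be sampled directly as the geometric mixture of uniforms the abstract describes. The sketch below follows my reading of the sampling procedure in the paper (random sign, geometric stair index, uniform position within a stair), with the l1-optimal width parameter gamma = 1/(1 + e^{eps/2}); the parameter conventions should be checked against the paper before use.

    import numpy as np

    def staircase_noise(eps, delta, gamma=None, size=1, rng=None):
        """Sample staircase-mechanism noise for sensitivity `delta` and
        privacy level `eps` (conventions follow my reading of the paper)."""
        rng = np.random.default_rng() if rng is None else rng
        b = np.exp(-eps)
        if gamma is None:
            gamma = 1.0 / (1.0 + np.exp(eps / 2))          # l1-optimal width
        S = rng.choice([-1.0, 1.0], size=size)             # random sign
        G = rng.geometric(1.0 - b, size=size) - 1          # P(G=k) = (1-b) b^k
        U = rng.uniform(size=size)                         # position within a stair
        B = rng.uniform(size=size) >= gamma / (gamma + (1 - gamma) * b)
        X = np.where(B, G + gamma + (1 - gamma) * U, G + gamma * U)
        return S * X * delta

    noisy_count = 42 + staircase_noise(eps=0.5, delta=1.0, size=1)[0]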

133 citations


Journal ArticleDOI
TL;DR: A tractable model for the range information as a function of wireless environment, signal features, and energy detection techniques is established and serves as a cornerstone for the design and analysis of wideband ranging systems.
Abstract: Wideband ranging is essential for numerous emerging applications that rely on accurate location awareness. The quality of range information, which depends on network intrinsic properties and signal processing techniques, affects the localization accuracy. A popular class of ranging techniques is based on energy detection owing to its low-complexity implementation. This paper establishes a tractable model for the range information as a function of wireless environment, signal features, and energy detection techniques. Such a model serves as a cornerstone for the design and analysis of wideband ranging systems. Based on the proposed model, we develop practical soft-decision and hard-decision algorithms. A case study for ranging and localization systems operating in a wireless environment is presented. Sample-level simulations validate our theoretical results.
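A hard-decision ranging rule of the kind the paper models can be sketched in a few lines: estimate the time of arrival as the first energy bin crossing a threshold (function and parameter names are illustrative, not from the paper).

    import numpy as np

    def toa_leading_edge(energy_bins, bin_width, threshold):
        # First-crossing (leading-edge) hard decision on integrated energy.
        crossings = np.flatnonzero(energy_bins > threshold)
        return None if crossings.size == 0 else crossings[0] * bin_width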

Journal ArticleDOI
TL;DR: Experiments show that the proposed summarization method successfully reduces the video content while keeping important events, and a power analysis of the system shows that a significant amount of energy can be saved.
Abstract: Battery lifetime is critical for wireless video sensors. To enable battery-powered wireless video sensors, low-power design is required. In this paper, we consider applying multi-view summarization to wireless video sensors to remove redundant contents such that the compression and transmission power can be reduced. A low-complexity online multi-view video summarization scheme is proposed. Experiments show that the proposed summarization method successfully reduces the video content while keeping important events. A power analysis of the system also shows that a significant amount of energy can be saved.

Journal ArticleDOI
TL;DR: Methods are described that learn the projection of data and find the sparse and/or low-rank coefficients in the low-dimensional latent space, and then assign cluster labels by applying spectral clustering to a similarity matrix built from these representations.
Abstract: We propose three novel algorithms for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe methods that learn the projection of data and find the sparse and/or low-rank coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these representations. Efficient optimization methods are proposed and their non-linear extensions based on kernel methods are presented. Various experiments show that the proposed methods perform better than many competitive subspace clustering methods.
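For reference, the classical pipeline that the proposed methods extend (sparse self-representation followed by spectral clustering, without the learned projection) can be sketched with scikit-learn; this is a baseline SSC-style sketch under assumed default parameters, not the paper's joint algorithms.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.cluster import SpectralClustering

    def sparse_subspace_cluster(X, n_clusters, alpha=0.01):
        """Code each column of X against the others, then spectrally
        cluster the symmetrized coefficient magnitudes."""
        n = X.shape[1]
        C = np.zeros((n, n))
        for i in range(n):
            idx = np.arange(n) != i
            model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            model.fit(X[:, idx], X[:, i])      # sparse self-representation
            C[idx, i] = model.coef_
        W = np.abs(C) + np.abs(C).T            # similarity matrix
        return SpectralClustering(n_clusters,
                                  affinity='precomputed').fit_predict(W)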

Journal ArticleDOI
TL;DR: A methodology for online learning of square sparsifying transforms is developed and the proposed transform learning algorithms are shown to have a much lower computational cost than online synthesis dictionary learning.
Abstract: Techniques exploiting the sparsity of signals in a transform domain or dictionary have been popular in signal processing. Adaptive synthesis dictionaries have been shown to be useful in applications such as signal denoising and medical image reconstruction. More recently, the learning of sparsifying transforms for data has received interest. The sparsifying transform model allows for cheap and exact computations. In this paper, we develop a methodology for online learning of square sparsifying transforms. Such online learning can be particularly useful when dealing with big data, and for signal processing applications such as real-time sparse representation and denoising. The proposed transform learning algorithms are shown to have a much lower computational cost than online synthesis dictionary learning. In practice, the sequential learning of a sparsifying transform typically converges faster than batch mode transform learning. Preliminary experiments show the usefulness of the proposed schemes for sparse representation and denoising.
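A simplified batch sketch conveys the two alternating steps of transform learning. The paper's algorithms are online and handle general square, well-conditioned transforms; this sketch restricts W to be orthonormal so the transform update has a closed Procrustes form, which is an assumption for brevity.

    import numpy as np

    def orthonormal_transform_learning(X, sparsity, n_iter=30, rng=None):
        """Alternate hard-thresholded sparse coding Z = thresh(W X)
        and an orthonormal Procrustes update of the transform W."""
        rng = np.random.default_rng() if rng is None else rng
        n = X.shape[0]
        W = np.linalg.qr(rng.standard_normal((n, n)))[0]
        for _ in range(n_iter):
            Z = W @ X
            t = np.sort(np.abs(Z), axis=0)[-sparsity]   # s-th largest per column
            Z[np.abs(Z) < t] = 0.0                      # sparse coding step
            U, _, Vt = np.linalg.svd(Z @ X.T)           # transform update:
            W = U @ Vt                                  # argmin_W ||W X - Z||_F
        return W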

Journal ArticleDOI
TL;DR: This paper considers a private data collecting scenario where a data collector buys data from multiple data owners and employs anonymization techniques to protect data owners' privacy, and proposes a contract theoretic approach for the data collector to deal with the trade-off between privacy protection and data utility.

Abstract: With the growing popularity of data mining, privacy has become an issue of increasing importance. Privacy can be seen as a special type of goods, in the sense that it can be traded by the owner for incentives. In this paper, we consider a private data collecting scenario where a data collector buys data from multiple data owners and employs anonymization techniques to protect data owners' privacy. Anonymization causes a decline of data utility; therefore, the data owner can only sell his data at a lower price if his privacy is better protected. Can one pursue higher data utility while maintaining acceptable privacy? How to balance the trade-off between privacy protection and data utility is an important question for big data. Considering that different data owners treat privacy differently, and their privacy preferences are unknown to the collector, we propose a contract theoretic approach for the data collector to deal with the trade-off. By designing an optimal contract, the collector can make rational decisions on how to pay the data owners and, more importantly, how he should protect the owners' privacy. We show that when the collector requires a large amount of data, he should ask data owners who care less about privacy to provide as much data as possible. We also find that whenever the collector requires higher utility of data or the data becomes less profitable, the collector should provide stronger protection of the owners' privacy. Performance of the proposed contract is evaluated by both numerical simulations and real data experiments.

Journal ArticleDOI
TL;DR: This work considers a multiuser setup whereby different users have (possibly different) delay QoS constraints, and derives the resource allocation policy that maximizes the sum video quality and applies to any quality metric with concave rate-quality mapping.
Abstract: Real-time video demands quality-of-service (QoS) guarantees such as delay bounds for end-user satisfaction. Furthermore, the tolerable delay varies depending on the use case such as live streaming or two-way video conferencing. Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are difficult to guarantee. Instead, we propose providing statistical delay guarantees using the concept of effective capacity. We consider a multiuser setup whereby different users have (possibly different) delay QoS constraints. We derive the resource allocation policy that maximizes the sum video quality and applies to any quality metric with concave rate-quality mapping. We show that the optimal operating point per user is such that the rate-distortion slope is the inverse of the supported video source rate per unit bandwidth, a key metric we refer to as the source spectral efficiency. We extend the resource allocation policy to capture video quality-driven adaptive user-subcarrier assignment in wideband channels as well as capture the impact of adaptive modulation and coding. We also solve the alternative problem of fairness-based resource allocation whereby the objective is to maximize the minimum video quality across users. Finally, we derive user admission and scheduling policies that enable selecting a maximal user subset such that all selected users can meet their statistical delay requirement. Results show that video users with differentiated QoS requirements can achieve similar video quality with vastly different resource requirements. Thus, QoS-aware scheduling and resource allocation enable supporting significantly more users under the same resource constraints.

Journal ArticleDOI
TL;DR: The formal security proof and extensive performance evaluation demonstrate that the proposed PPDM achieves a higher security level in the honest-but-curious model, with an optimized efficiency advantage over the state of the art in terms of both computational and communication overhead.

Abstract: E-healthcare systems have been increasingly facilitating health condition monitoring, disease modeling and early intervention, and evidence-based medical treatment by medical text mining and image feature extraction. Owing to the resource constraints of wearable mobile devices, it is necessary to outsource the frequently collected personal health information (PHI) to the cloud. Unfortunately, delegating both storage and computation to an untrusted entity brings a series of security and privacy issues. Existing work has mainly focused on fine-grained privacy-preserving static medical text access and analysis, which can hardly accommodate dynamic health condition fluctuations or medical image analysis. In this paper, a secure and efficient privacy-preserving dynamic medical text mining and image feature extraction scheme (PPDM) for cloud-assisted e-healthcare systems is proposed. First, an efficient privacy-preserving fully homomorphic data aggregation scheme is proposed, which serves as the basis for our proposed PPDM. Then, outsourced disease modeling and early intervention are achieved, respectively, by devising an efficient privacy-preserving function correlation matching scheme (PPDM1) for dynamic medical text mining and designing a privacy-preserving medical image feature extraction scheme (PPDM2). Finally, the formal security proof and extensive performance evaluation demonstrate that our proposed PPDM achieves a higher security level (i.e., information-theoretic security for input privacy and adaptive chosen ciphertext attack (CCA2) security for output privacy) in the honest-but-curious model, with an optimized efficiency advantage over the state of the art in terms of both computational and communication overhead.

Journal ArticleDOI
TL;DR: Real-time streaming of scalable video coded (SVC) videos in vehicular networks is investigated, and novel cooperative vehicle-to-vehicle (V2V) communication methods are proposed that lead to enhanced QoS and QoE compared to the non-collaborative scenarios.

Abstract: Real-time streaming of scalable video coded (SVC) videos in vehicular networks is investigated, and novel cooperative vehicle-to-vehicle (V2V) communication methods are proposed. The proposed techniques are based on grouping the moving vehicles into cooperative clusters. Within each cluster, the long term evolution (LTE) system is used to send the data over long range cellular links to a selected cluster head, which multicasts the received video over IEEE 802.11p to vehicles in its cluster. Error concealment techniques are used to compensate for the loss of frames that are not delivered on time for real-time video streaming. In addition, resource allocation to select the best subcarriers for LTE transmission is performed in order to enhance the received video quality. Moreover, metrics related to both quality of service (QoS) and quality of experience (QoE) are studied and analyzed for various video sequences of different characteristics. The V2V video streaming techniques are extended to the case of vehicle-to-infrastructure (V2I) communications. Simulation results show that the proposed methods lead to enhanced QoS and QoE compared to the non-collaborative scenarios, and their performance tradeoffs compared to recent methods from the literature are discussed.

Journal ArticleDOI
TL;DR: This study designs a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit, and proves that the proposed greedy algorithm is robust to noise, including in its identification of the (unknown) number of endmembers, under a sufficiently low noise level.

Abstract: This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise, including in its identification of the (unknown) number of endmembers, under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.
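The successive projection algorithm (SPA), to which the greedy SD-MMV method is shown to relate, is compact enough to sketch directly; this is the textbook SPA under the pure-pixel assumption, not the paper's SOMP-based variant.

    import numpy as np

    def successive_projection(X, k):
        """SPA: greedily pick the column of maximum residual norm
        (a candidate pure pixel), then project it out and repeat."""
        R = X.astype(float).copy()
        indices = []
        for _ in range(k):
            j = int(np.argmax(np.sum(R**2, axis=0)))  # most energetic column
            u = R[:, j] / np.linalg.norm(R[:, j])
            R -= np.outer(u, u @ R)                   # deflate that direction
            indices.append(j)
        return indices                                # estimated endmember pixels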

Journal ArticleDOI
TL;DR: This paper presents several scalable surface reconstruction techniques to generate watertight meshes that preserve sharp features in the geometry common to buildings, and proposes a method to texture-map these models from captured camera imagery to produce photo-realistic models.
Abstract: 3D modeling of building architecture from mobile scanning is a rapidly advancing field. These models are used in virtual reality, gaming, navigation, and simulation applications. State-of-the-art scanning produces accurate point-clouds of building interiors containing hundreds of millions of points. This paper presents several scalable surface reconstruction techniques to generate watertight meshes that preserve sharp features in the geometry common to buildings. Our techniques can automatically produce high-resolution meshes that preserve the fine detail of the environment by performing a ray-carving volumetric approach to surface reconstruction. We present methods to automatically generate 2D floor plans of scanned building environments by detecting walls and room separations. These floor plans can be used to generate simplified 3D meshes that remove furniture and other temporary objects. We propose a method to texture-map these models from captured camera imagery to produce photo-realistic models. We apply these techniques to several data sets of building interiors, including multi-story datasets.

Journal ArticleDOI
TL;DR: It is demonstrated that good privacy properties can be achieved with limited distortion so as not to undermine the original purpose of the publicly released data, e.g., recommendations.
Abstract: We propose a practical methodology to protect a user's private data, when he wishes to publicly release data that is correlated with his private data, to get some utility. Our approach relies on a general statistical inference framework that captures the privacy threat under inference attacks, given utility constraints. Under this framework, data is distorted before it is released, according to a probabilistic privacy mapping. This mapping is obtained by solving a convex optimization problem, which minimizes information leakage under a distortion constraint. We address practical challenges encountered when applying this theoretical framework to real world data. On one hand, the design of optimal privacy mappings requires knowledge of the prior distribution linking private data and data to be released, which is often unavailable in practice. On the other hand, the optimization may become intractable when data assumes values in large alphabets, or is high-dimensional. Our work makes three major contributions. First, we provide bounds on the impact of a mismatched prior on the privacy-utility tradeoff. Second, we show how to reduce the optimization size by introducing a quantization step, and how to generate privacy mappings under quantization. Third, we evaluate our method on two datasets, including a new dataset that we collected, showing correlations between political convictions and TV viewing habits. We demonstrate that good privacy properties can be achieved with limited distortion so as not to undermine the original purpose of the publicly released data, e.g., recommendations.

Journal ArticleDOI
TL;DR: This work introduces auxiliary variable neurons and Lagrange neurons to solve the waveform design problem using the Lagrange programming neural network, analyzes the local stability conditions of the dynamic neuron model, and shows that the proposed algorithm is a competitive alternative for waveform design with unit modulus and arbitrary spectral shapes.

Abstract: To maximize the transmitted power available in active sensing, the probing waveform should be of constant modulus. On the other hand, in order to adapt to the increasingly crowded radio frequency spectrum and prevent mutual interference, there are also requirements on the waveform spectral shape. That is to say, the waveform must fulfill constraints in both the time and frequency domains. In this work, designing such waveforms is formulated as a nonlinear constrained optimization problem. By introducing auxiliary variable neurons and Lagrange neurons, we solve it using the Lagrange programming neural network. We also analyze the local stability conditions of the dynamic neuron model. Simulation results show that our proposed algorithm is a competitive alternative for waveform design with unit modulus and arbitrary spectral shapes.
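A much simpler alternating-projections baseline (not the paper's Lagrange programming neural network) makes the two constraint sets concrete: a spectral mask in the frequency domain and unit modulus in the time domain. The stop-band bin indices and iteration count here are illustrative assumptions.

    import numpy as np

    def alternating_projections_waveform(N, stop_bins, n_iter=500, rng=None):
        """Alternate between notching `stop_bins` in the spectrum and
        restoring unit modulus in time (error-reduction style)."""
        rng = np.random.default_rng() if rng is None else rng
        s = np.exp(1j * 2 * np.pi * rng.uniform(size=N))  # random unimodular start
        for _ in range(n_iter):
            S = np.fft.fft(s)
            S[stop_bins] = 0.0                  # project onto the spectral mask
            s = np.fft.ifft(S)
            s = np.exp(1j * np.angle(s))        # project back to unit modulus
        return s

    waveform = alternating_projections_waveform(256, stop_bins=range(60, 80))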

Journal ArticleDOI
TL;DR: A novel workflow that marries the freely available geographic information systems (GIS) data and the joint sparsity concept for TomoSAR inversion is proposed and highly accurate tomographic reconstruction is achieved using six interferograms only.
Abstract: With meter-resolution images delivered by modern synthetic aperture radar (SAR) satellites like TerraSAR-X and TanDEM-X, it is now possible to map urban areas from space at a very high level of detail using advanced interferometric techniques such as persistent scatterer interferometry and tomographic SAR inversion (TomoSAR); these multi-pass techniques, however, rely on a great number of images. We aim at precise TomoSAR reconstruction while significantly reducing the required number of images by incorporating a priori building knowledge into the estimation. In this paper, we propose a novel workflow that marries freely available geographic information systems (GIS) data (i.e., 2-D building footprints) with the joint sparsity concept for TomoSAR inversion. Experiments on bistatic TanDEM-X data stacks demonstrate the great potential of the proposed approach; e.g., highly accurate tomographic reconstruction is achieved using only six interferograms.

Journal ArticleDOI
TL;DR: This paper proposes a novel Fourier-assisted phase shifting (FAPS) method for accurate dynamic 3D sensing and proposes an efficient parallel spatial unwrapping strategy that embeds a sparse set of markers in the fringe patterns.
Abstract: Phase shifting profilometry (PSP) and Fourier transform profilometry (FTP) are two well-known fringe analysis methods for 3D sensing. PSP offers high accuracy but requires multiple images; FTP uses a single image but is limited in its accuracy. In this paper, we propose a novel Fourier-assisted phase shifting (FAPS) method for accurate dynamic 3D sensing. Our key observation is that the motion vulnerability of multi-shot PSP can be overcome through single-shot FTP, while the high accuracy of PSP is preserved. Moreover, to solve the phase ambiguity of complex scenes without additional images, we propose an efficient parallel spatial unwrapping strategy that embeds a sparse set of markers in the fringe patterns. Our dynamic 3D sensing system based on the above principles demonstrates superior performance over previous structured light techniques, including PSP, FTP, and Kinect.
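The standard N-step phase-shifting recovery that FAPS builds on is a short estimator, sketched below on synthetic fringes; the paper's contribution, using single-shot FTP to correct inter-frame motion, is not reproduced here, and the synthetic setup is illustrative.

    import numpy as np

    def psp_phase(frames):
        """N-step phase-shifting: frames[n] = A + B*cos(phi + 2*pi*n/N).
        Returns the wrapped phase, to be spatially unwrapped afterwards."""
        N = len(frames)
        n = np.arange(N).reshape(-1, 1, 1)
        s = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
        c = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
        return -np.arctan2(s, c)

    # Toy usage on a synthetic 4-step fringe set
    phi = np.linspace(0, 4 * np.pi, 256)[None, :] * np.ones((256, 1))
    frames = np.stack([128 + 100 * np.cos(phi + 2 * np.pi * n / 4)
                       for n in range(4)])
    wrapped = psp_phase(frames)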

Journal ArticleDOI
TL;DR: This paper analytically bounds the prediction performance loss of Social-Forecast compared to that obtained by an omniscient oracle and proves that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance.

Abstract: This paper presents a systematic online prediction method (Social-Forecast) that can accurately forecast the popularity of videos promoted by social media. Social-Forecast explicitly considers the dynamically changing and evolving propagation patterns of videos in social media when making popularity forecasts, thereby being situation and context aware. Social-Forecast aims to maximize the forecast reward, which is defined as a tradeoff between the popularity prediction accuracy and the timeliness with which a prediction is issued. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the prediction performance loss of Social-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance. In addition, we conduct extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing view-based approaches for popularity prediction (which are not context-aware) by more than 30% in terms of prediction rewards.

Journal ArticleDOI
TL;DR: In this article, a distributed least square reconstruction (DLSR) algorithm is proposed to recover the unknown signal iteratively, by allowing neighboring nodes to communicate with one another and make fast updates.
Abstract: The rapid development of signal processing on graphs provides a new perspective for processing large-scale data associated with irregular domains. In many practical applications, it is necessary to handle massive data sets through complex networks, in which most nodes have limited computing power. Designing efficient distributed algorithms is critical for this task. This paper focuses on the distributed reconstruction of a time-varying bandlimited graph signal based on observations sampled at a subset of selected nodes. A distributed least square reconstruction (DLSR) algorithm is proposed to recover the unknown signal iteratively, by allowing neighboring nodes to communicate with one another and make fast updates. DLSR uses a decay scheme to annihilate the out-of-band energy occurring in the reconstruction process, which is inevitably caused by the transmission delay in distributed systems. Proof of convergence and error bounds for DLSR are provided in this paper, suggesting that the algorithm is able to track time-varying graph signals and perfectly reconstruct time-invariant signals. The DLSR algorithm is numerically experimented with synthetic data and real-world sensor network data, which verifies its ability in tracking slowly time-varying graph signals.
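A centralized analogue of the iterative reconstruction conveys the core update (correct the sampled residual, then re-project onto the bandlimited subspace); DLSR performs this update distributively with only neighbor communication, and the decay scheme for out-of-band energy is omitted in this sketch.

    import numpy as np

    def iterative_reconstruction(U_B, sample_idx, y, mu=0.5, n_iter=200):
        """Centralized sketch for a time-invariant bandlimited graph signal.
        U_B: N x K matrix whose columns span the bandlimited subspace
        (e.g., the first K eigenvectors of the graph Laplacian).
        sample_idx, y: sampled node indices and their observations."""
        N = U_B.shape[0]
        x = np.zeros(N)
        P_B = U_B @ U_B.T                        # bandlimiting projector
        for _ in range(n_iter):
            r = np.zeros(N)
            r[sample_idx] = y - x[sample_idx]    # residual on sampled nodes
            x = P_B @ (x + mu * r)               # correct, then re-bandlimit
        return x

Convergence of such iterations requires the sampled nodes to form a uniqueness set for the bandlimited subspace, mirroring the conditions in the paper's error analysis.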

Journal ArticleDOI
TL;DR: This work observes that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix relative to the total adaptive degrees-of-freedom, and also as the sparsity of the interference spectrum in the spatio-temporal domain.

Abstract: We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum in the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires a number of secondary measurements only about twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data samples.
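The low-rank-plus-diagonal structure of the first approach can be illustrated with a simple eigenvalue-truncation heuristic (not the paper's trace-minimization RMP solver): split the sample covariance into a rank-r PSD clutter term plus a scaled-identity noise term.

    import numpy as np

    def lowrank_plus_diag(R_sample, rank):
        """Illustrative split of a sample covariance into a rank-`rank`
        PSD clutter part and a scaled-identity noise part."""
        w, V = np.linalg.eigh(R_sample)                    # ascending eigenvalues
        noise = w[:-rank].mean()                           # noise-level estimate
        w_clutter = np.zeros_like(w)
        w_clutter[-rank:] = w[-rank:] - noise              # shrink top eigenvalues
        L = (V * np.clip(w_clutter, 0, None)) @ V.T        # low-rank PSD part
        D = noise * np.eye(len(w))                         # diagonal (noise) part
        return L, D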

Journal ArticleDOI
TL;DR: A novel structured Bayesian compressive sensing algorithm with location dependence is developed for high-resolution imaging in ultra-narrowband passive synthetic aperture radar (SAR) systems, and results demonstrate the superiority of the proposed algorithm in preserving continuous structure and suppressing isolated components over existing state-of-the-art compressive sensing methods.

Abstract: In this paper, we develop a novel structured Bayesian compressive sensing algorithm with location dependence for high-resolution imaging in ultra-narrowband passive synthetic aperture radar (SAR) systems. The proposed technique exploits wide-angle and/or multi-angle observations for image resolution enhancement. We first introduce a forward model based on sparse synthetic apertures. The problem of sparse scatterer imaging is formulated as an optimization problem of reconstructing group sparse signals. A logistic Gaussian kernel model, which involves a logistic function and a location-dependent Gaussian kernel, and takes the correlation among the entire set of scatterers into account, is then used to encourage the underlying continuity structure of the illuminated target scene in a nonparametric Bayesian learning framework. The posterior inference of the proposed method is then obtained via a Markov chain Monte Carlo (MCMC) sampling scheme. The proposed technique enables high-resolution SAR imaging in wide-angle as well as multi-angle observation systems, and the imaging performance is improved by exploiting the underlying structure of the target scene. Simulation and experimental results demonstrate the superiority of the proposed algorithm in preserving the continuous structure and suppressing isolated components over existing state-of-the-art compressive sensing methods.

Journal ArticleDOI
TL;DR: Results of virtual localization tests indicate that accurate localization performance is retained with spherical harmonic representations as low as fourth-order, and several important physical HRTF cues are shown to be present even in a first-order representation.
Abstract: Several methods have recently been proposed for modeling spatially continuous head-related transfer functions (HRTFs) using techniques based on finite-order spherical harmonic expansion. These techniques inherently impart some amount of spatial smoothing to the measured HRTFs. However, the effect this spatial smoothing has on the localization accuracy has not been analyzed. Consequently, the relationship between the order of a spherical harmonic representation for HRTFs and the maximum localization ability that can be achieved with that representation remains unknown. The present study investigates the effect that spatial smoothing has on virtual sound source localization by systematically reducing the order of a spherical-harmonic-based HRTF representation. Results of virtual localization tests indicate that accurate localization performance is retained with spherical harmonic representations as low as fourth-order, and several important physical HRTF cues are shown to be present even in a first-order representation. These results suggest that listeners do not rely on the fine details in an HRTF's spatial structure and imply that some of the theoretically-derived bounds for HRTF sampling may be exceeding perceptual requirements.
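Order-limited spherical-harmonic smoothing of an HRTF set (one frequency bin, complex values) can be sketched with SciPy; sampling-scheme and normalization details matter in practice, so this plain least-squares fit is only illustrative.

    import numpy as np
    from scipy.special import sph_harm

    def sh_smooth(azimuth, colatitude, hrtf, order):
        """Least-squares fit of complex spherical harmonics up to `order`,
        then reconstruction: higher orders are discarded (spatial smoothing)."""
        Y = np.column_stack([sph_harm(m, l, azimuth, colatitude)
                             for l in range(order + 1)
                             for m in range(-l, l + 1)])
        coeffs, *_ = np.linalg.lstsq(Y, hrtf, rcond=None)
        return Y @ coeffs

Reducing `order` toward the fourth-order and first-order representations examined in the paper progressively removes fine spatial detail while, per the reported listening tests, preserving the dominant localization cues.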

Journal ArticleDOI
TL;DR: A novel online framework that could learn from the current traffic situation (or context) in real-time and predict the future traffic by matching the current situation to the most effective prediction model trained using historical data is proposed.
Abstract: With the vast availability of traffic sensors from which traffic information can be derived, a lot of research effort has been devoted to developing traffic prediction techniques, which in turn improve route navigation, traffic regulation, urban area planning, etc. One key challenge in traffic prediction is how much to rely on prediction models that are constructed using historical data in real-time traffic situations, which may differ from that of the historical data and change over time. In this paper, we propose a novel online framework that could learn from the current traffic situation (or context) in real-time and predict the future traffic by matching the current situation to the most effective prediction model trained using historical data. As real-time traffic arrives, the traffic context space is adaptively partitioned in order to efficiently estimate the effectiveness of each base predictor in different situations. We obtain and prove both short-term and long-term performance guarantees (bounds) for our online algorithm. The proposed algorithm also works effectively in scenarios where the true labels (i.e., realized traffic) are missing or become available with delay. Using the proposed framework, the context dimension that is the most relevant to traffic prediction can also be revealed, which can further reduce the implementation complexity as well as inform traffic policy making. Our experiments with real-world data in real-life conditions show that the proposed approach significantly outperforms existing solutions.