
Showing papers on "Sampling (signal processing) published in 2014"


Journal ArticleDOI
TL;DR: This paper focuses on the filter design for nonuniformly sampled nonlinear systems which can be approximated by Takagi-Sugeno (T-S) fuzzy systems and derives the linear-matrix-inequality-based sufficient conditions by studying the stochastic stability and the energy-to-peak performance of the estimation error system.
Abstract: This paper focuses on the filter design for nonuniformly sampled nonlinear systems which can be approximated by Takagi-Sugeno (T-S) fuzzy systems. The sampling periods of the measurements are time varying, and the nonuniform observations of the outputs are modeled by a homogeneous Markov chain. A mode-dependent estimator with a fast sampling frequency is proposed such that the estimate can track the signal to be estimated with the nonuniformly sampled outputs. The nonlinear systems are discretized with the fast sampling period. By using an augmentation technique, the corresponding stochastic estimation error system is obtained. By studying the stochastic stability and the energy-to-peak performance of the estimation error system, we derive the linear-matrix-inequality-based sufficient conditions. The parameters of the mode-dependent estimator can be calculated by using the proposed iterative algorithm. Two examples are used to demonstrate the design procedure and the efficacy of the proposed design method.

214 citations


Journal ArticleDOI
Jian Fang1, Zongben Xu1, Bingchen Zhang, Wen Hong, Yirong Wu 
TL;DR: A new CS-SAR imaging model based on the approximated SAR observation deduced from the inverse of the focusing procedures is formed, which can be applied to high-quality and high-resolution imaging under sub-Nyquist rate sampling while substantially reducing computational cost in both time and memory.
Abstract: In recent years, compressed sensing (CS) has been applied in the field of synthetic aperture radar (SAR) imaging and shows great potential. The existing models are, however, based on application of the sensing matrix acquired by the exact observation functions. As a result, the corresponding reconstruction algorithms are much more time consuming than traditional matched filter (MF)-based focusing methods, especially in high resolution and wide swath systems. In this paper, we formulate a new CS-SAR imaging model based on the use of the approximated SAR observation deduced from the inverse of the focusing procedures. We incorporate CS and MF within a sparse regularization framework that is then solved by a fast iterative thresholding algorithm. The proposed model forms a new CS-SAR imaging method that can be applied to high-quality and high-resolution imaging under sub-Nyquist rate sampling, while substantially reducing computational cost in both time and memory. Simulations and real SAR data experiments confirm that the proposed method can perform SAR imaging effectively and efficiently at sub-Nyquist rates, especially for large-scale applications.

214 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a sub-Nyquist sampling and recovery approach called Doppler focusing, which performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a compressed sensing dictionary whose size does not increase with the number of pulses.
Abstract: We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms for this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement which scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
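The linear SNR gain in P comes from coherent integration across pulses. As an illustrative sketch (a zero-Doppler toy case, not the paper's Xampling pipeline), averaging P noisy copies of the same echo shows the roughly P-fold improvement:

```python
import numpy as np

rng = np.random.default_rng(0)
P, n = 64, 256                          # pulses, samples per pulse
signal = np.exp(2j * np.pi * 0.1 * np.arange(n))   # unit-power echo
noise_power = 10.0                      # single-pulse SNR = -10 dB

# P noisy copies of the same echo (zero Doppler, so focusing = averaging)
pulses = signal + np.sqrt(noise_power / 2) * (
    rng.standard_normal((P, n)) + 1j * rng.standard_normal((P, n)))
focused = pulses.mean(axis=0)

snr_single = 1.0 / noise_power
snr_focused = 1.0 / np.mean(np.abs(focused - signal) ** 2)
print(snr_focused / snr_single)         # close to P: linear gain in P
```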

177 citations


Patent
08 May 2014
TL;DR: In this paper, the location of a mobile device within a vehicle was determined based on the results of the digital signal processing on the sampled at least two audio signals, based on which the mobile device was located within the driver space of the vehicle during a predetermined period of time.
Abstract: Systems, methods, and devices for determining the location of one or more mobile devices within a vehicle comprising: (a) a controller located within the vehicle and configured to transmit at least two audio signals, a first audio signal directed generally into a driver space within the vehicle and a second audio signal directed generally into a passenger space within the vehicle, and (b) software code stored in memory of the mobile device and having instructions executable by a processor that performs the steps of: (i) detecting the at least two audio signals, (ii) sampling the at least two audio signals for a predetermined period of time; (iii) performing digital signal processing on the sampled at least two audio signals; and (iv) based on the results of the digital signal processing, determining whether the mobile device was located within the driver space of the vehicle during the predetermined period of time.

127 citations


Journal ArticleDOI
TL;DR: The performance of digital backpropagation (DBP) equalization when applied over multiple channels to compensate for the nonlinear impairments in optical fiber transmission systems is investigated and the effectiveness of the algorithm is evaluated.
Abstract: The performance of digital backpropagation (DBP) equalization when applied over multiple channels to compensate for the nonlinear impairments in optical fiber transmission systems is investigated. The impact of a suboptimal multichannel DBP operation is evaluated, where implementation complexity is reduced by varying parameters such as the number of nonlinear steps per span and sampling rate. Results have been obtained for a reference system consisting of a 5×32 Gbaud PDM-16QAM superchannel with 33 GHz subchannel spacing and Nyquist pulse shaping under long-haul transmission. The reduction in the effectiveness of the algorithm is evaluated and compared with the ideal gain expected from the cancellation of the nonlinear signal distortion. The detrimental effects of polarization mode dispersion (PMD) with varying DBP bandwidth are also studied. Key parameters which ensure the effectiveness of multichannel DBP are identified.

125 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach that combines ideas from DMD and compressed sensing to accommodate sub-Nyquist-rate sampling, and correctly identifies the characteristic frequencies and oscillatory modes dominating the signal.
Abstract: Spectral methods are ubiquitous in the analysis of dynamically evolving fluid flows. However, tools like Fourier transformation and dynamic mode decomposition (DMD) require data that satisfy the Nyquist–Shannon sampling criterion. In many fluid flow experiments, such data are impossible to acquire. We propose a new approach that combines ideas from DMD and compressed sensing to accommodate sub-Nyquist-rate sampling. Given a vector-valued signal, we take measurements randomly in time (at a sub-Nyquist rate) and project the data onto a low-dimensional subspace. We then use compressed sensing to identify the dominant frequencies in the signal and their corresponding modes. We demonstrate this method using two examples, analyzing both an artificially constructed dataset and particle image velocimetry data from the flow past a cylinder. In each case, our method correctly identifies the characteristic frequencies and oscillatory modes dominating the signal, proving it to be a capable tool for spectral analysis using sub-Nyquist-rate sampling.
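The sampling-and-recovery idea can be sketched numerically. The toy below (an assumption-laden stand-in for the paper's method, with made-up tone frequencies) takes random sub-Nyquist time samples of a two-tone signal and identifies the active frequencies with a simple orthogonal matching pursuit over a sine dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 1000, 1.0                       # dense "Nyquist" grid: 1000 samples
t_grid = np.arange(0, T, 1 / fs)
freqs_true = [73, 201]                  # Hz, sparse spectral content
x = sum(np.sin(2 * np.pi * f * t_grid) for f in freqs_true)

m = 80                                  # only 80 random (sub-Nyquist) samples
idx = np.sort(rng.choice(len(t_grid), m, replace=False))
y = x[idx]

# Sine dictionary restricted to the random sample times
cand = np.arange(1, 500)                # candidate frequencies (Hz)
A = np.sin(2 * np.pi * np.outer(t_grid[idx], cand))

# Orthogonal matching pursuit, two iterations (sparsity assumed known)
support, r = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

print(sorted(int(c) for c in cand[support]))   # recovers the two tones
```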

115 citations


Journal ArticleDOI
TL;DR: The PSEC4 custom integrated circuit was designed for the recording of fast waveforms for use in large-area time-of-flight detector systems as discussed by the authors. It employs a switched capacitor array (SCA) 256 samples deep, a ramp-compare ADC with 10.5 bits of DC dynamic range, and a serial data readout with region-of-interest windowing to reduce dead time.
Abstract: The PSEC4 custom integrated circuit was designed for the recording of fast waveforms for use in large-area time-of-flight detector systems. The ASIC has been fabricated using the IBM-8RF 0.13 μm CMOS process. On each of 6 analog channels, PSEC4 employs a switched capacitor array (SCA) 256 samples deep, a ramp-compare ADC with 10.5 bits of DC dynamic range, and a serial data readout with the capability of region-of-interest windowing to reduce dead time. The sampling rate can be adjusted between 4 and 15 Gigasamples/second [GSa/s] on all channels and is servo-controlled on-chip with a low-jitter delay-locked loop (DLL). The input signals are passively coupled on-chip with a -3 dB analog bandwidth of 1.5 GHz. The power consumption in quiescent sampling mode is less than 50 mW/chip; at a sustained trigger and readout rate of 50 kHz the chip draws 100 mW. After fixed-pattern pedestal subtraction, the uncorrected differential non-linearity is 0.15% over a 750 mV dynamic range. With a linearity correction, a full 1 V signal voltage range is available. The sampling timebase has a fixed-pattern non-linearity with an RMS of

86 citations


Journal ArticleDOI
TL;DR: An effective model for frequency response analysis and optimization for such filter banks is developed, various practical issues of fast-convolution processing are explored, and the optimized frequency response characteristics versus direct raised cosine based designs are reported.
Abstract: In this paper, we investigate fast-convolution based highly tunable multirate filter bank (FB) configurations. Based on FFT-IFFT pair with overlapped processing, they offer a way to tune the filters' frequency-domain characteristics in a straightforward way. Different subbands are easily configurable for different bandwidths, center frequencies, and output sampling rates, including also partial or full-band nearly perfect-reconstruction systems. Such FBs find various applications, for example, as flexible multichannel channelization filters for software defined radios. They have also the capability to implement simultaneous waveform processing for multiple single-carrier and/or multicarrier transmission channels with nonuniform bandwidths and subchannel spacings. This paper develops an effective model for frequency response analysis and optimization for such filter banks, explores various practical issues of fast-convolution processing and reports the optimized frequency response characteristics versus direct raised cosine based designs. The fast-convolution approach is shown to be a competitive basis for filter bank based multicarrier waveform processing in terms of spectral containment and complexity.
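The FFT-IFFT pair with overlapped processing mentioned above can be illustrated with a minimal overlap-save filtering sketch (a generic single-channel version, not the paper's full multirate filter bank; block length and filter are made up):

```python
import numpy as np

def fft_filter_overlap_save(x, h, nfft=256):
    """Linear convolution of x with FIR h via overlap-save FFT blocks."""
    M = len(h)
    hop = nfft - (M - 1)                       # new samples per block
    H = np.fft.rfft(h, nfft)
    x_pad = np.concatenate([np.zeros(M - 1), x,
                            np.zeros(hop)])    # prime history, flush tail
    out = []
    for start in range(0, len(x), hop):
        block = x_pad[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        out.append(y[M - 1:])                  # discard the aliased prefix
    return np.concatenate(out)[:len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
assert np.allclose(fft_filter_overlap_save(x, h), np.convolve(x, h)[:len(x)])
```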

82 citations


Journal ArticleDOI
TL;DR: It is found that the sampling-based signal detection scheme thus developed can be applied to both binary and multilevel ASK-based CEMC systems, although M-ary systems suffer more from higher BER.
Abstract: In this paper, a comprehensive analysis of the sampling-based optimum signal detection in ideal (i.e., free) diffusion-based concentration-encoded molecular communication (CEMC) system has been presented. A generalized amplitude-shift keying (ASK)-based CEMC system has been considered in diffusion-based noise and intersymbol interference (ISI) conditions. Information is encoded by modulating the amplitude of the transmission rate of information molecules at the TN. The critical issues involved in the sampling-based receiver thus developed are addressed in detail, and its performance in terms of the number of samples per symbol, communication range, and transmission data rate is evaluated. ISI produced by the residual molecules deteriorates the performance of the CEMC system significantly, which further deteriorates when the communication range and/or the transmission data rate increase(s). In addition, the performance of the optimum receiver depends on the receiver's ability to compute the ISI accurately, thus providing a trade-off between receiver complexity and achievable bit error rate (BER). Exact and approximate detection performances have been derived. Finally, it is found that the sampling-based signal detection scheme thus developed can be applied to both binary and multilevel (M-ary) ASK-based CEMC systems, although M-ary systems suffer more from higher BER.

82 citations


Journal ArticleDOI
TL;DR: Reduced data collection, storage and communication requirements are found to lead to substantial reductions in the energy requirements of wireless sensor networks at the expense of modal accuracy.
Abstract: Compressed sensing (CS) is a powerful new data acquisition paradigm that seeks to accurately reconstruct unknown sparse signals from very few (relative to the target signal dimension) random projections. The specific objective of this study is to save wireless sensor energy by using CS to simultaneously reduce data sampling rates, on-board storage requirements, and communication data payloads. For field-deployed low power wireless sensors that are often operated with limited energy sources, reduced communication translates directly into reduced power consumption and improved operational reliability. In this study, acceleration data from a multi-girder steel-concrete deck composite bridge are processed for the extraction of mode shapes. A wireless sensor node previously designed to perform traditional uniform, Nyquist rate sampling is modified to perform asynchronous, effectively sub-Nyquist rate sampling. The sub-Nyquist data are transmitted off-site to a computational server for reconstruction using the CoSaMP matching pursuit recovery algorithm and further processed for extraction of the structure's mode shapes. The mode shape metric used for reconstruction quality is the modal assurance criterion (MAC), an indicator of the consistency between CS and traditional Nyquist acquired mode shapes. A comprehensive investigation of modal accuracy from a dense set of acceleration response data reveals that MAC values above 0.90 are obtained for the first four modes of a bridge structure when at least 20% of the original signal is sampled using the CS framework. Reduced data collection, storage and communication requirements are found to lead to substantial reductions in the energy requirements of wireless sensor networks at the expense of modal accuracy. Specifically, total energy reductions of 10-60% can be obtained for a sensor network with 10-100 sensor nodes, respectively.
The reduced energy requirements of the CS sensor nodes are shown to directly result in improved battery life and communication reliability.
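The MAC metric used here has a simple closed form: MAC(φa, φb) = |φaᴴφb|² / ((φaᴴφa)(φbᴴφb)). A minimal Python sketch (with a made-up idealized mode shape, not the bridge data):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors (0 to 1)."""
    num = np.abs(phi_a.conj() @ phi_b) ** 2
    return num / ((phi_a.conj() @ phi_a).real * (phi_b.conj() @ phi_b).real)

phi = np.sin(np.pi * np.arange(1, 11) / 11)        # idealized bending mode
noisy = phi + 0.05 * np.random.default_rng(0).standard_normal(10)

print(round(float(mac(phi, phi)), 3))   # 1.0 for identical shapes
assert mac(phi, noisy) > 0.90           # the paper's quality threshold
```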

79 citations


Journal ArticleDOI
TL;DR: A zigzag-scan-based permutation is shown to be particularly useful for signals satisfying the newly introduced layer model and increases the peak signal-to-noise ratio of reconstructed images and video frames.
Abstract: Traditional compressed sensing considers sampling a 1D signal. For a multidimensional signal, if reshaped into a vector, the required size of the sensing matrix becomes dramatically large, which increases the storage and computational complexity significantly. To solve this problem, the multidimensional signal is reshaped into a 2D signal, which is then sampled and reconstructed column by column using the same sensing matrix. This approach is referred to as parallel compressed sensing, and it has much lower storage and computational complexity. For a given reconstruction performance of parallel compressed sensing, if a so-called acceptable permutation is applied to the 2D signal, the corresponding sensing matrix is shown to have a smaller required order of restricted isometry property condition, and thus, lower storage and computation complexity at the decoder are required. A zigzag-scan-based permutation is shown to be particularly useful for signals satisfying the newly introduced layer model. As an application of the parallel compressed sensing with the zigzag-scan-based permutation, a video compression scheme is presented. It is shown that the zigzag-scan-based permutation increases the peak signal-to-noise ratio of reconstructed images and video frames.
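The zigzag-scan permutation referred to here is the familiar diagonal ordering used in JPEG coefficient scanning. A small sketch of the scan order (illustrative; the paper's exact permutation may differ in detail):

```python
import numpy as np

def zigzag_indices(n):
    """Zigzag-scan order for an n x n block (JPEG-style diagonal scan)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

block = np.arange(16).reshape(4, 4)                 # 0..15, row-major
scan = [int(block[i, j]) for i, j in zigzag_indices(4)]
print(scan)   # [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```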

Journal ArticleDOI
TL;DR: In this article, a phase-matched electro-optic detection of field transients centered at 45 THz with 12 fs near-infrared gate pulses in AgGaS2 was proposed.
Abstract: In ultrabroadband terahertz electro-optic sampling (EOS), spectral filtering of the gate pulse can strongly reduce the quantum noise while the signal level is only weakly affected. The concept is tested for phase-matched electro-optic detection of field transients centered at 45 THz with 12 fs near-infrared gate pulses in AgGaS2. Our new approach increases the experimental signal-to-noise ratio by a factor of 3 compared to standard EOS. Under certain conditions an improvement factor larger than 5 is predicted by our theoretical analysis.

Journal ArticleDOI
TL;DR: This paper proposes a novel technique, termed under-sampling restoration digital predistortion (USR-DPD), to linearize wideband power amplifiers (PAs) with ADCs that operate at sampling rates much lower than required by Nyquist limits for the predistorted band (under-sampled ADCs).
Abstract: Most conventional wideband digital predistortion (DPD) techniques require the use of a very high-speed analog-to-digital converter (ADC) in the feedback path. This paper proposes a novel technique, termed under-sampling restoration digital predistortion (USR-DPD), to linearize wideband power amplifiers (PAs) with ADCs that operate at sampling rates much lower than required by Nyquist limits for the predistorted band (under-sampling ADCs). The USR processing is implemented in an iterative way to restore full-band PA output information from the under-sampled output signal, allowing memory DPD models to be successfully extracted. The USR-DPD can operate in two modes: without and with a band-limiting filter in the feedback path. In comparison with conventional DPD techniques, the requirement for ADC sampling frequency can be significantly reduced using the USR-DPD approach. Experimental tests were realized for two PAs with numerous signals (10-, 20-, 40-, and 60-MHz long-term evolution signals) using different ADC sampling frequencies. The DPD with the under-sampling ADC could achieve comparable performances to its counterpart with a full-rate ADC, while using 3-5 times lower sampling frequency, and around -50-dBc adjacent channel power ratios were achieved.

Journal ArticleDOI
TL;DR: An adaptive-rate filtering technique based on level-crossing sampling is devised, and the computational complexity and output quality of the proposed techniques are compared with those of the classical approach for a speech signal.

Journal ArticleDOI
15 Jan 2014-Sensors
TL;DR: Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient.
Abstract: The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor’s information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient.
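The CKF at the heart of this algorithm propagates 2n equally weighted cubature points, x_i = μ ± √n·S·e_i, where S is the Cholesky factor of the covariance. A minimal sketch (generic CKF point generation, not the SINS/BD/DVL model) showing that these points reproduce the mean and covariance exactly:

```python
import numpy as np

def cubature_points(mu, P):
    """2n equally weighted cubature points of the CKF for N(mu, P)."""
    n = len(mu)
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([S, -S])     # n x 2n
    return mu[:, None] + offsets                  # each column is one point

mu = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
X = cubature_points(mu, P)

# The equally weighted point set matches the first two moments exactly
assert np.allclose(X.mean(axis=1), mu)
assert np.allclose(np.cov(X, bias=True), P)
```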

Journal ArticleDOI
TL;DR: The experimental results show that the proposed training-sequence (TS)-based symbol synchronization has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm.

Journal ArticleDOI
TL;DR: In this paper, a novel time calibration is proposed to determine the true sampling speed of an SCA, which can improve the accuracy of the split-pulse test to less than 3 ps (σ) independently from the delay.
Abstract: Switched capacitor arrays (SCA) ASICs are becoming more and more popular for the readout of detector signals, since the sampling frequency of typically several gigasamples per second allows excellent pile-up rejection and time measurements. They suffer however from the fact that their sampling bins are not equidistant in time, given by limitations of the chip manufacturing. In the past, this has limited time measurements of optimal signals to standard deviations (σ) of about 4-25 ps in accuracy for the split pulse test, depending on the specific chip. This paper introduces a novel time calibration, which determines the true sampling speed of an SCA. Additionally, for two independently running SCA chips, the achieved time resolution improved to less than 3 ps (σ) independently from the delay for the split pulse test, when simply applying a linear interpolation. When using a more advanced analyzing technique for the split pulse test with a single SCA, this limit is pushed below 1 ps (σ) for delays up to 8 ns. Various test measurements with different boards based on the DRS4 ASIC indicate that the new calibration is stable over time but not over larger temperature variations.

Journal ArticleDOI
TL;DR: Results show that multiple deceptive false-target images with finer resolution will be induced after the sub-Nyquist sampled jamming signals are processed with a CS-based reconstruction algorithm; hence, sub-Nyquist sampling can be adopted in the generation of decoys against ISAR with CS.
Abstract: The Shannon-Nyquist theorem indicates that under-sampling at low rates will lead to aliasing in the frequency domain of a signal and can be utilized in electronic warfare. However, the question is whether it still works when a compressive sensing (CS) algorithm is applied to reconstruction of the target. This paper concerns sub-Nyquist sampled jamming signals and their corresponding influence on inverse synthetic aperture radar (ISAR) imaging via CS. Results show that multiple deceptive false-target images with finer resolution will be induced after the sub-Nyquist sampled jamming signals are processed with a CS-based reconstruction algorithm; hence, sub-Nyquist sampling can be adopted in the generation of decoys against ISAR with CS. Experimental results of the scattering model of the Yak-42 plane and real data are used to verify the correctness of the analyses.
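The aliasing mechanism exploited here follows directly from the sampling theorem: a tone above fs/2 folds back to |f - fs|. A small numeric check (illustrative, with made-up frequencies unrelated to the actual jamming waveforms):

```python
import numpy as np

f_true, fs = 900.0, 1000.0                 # tone above Nyquist (fs/2 = 500)
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * f_true * t)

spectrum = np.abs(np.fft.rfft(x))
f_axis = np.fft.rfftfreq(len(t), 1 / fs)
f_alias = f_axis[np.argmax(spectrum)]
print(f_alias)                             # near 100 Hz: |900 - 1000| folds back
```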

Journal ArticleDOI
TL;DR: This paper uses a task-specific information-based approach to optimizing sensing kernels for high-resolution radar range profiling of man-made targets and uses a Gaussian mixture model for the targets to enable a closed-form gradient of information with respect to the sensing kernel.
Abstract: The design of wideband radar systems is often limited by existing analog-to-digital (A/D) converter technology. State-of-the-art A/D rates and high effective number of bits result in rapidly increasing cost and power consumption for the radar system. Therefore, it is useful to consider compressive sensing methods that enable reduced sampling rate, and in many applications, prior knowledge of signals of interest can be learned from training data and used to design better compressive measurement kernels. In this paper, we use a task-specific information-based approach to optimizing sensing kernels for high-resolution radar range profiling of man-made targets. We employ a Gaussian mixture (GM) model for the targets and use a Taylor series expansion of the logarithm of the GM probability distribution to enable a closed-form gradient of information with respect to the sensing kernel. The GM model admits nuisance parameters such as target pose angle and range translation. The gradient is then used in a gradient-based approach to search for the optimal sensing kernel. In numerical simulations, we compare the performance of the proposed sensing kernel design to random projections and to lower-bandwidth waveforms that can be sampled at the Nyquist rate. Simulation results demonstrate that the proposed technique for sensing kernel design can significantly improve performance.

Journal ArticleDOI
TL;DR: A robust and computationally efficient algorithm for removing power line interference from neural recordings, which features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment.
Abstract: Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated through using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase, and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed, and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence and substantial interference rejection in different conditions of interference strengths (input SNR from -30 dB to 30 dB), power line frequencies (45-65 Hz), and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power, and the sampling rate. The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement, and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
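With the fundamental frequency known, estimating each harmonic's amplitude and phase reduces to a linear fit onto sine/cosine pairs. The batch least-squares sketch below (a simplified stand-in for the paper's adaptive notch filter plus modified recursive least squares, on synthetic data) removes 50 Hz hum and harmonics from a toy recording:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 50.0, 4000
t = np.arange(n) / fs
neural = 5e-6 * rng.standard_normal(n)              # broadband "neural" floor
hum = sum(a * np.sin(2 * np.pi * f0 * k * t + p)    # 50 Hz plus harmonics
          for k, a, p in [(1, 100e-6, 0.3), (2, 40e-6, 1.1), (3, 15e-6, 2.0)])
x = neural + hum

# One sine/cosine pair per harmonic; amplitudes and phases drop out of a
# single linear least-squares fit, then the estimate is subtracted
basis = np.column_stack([fn(2 * np.pi * f0 * k * t)
                         for k in (1, 2, 3) for fn in (np.sin, np.cos)])
coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
cleaned = x - basis @ coef

assert np.var(cleaned) < 0.02 * np.var(x)           # hum largely removed
```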

Patent
10 Jun 2014
TL;DR: In this paper, a full-duplex transceiver with componentry and methods for cancellation of nonlinear self-interference signals is presented, where the transceiver is capable of receiving an incoming radio-frequency signal that includes both a desired radiofrequency signal component and a selfinterference component caused by a transceiver's own radiofrequency transmission.
Abstract: A full-duplex transceiver is provided with componentry and methods for cancellation of nonlinear self-interference signals. The transceiver is capable of receiving an incoming radio-frequency signal that includes both a desired radio-frequency signal component and a self-interference component caused by the transceiver's own radio-frequency transmission. The transceiver demodulates the incoming radio-frequency signal to generate a first demodulated signal. The transceiver combines an analog corrective signal with the first demodulated signal to generate a second demodulated signal with reduced self-interference. The transceiver processes the first and second demodulated signals to determine a desired incoming baseband signal and to determine nonlinear components of the self-interference signal, such as nonlinearities introduced by the transceiver's power amplifier.

Journal ArticleDOI
TL;DR: In this article, the authors consider a combined source coding and sub-Nyquist reconstruction problem in which the input to the encoder is a noisy sub-nyquist sampled version of the analog source and derive an expression for the mean squared error in the reconstruction of the process from a noisy and information rate limited version of its samples.
Abstract: The amount of information lost in sub-Nyquist sampling of a continuous-time Gaussian stationary process is quantified. We consider a combined source coding and sub-Nyquist reconstruction problem in which the input to the encoder is a noisy sub-Nyquist sampled version of the analog source. We first derive an expression for the mean squared error in the reconstruction of the process from a noisy and information rate-limited version of its samples. This expression is a function of the sampling frequency and the average number of bits describing each sample. It is given as the sum of two terms: Minimum mean square error in estimating the source from its noisy but otherwise fully observed sub-Nyquist samples, and a second term obtained by reverse waterfilling over an average of spectral densities associated with the polyphase components of the source. We extend this result to multi-branch uniform sampling, where the samples are available through a set of parallel channels with a uniform sampler and a pre-sampling filter in each branch. Further optimization to reduce distortion is then performed over the pre-sampling filters, and an optimal set of pre-sampling filters associated with the statistics of the input signal and the sampling frequency is found. This results in an expression for the minimal possible distortion achievable under any analog to digital conversion scheme involving uniform sampling and linear filtering. These results thus unify the Shannon-Whittaker-Kotelnikov sampling theorem and Shannon rate-distortion theory for Gaussian sources.
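The reverse-waterfilling step assigns each spectral component distortion min(θ, λ) and rate ½·log2(λ/θ) when λ > θ, with the water level θ set by the bit budget. A small sketch for a discrete spectrum (illustrative, with made-up spectral values; the paper works with continuous spectral densities):

```python
import numpy as np

def reverse_waterfill(spec, rate_bits):
    """Distortion of a Gaussian source with spectral values `spec` when
    coded with `rate_bits` total bits, via reverse waterfilling on theta."""
    lo, hi = 1e-12, max(spec)
    for _ in range(200):                 # bisect the water level theta
        theta = 0.5 * (lo + hi)
        r = sum(0.5 * np.log2(s / theta) for s in spec if s > theta)
        if r < rate_bits:
            hi = theta                   # too little rate: lower theta
        else:
            lo = theta
    return sum(min(theta, s) for s in spec)

spec = [4.0, 2.0, 1.0, 0.5]
print(round(reverse_waterfill(spec, 0.0), 3))   # 7.5 = total power at R = 0
assert reverse_waterfill(spec, 4.0) < reverse_waterfill(spec, 2.0) < 7.5
```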

Journal ArticleDOI
TL;DR: The technique presented in this paper allows one to determine the sampling rate required to approximate the continuous-time suboptimality bound arbitrarily well and thus gives insight into the trade-off between sampling time and guaranteed performance.
Abstract: We investigate the impact of sampling on stability and performance estimates in nonlinear model predictive control without stabilizing terminal constraints or costs. Interpreting the sampling period as a discretization parameter, the relation between continuous and discrete time estimates depending on this parameter is analyzed. The technique presented in this paper allows us to determine the sampling rate required in order to approximate the continuous time suboptimality bound arbitrarily well and, thus, gives insight into the trade-off between sampling time and guaranteed performance.

Proceedings ArticleDOI
04 May 2014
TL;DR: It is shown that a minimum average sampling rate of 2(N + 1)B is sufficient to reconstruct the spectrum and estimate the corresponding DOAs of N narrow-band signals of maximum bandwidth B.
Abstract: Spectrum-blind reconstruction and direction-of-arrival (DOA) estimation of multiple narrow-band signals spread over a wide spectrum and sampled at sub-Nyquist rates are considered in this paper. A new sub-Nyquist sampling receiver architecture which requires minimal hardware, along with an efficient algorithm for parameter estimation and spectrum reconstruction, is presented. We further show that, with the proposed approach, a minimum average sampling rate of 2(N + 1)B is sufficient to reconstruct the spectrum and estimate the corresponding DOAs of N narrow-band signals of maximum bandwidth B. Simulation results are also provided, showing performance very close to the Cramer-Rao bound.
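The rate bound in the abstract is easy to evaluate. A minimal sketch, taking the 2(N + 1)B expression directly from the abstract; the example signal count and bandwidth are arbitrary illustrative values.

```python
def min_avg_sampling_rate(N, B):
    """Minimum average sampling rate 2*(N+1)*B (Hz) claimed sufficient for
    joint spectrum reconstruction and DOA estimation of N narrow-band
    signals, each of bandwidth at most B."""
    return 2 * (N + 1) * B

# e.g. N = 3 signals, each at most 10 MHz wide, spread over a multi-GHz band:
rate = min_avg_sampling_rate(3, 10e6)   # 80 MHz, far below the wideband Nyquist rate
```

The rate scales with the number and bandwidth of the active signals, not with the total monitored spectrum, which is the point of the sub-Nyquist architecture.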

Journal ArticleDOI
Yan Wang1, Li Jingwen1, Jie Chen1, Hua-ping Xu1, Sun Bing1 
TL;DR: A novel parameter-adjusting PFA is presented, which can implement SAR image formation at extremely high squint angles with markedly improved computational efficiency and imaging precision, and was found to induce smaller phase and amplitude errors in data processing.
Abstract: The polar format algorithm (PFA) is a wavenumber-domain imaging method for spotlight synthetic aperture radar (SAR). The classic fixed-parameter PFA employs interpolation for data correction. However, such an operation induces a heavy computational load and degrades computational precision. To optimize image formation performance, this study presents a novel parameter-adjusting PFA, which can implement SAR image formation at extremely high squint angles with markedly improved computational efficiency and imaging precision. In the parameter-adjusting PFA, radar parameters, such as center frequency, chirp rate, pulse duration, sampling rate, and pulse repetition frequency (PRF), vary for each azimuth sampling position. Due to this parameter-adjusting strategy, the echoed signal can be acquired directly in keystone format with uniformly distributed azimuth intervals. In this case, range interpolation, which is necessary in the fixed-parameter PFA to convert data from polar format to keystone format, can be eliminated. The chirp z-transform (CZT) can be employed to focus SAR data along the azimuth direction. Compared with truncated sinc-interpolation, the CZT was found to induce smaller phase and amplitude errors in data processing. With residual video phase (RVP) compensation applied to the dechirped signal, the processing steps of the parameter-adjusting PFA reduce to azimuth CZTs and range inverse fast Fourier transforms (IFFT). Lastly, computer simulations of multiple point targets validate the presented approach.
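The chirp z-transform used for azimuth focusing evaluates the z-transform on a spiral contour z_k = a·w^(−k). A minimal direct-evaluation sketch is shown below; it illustrates the transform itself, not the paper's full focusing chain, and the direct O(N·M) evaluation is for clarity only (fast implementations use Bluestein's algorithm).

```python
import numpy as np

def czt(x, m, w, a):
    """Direct chirp z-transform: X[k] = sum_n x[n] * z_k^(-n),
    with z_k = a * w**(-k), for k = 0..m-1. O(len(x)*m) evaluation,
    for illustration only."""
    n = np.arange(len(x))
    z = a * w ** (-np.arange(m))            # spiral of evaluation points
    return np.array([np.sum(x * zk ** (-n)) for zk in z])

x = np.random.default_rng(0).standard_normal(8)
# With m = N, w = exp(-2j*pi/N), and a = 1, the CZT reduces to the DFT:
X = czt(x, 8, np.exp(-2j * np.pi / 8), 1.0)
```

Choosing `w` off the unit circle or `a` away from 1 lets the transform zoom into an arbitrary arc of the spectrum, which is what makes the CZT attractive for azimuth scaling without interpolation.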

Patent
Tao Luo1, Kapil Bhattad1, Zhang Xiaoxia1, Taesang Yoo1, Xiliang Luo1, Ke Liu1 
17 Feb 2014
TL;DR: In this article, the authors proposed a method for selecting samples for SSS detection using non-uniform spacing between sampling intervals to determine a sequence for cell identification, which is based on the effects of any sampled bursts that correspond to a high transmission power portion of a signal from a stronger cell.
Abstract: Methods and apparatus for selecting samples for secondary synchronization signal (SSS) detection are described. Several alternatives are provided for efficient cell identifier detection. In a first alternative, multiple bursts of a signal received from a cell are sampled with non-uniform spacing between sampling intervals to determine a sequence for cell identification. In a second alternative, samples of a first and a second signal received from a stronger cell are cancelled, and a sequence for detecting a weaker cell is determined by reducing effects of the samples of a third signal received from the weaker cell which do not overlap with the primary synchronization signal (PSS) or SSS of the stronger cell. In a third alternative, a sequence for detecting a weaker cell is determined by reducing effects of any sampled bursts that correspond to a high transmission power portion of a signal from a stronger cell.

Journal ArticleDOI
TL;DR: Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the I and Q components in the received radar signals and prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability.
Abstract: Quadrature sampling has been widely applied in coherent radar systems to extract in-phase and quadrature ( $I$ and $Q$ ) components in the received radar signal. However, the sampling is inefficient because the received signal contains only a small number of significant target signals. This paper incorporates the compressive sampling (CS) theory into the design of the quadrature sampling system, and develops a quadrature compressive sampling (QuadCS) system to acquire the $I$ and $Q$ components with low sampling rate. The QuadCS system first randomly projects the received signal into a compressive bandpass signal and then utilizes the quadrature sampling to output compressive $I$ and $Q$ components. The compressive outputs are used to reconstruct the $I$ and $Q$ components. To understand the system performance, we establish the frequency domain representation of the QuadCS system. With the waveform-matched dictionary, we prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability. For $K$ target signals in the observation interval $T$ , simulations show that the QuadCS requires just ${\cal O}(K\log(BT/K))$ samples to stably reconstruct the signal, where $B$ is the signal bandwidth. The reconstructed signal-to-noise ratio decreases by 3 dB for every octave increase in the target number $K$ and increases by 3 dB for every octave increase in the compressive bandwidth. Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the $I$ and $Q$ components in the received radar signals.
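The O(K log(BT/K)) sample count reported above can be put in perspective with a back-of-envelope sketch. The scaling law is from the abstract, but the constant factor `C` and the example parameter values below are placeholders, not the paper's.

```python
import math

def quadcs_samples(K, B, T, C=1.0):
    """Rough sample count for stable QuadCS reconstruction of K targets
    in bandwidth B (Hz) over observation interval T (s), following the
    O(K*log(B*T/K)) scaling; C is an assumed constant, not from the paper."""
    return math.ceil(C * K * math.log(B * T / K))

# e.g. B = 100 MHz, T = 1 ms: Nyquist-rate quadrature sampling needs B*T
# complex samples, while the compressive count grows only logarithmically.
nyquist_samples = 100e6 * 1e-3            # 100,000
cs_samples = quadcs_samples(K=10, B=100e6, T=1e-3)
```

The gap between the two counts widens as the band grows while the number of significant targets stays small, which is exactly the sparsity regime the QuadCS system targets.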

Journal ArticleDOI
TL;DR: This paper considers the recovery of continuous signals in infinite dimensional spaces from the magnitude of their frequency samples and proposes a sampling scheme which involves a combination of oversampling and modulations with complex exponentials.
Abstract: This paper considers the recovery of continuous signals in infinite-dimensional spaces from the magnitude of their frequency samples. It proposes a sampling scheme which combines oversampling with modulations by complex exponentials. Sufficient conditions are given such that almost every signal with compact support can be reconstructed up to a unimodular constant using only its magnitude samples in the frequency domain. Finally, it is shown that an average sampling rate of four times the Nyquist rate is enough to reconstruct almost every time-limited signal.

Journal ArticleDOI
20 Aug 2014
TL;DR: In this paper, the authors demonstrate a significantly increased stretching factor in the photonic time-stretched sampling of a fast microwave waveform, achieving a record stretching factor of 36 based on an equivalent group delay dispersion coefficient of 12,000 ps/nm.
Abstract: Conventional sampling techniques may not be able to meet the ever-increasing demand for increased bandwidth from modern communications and radar signals. Optical time-stretched sampling has been considered an effective solution for wideband microwave signal processing. Here, we demonstrate a significantly increased stretching factor in the photonic time-stretched sampling of a fast microwave waveform. The microwave waveform to be sampled is intensity modulated on a chirped optical pulse generated jointly by a mode-locked laser and a length of dispersion compensating fiber. The pulse is then injected into an optical dispersive loop that includes an erbium-doped fiber amplifier and is stretched by a linearly chirped fiber Bragg grating multiple times in the loop. A record stretching factor of 36 is achieved based on an equivalent group delay dispersion coefficient of 12,000 ps/nm. This result could help address the new challenges imposed on signal processors to operate at a very high sampling rate.
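In photonic time stretch, the temporal magnification is commonly modeled as M = 1 + D2/D1, where D1 is the dispersion producing the chirped pulse before modulation and D2 the dispersion accumulated afterwards. Only the equivalent GDD of 12,000 ps/nm and the stretching factor of 36 come from the abstract; the pre-modulation dispersion value below is an assumption chosen to make the numbers consistent, not a figure from the paper.

```python
def stretch_factor(D1_ps_per_nm, D2_ps_per_nm):
    """Time-stretch magnification M = 1 + D2/D1 for pre-modulation
    dispersion D1 and post-modulation dispersion D2 (both in ps/nm)."""
    return 1.0 + D2_ps_per_nm / D1_ps_per_nm

# If the recirculating loop accumulates ~12,000 ps/nm after an assumed
# ~343 ps/nm of input chirp from the DCF:
M = stretch_factor(343.0, 12000.0)        # close to the reported factor of 36
```

A stretched waveform of factor M can then be digitized by an ADC running M times slower than the waveform's original bandwidth would demand, which is the motivation for pushing D2 up with the dispersive loop.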