
Showing papers on "Sampling (signal processing)" published in 2012


Journal ArticleDOI
TL;DR: This work demonstrates that the photonic approach can deliver on its promise by digitizing a 41 GHz signal with 7.0 effective bits using a photonic ADC built from discrete components, an accuracy corresponding to a timing jitter of 15 fs, a 4-5 times improvement over the best electronic ADCs available today.
Abstract: Accurate conversion of wideband multi-GHz analog signals into the digital domain has long been a target of analog-to-digital converter (ADC) developers, driven by applications in radar systems, software radio, medical imaging, and communication systems. Aperture jitter has been a major bottleneck on the way towards higher speeds and better accuracy. Photonic ADCs, which perform sampling using ultra-stable optical pulse trains generated by mode-locked lasers, have been investigated for many years as a promising approach to overcome the jitter problem and bring ADC performance to new levels. This work demonstrates that the photonic approach can deliver on its promise by digitizing a 41 GHz signal with 7.0 effective bits using a photonic ADC built from discrete components. This accuracy corresponds to a timing jitter of 15 fs - a 4-5 times improvement over the performance of the best electronic ADCs which exist today. On the way towards an integrated photonic ADC, a silicon photonic chip with core photonic components was fabricated and used to digitize a 10 GHz signal with 3.5 effective bits. In these experiments, two wavelength channels were implemented, providing the overall sampling rate of 2.1 GSa/s. To show that photonic ADCs with larger channel counts are possible, a dual 20-channel silicon filter bank has been demonstrated.
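
The reported numbers are consistent with the standard aperture-jitter limit for a full-scale sine input, SNR = -20*log10(2*pi*f_in*sigma_jitter) and ENOB = (SNR - 1.76)/6.02. The short sketch below evaluates that textbook relation at 41 GHz; the 60 fs comparison point is an assumption chosen only to illustrate the roughly two-bit (4-5x jitter) gap the abstract describes, not a measured figure for any particular electronic ADC.

```python
import numpy as np

def jitter_limited_performance(f_in_hz, jitter_s):
    """Aperture-jitter-limited SNR and ENOB for a full-scale sine at f_in_hz."""
    snr_db = -20.0 * np.log10(2.0 * np.pi * f_in_hz * jitter_s)
    enob = (snr_db - 1.76) / 6.02
    return snr_db, enob

# 15 fs is the jitter quoted in the abstract; 60 fs is an assumed electronic-clock value
for sigma in (15e-15, 60e-15):
    snr, enob = jitter_limited_performance(41e9, sigma)
    print(f"jitter {sigma*1e15:4.0f} fs -> SNR {snr:5.1f} dB, ENOB {enob:4.1f} bits")
```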

418 citations


Journal ArticleDOI
TL;DR: A new compressed sensing framework is proposed for extracting useful second-order statistics of wideband random signals from digital samples taken at sub-Nyquist rates, exploiting the unique sparsity property of the two-dimensional cyclic spectra of communications signals.
Abstract: For cognitive radio networks, efficient and robust spectrum sensing is a crucial enabling step for dynamic spectrum access. Cognitive radios need to not only rapidly identify spectrum opportunities over very wide bandwidth, but also make reliable decisions in noise-uncertain environments. Cyclic spectrum sensing techniques work well under noise uncertainty, but require high-rate sampling which is very costly in the wideband regime. This paper develops robust and compressive wideband spectrum sensing techniques by exploiting the unique sparsity property of the two-dimensional cyclic spectra of communications signals. To do so, a new compressed sensing framework is proposed for extracting useful second-order statistics of wideband random signals from digital samples taken at sub-Nyquist rates. The time-varying cross-correlation functions of these compressive samples are formulated to reveal the cyclic spectrum, which is then used to simultaneously detect multiple signal sources over the entire wide band. Because the proposed wideband cyclic spectrum estimator utilizes all the cross-correlation terms of compressive samples to extract second-order statistics, it is also able to recover the power spectra of stationary signals as a special case, permitting lossless rate compression even for non-sparse signals. Simulation results demonstrate the robustness of the proposed spectrum sensing algorithms against both sampling rate reduction and noise uncertainty in wireless networks.
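
The detector rests on the fact that communication signals have nonzero cyclic correlations at cycle frequencies tied to their symbol rate, while stationary noise does not. The sketch below estimates the plain full-rate cyclic autocorrelation of a rectangular-pulse BPSK signal at 0 dB SNR; it is only an illustration of that underlying statistic (with assumed rates), not the paper's sub-Nyquist, cross-correlation-based estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, sym_rate, n_sym = 8_000, 500, 400          # assumed toy rates
sps = fs // sym_rate                           # samples per symbol
bits = rng.integers(0, 2, n_sym) * 2 - 1
x = np.repeat(bits.astype(float), sps)         # rectangular-pulse BPSK baseband
x += rng.standard_normal(len(x))               # 0 dB SNR: energy detection is shaky here

def cyclic_autocorr(x, alpha_hz, tau, fs):
    """Estimate R_x^alpha(tau) = mean of x[n+tau]*x[n]*exp(-j*2*pi*alpha*n/fs)."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * x[n] * np.exp(-2j * np.pi * alpha_hz * n / fs))

# A strong feature appears at alpha = symbol rate; stationary noise contributes only at alpha = 0
for alpha in (0.0, sym_rate, 1.37 * sym_rate):
    mag = abs(cyclic_autocorr(x, alpha, tau=sps // 2, fs=fs))
    print(f"alpha = {alpha:7.1f} Hz   |R_x^alpha| = {mag:.3f}")
```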

249 citations


Journal ArticleDOI
TL;DR: A statistical learning methodology is used to quantify the gap between Mr and Me in closed form via data fitting, offering a useful design guideline for compressive samplers; a two-step compressive spectrum sensing algorithm for wideband cognitive radios is also developed as an illustrative application.
Abstract: Compressive sampling techniques can effectively reduce the acquisition costs of high-dimensional signals by utilizing the fact that typical signals of interest are often sparse in a certain domain. For compressive samplers, the number of samples Mr needed to reconstruct a sparse signal is determined by the actual sparsity order Snz of the signal, which can be much smaller than the signal dimension N. However, Snz is often unknown or dynamically varying in practice, and the practical sampling rate has to be chosen conservatively according to an upper bound Smax of the actual sparsity order in lieu of Snz, which can be unnecessarily high. To avoid such waste of sampling resources, this paper introduces the concept of sparsity order estimation, which aims to accurately acquire Snz prior to sparse signal recovery, by using a very small number of samples Me less than Mr. A statistical learning methodology is used to quantify the gap between Mr and Me in a closed form via data fitting, which offers a useful design guideline for compressive samplers. It is shown that Me ≥ 1.2Snz log(N/Snz + 2) + 3 for a broad range of sampling matrices. Capitalizing on this gap, this paper also develops a two-step compressive spectrum sensing algorithm for wideband cognitive radios as an illustrative application. The first step quickly estimates the actual sparsity order of the wide spectrum of interest using a small number of samples, and the second step adjusts the total number of collected samples according to the estimated signal sparsity order. By doing so, the overall sampling cost can be minimized adaptively, without degrading the sensing performance.
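
To make the gap concrete, the closed-form bound quoted above can be evaluated directly and compared against a typical recovery requirement. In the sketch below the natural logarithm is assumed, and the constant in the recovery-side rule of thumb M_r ~ C*S_nz*log(N/S_nz) is an assumption for illustration only.

```python
import numpy as np

def samples_for_order_estimation(s_nz, n):
    """Bound quoted in the abstract: M_e >= 1.2 * S_nz * log(N / S_nz + 2) + 3."""
    return 1.2 * s_nz * np.log(n / s_nz + 2.0) + 3.0

def samples_for_recovery(s_nz, n, c=2.0):
    """Rule-of-thumb recovery cost M_r ~ C * S_nz * log(N / S_nz); C is an assumption."""
    return c * s_nz * np.log(n / s_nz)

n = 1024
for s_nz in (5, 20, 80):
    m_e = samples_for_order_estimation(s_nz, n)
    m_r = samples_for_recovery(s_nz, n)
    print(f"S_nz = {s_nz:3d}:  M_e ~ {m_e:6.1f} samples   M_r ~ {m_r:6.1f} samples")
```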

136 citations


Journal ArticleDOI
TL;DR: A wide bandwidth, compressed sensing based nonuniform sampling (NUS) system with a custom sample-and-hold chip designed to take advantage of a low average sampling rate is presented.
Abstract: We present a wide bandwidth, compressed sensing based nonuniform sampling (NUS) system with a custom sample-and-hold chip designed to take advantage of a low average sampling rate. By sampling signals nonuniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have a relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband Indium-Phosphide heterojunction bipolar transistor sample-and-hold with a commercial off-the-shelf analog-to-digital converter to digitize an 800 MHz to 2 GHz band (having 100 MHz of noncontiguous spectral content) at an average sample rate of 236 Ms/s. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and the challenges of developing an efficient implementation are discussed. The NUS system is a general purpose digital receiver. As an example of its real signal capabilities, measured bit-error-rate data for a GSM channel is presented, and comparisons to a conventional wideband 4.4 Gs/s ADC are made.
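
As a purely software-level illustration of why nonuniform sub-Nyquist sampling can work for spectrally sparse inputs, the sketch below keeps a random subset of Nyquist-grid samples of a few-tone signal and recovers it with orthogonal matching pursuit over a DFT dictionary. The solver, grid size, and sparsity are assumptions; the paper's hardware and its particular nonlinear reconstruction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 64, 3                      # Nyquist grid, kept samples (8x fewer), tones

n = np.arange(N)
tones = rng.choice(np.arange(10, N // 2 - 10), size=K, replace=False)
x = sum(np.cos(2 * np.pi * f * n / N + rng.uniform(0, 2 * np.pi)) for f in tones)

keep = np.sort(rng.choice(N, size=M, replace=False))          # nonuniform sample instants
y = x[keep].astype(complex)

Psi = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)    # DFT synthesis dictionary
A = Psi[keep, :]                                              # rows at the kept instants

def omp(A, y, n_atoms):
    """Orthogonal matching pursuit: a generic greedy sparse solver (illustrative only)."""
    resid, support, coef = y.copy(), [], None
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    s = np.zeros(A.shape[1], dtype=complex)
    s[support] = coef
    return s

s_hat = omp(A, y, 2 * K)                  # each real tone occupies two conjugate bins
x_hat = np.real(Psi @ s_hat)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```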

136 citations


Journal ArticleDOI
TL;DR: Multi-rate asynchronous sub-Nyquist sampling (MASS) is proposed for wideband spectrum sensing, and is an attractive approach for cognitive radio networks.
Abstract: Multi-rate asynchronous sub-Nyquist sampling (MASS) is proposed for wideband spectrum sensing. Corresponding spectral recovery conditions are derived and the probability of successful recovery is given. Compared to previous approaches, MASS offers lower sampling rate, and is an attractive approach for cognitive radio networks.

128 citations


Book
06 Dec 2012
TL;DR: A reference work on animal acoustic signals covering recording and field measurement, digital signal acquisition and representation, digital signal analysis, editing, and synthesis, and the application of filters in bioacoustics, including analog, antialiasing, and antiimaging filters.
Abstract: Chapter 1 Acoustic Signals of Animals: Recording, Field Measurements, Analysis and Description (H. C. Gerhardt). 1 Introduction 2 Field Recordings and Measurements 2.1 Equipment 2.2 On-Site Measurements 2.3 Signal Amplitude, Directionality, and Background Noise Levels 2.4 Patterns of Sound Propagation in Natural Habitats 3 Laboratory Analysis of Animal Sounds 3.1 Terminology 3.2 Temporal and Spectral Analysis: Some General Principles 4 Examples of Descriptions and Analyses 4.1 Temporal Properties of Pulsatile Calls 4.2 Amplitude-Time Envelopes 4.3 Relationships between Fine-Scale Temporal and Spectral Properties 4.4 Spectrally Complex Calls 5 Summary. References.
Chapter 2 Digital Signal Acquisition and Representation (M. Clements). 1 Introduction 2 Digital Signal Processing 2.1 Major Applications of DSP 2.2 Definition of Digital Systems 2.3 Difference Equations 3 Digital Filter Frequency Response 3.1 Unit-Sample Response Characterization 3.2 Frequency-Domain Interpretation of Systems 3.3 Frequency-Domain Interpretation of Signals 4 Conversion Between Analog and Digital Data Forms 4.1 The Sampling Theorem 4.2 Signal Recovery by Filtering 4.3 Fourier Transform Relations 4.4 Effects of Sampling Rates 4.5 Reconstruction 5 Fundamental Digital Processing Techniques 5.1 Power Spectra 5.2 Time and Frequency Resolution 5.3 Windows 5.4 Spectral Smoothing 5.5 The Discrete Fourier Transform 5.6 Correlation 5.7 Autocorrelation 5.8 Cross-correlation 5.9 Spectrograms 6 An Introduction to Some Advanced Topics 6.1 Digital Filtering 6.2 Linear Prediction 6.3 Homomorphic Analysis 7 Summary.
Chapter 3 Digital Signal Analysis, Editing, and Synthesis (K. Beeman). 1 Introduction 2 Temporal and Spectral Measurements 3 Time-Varying Amplitude Analysis 3.1 Amplitude Envelopes 3.2 Gate Functions 4 Spectral Analysis 4.1 Power Spectrum Features 4.2 Measuring Similarity Among Power Spectra 4.3 Other Spectral Analysis Techniques 5 Spectrographic Analysis 5.1 Spectrogram Generation 5.2 Spectrogram Display 5.3 Spectrogram Parameter Measurements 6 Classification of Naturally Occurring Animal Sounds 6.1 Properties of Ideal Signals 6.1.1 Periodicity 6.1.2 Amplitude Modulation 6.1.3 Frequency Modulation 6.1.4 Biologically Relevant Sound Types 7 Time-varying Frequency Analysis 7.1 Deriving Spectral Contours 7.2 Sound-similarity Comparison 8 Digital Sound Synthesis 8.1 Editing 8.2 Arithmetic Manipulation and Generation of Sound 8.3 Synthesis Models 8.3.1 Tonal Model 8.4 Sources of and A Functions 8.4.1 Mathematically Based Functions 8.4.2 Functions Derived from Natural Sounds 9 Sound Manipulation and Generation Techniques 9.1 Duration Scaling 9.2 Amplitude-Envelope Manipulations 9.3 Spectral Manipulations 9.3.1 Frequency Shifting and Scaling 9.3.2 Frequency Modulation 9.4 Synthesis of Biological Sound Types 9.4.1 Tonal and Polytonal Signals 9.4.2 Pulse-Repetition Signals 9.4.3 Harmonic Signals 9.4.4 Noisy Signals 9.5 Miscellaneous Synthesis Topics 9.5.1 Template Sounds 9.5.2 Noise Removal 10 Summary. References.
Chapter 4 Application of Filters in Bioacoustics (P. K. Stoddard). 1 Introduction 2 General Uses of Filters and Some Cautions 3 Anatomy and Performance of a Filter 4 Properties of Various Analog Filters 5 Antialiasing and Antiimaging Filters 5.1 A/D Conversion Requires an Analog Lowpass Filter 5.2 Choosing an Antialiasing Filter 5.3 D/A Conversion also Requires an Analog Lowpass Filter 5.4 Analog Filters: Passive Versus Active Components 6 Analog Versus Digital Filters

98 citations


Journal ArticleDOI
TL;DR: In this article, a unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited, including sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation.
Abstract: A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate typical multicarrier communication channels.
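
The Prony/annihilating-filter connection mentioned above is compact enough to show directly. In the noise-free sketch below (frequencies and lengths are arbitrary choices), the frequencies of a K-term exponential sum are recovered as the angles of the roots of a length-(K+1) annihilating filter, the same mechanism that appears in finite-rate-of-innovation sampling and, through the error locator polynomial, in coding.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 64
true_f = np.array([0.11, 0.23, 0.37])                 # normalized frequencies (cycles/sample)
amps = rng.uniform(1, 2, K) * np.exp(1j * rng.uniform(0, 2 * np.pi, K))
n = np.arange(N)
x = (amps[None, :] * np.exp(2j * np.pi * np.outer(n, true_f))).sum(axis=1)

# Annihilating filter h of length K+1: sum_i h[i] * x[n-i] = 0 for all n >= K.
T = np.array([x[m:m + K + 1][::-1] for m in range(N - K)])    # rows [x[n], x[n-1], ..., x[n-K]]
h_tail, *_ = np.linalg.lstsq(T[:, 1:], -T[:, 0], rcond=None)  # solve with h[0] fixed to 1
h = np.concatenate(([1.0 + 0j], h_tail))

# The zeros of H(z) sit at exp(j*2*pi*f_k); their angles give the frequencies.
est_f = np.sort(np.mod(np.angle(np.roots(h)) / (2 * np.pi), 1.0))
print("true:", np.sort(true_f), "estimated:", np.round(est_f, 4))
```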

93 citations


Posted Content
TL;DR: This work presents for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist, and demonstrates by real-time analog experiments that the system is able to maintain reasonable recovery capabilities, while sampling radar signals that require sampling at a rate of about 30 MHz at a total rate of 1 MHz.
Abstract: Traditional radar sensing typically involves matched filtering between the received signal and the shape of the transmitted pulse. Under the classical sampling theorem, this requires that the received signals first be sampled at twice the baseband bandwidth in order to avoid aliasing. The growing demands for target distinction capability and spatial resolution imply significant growth in the bandwidth of the transmitted pulse. Thus, correlation-based radar systems require high sampling rates, and with the large amounts of data sampled also necessitate vast memory capacity. In addition, real-time processing of the data typically results in high power consumption. Recently, new approaches for radar sensing and detection were introduced, based on the Finite Rate of Innovation and Xampling frameworks. These techniques allow a significant reduction in sampling rate, implying potential power savings, while maintaining the system's detection capabilities at high enough SNR. Here we present for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate by real-time analog experiments that our system is able to maintain reasonable detection capabilities while sampling radar signals that would require sampling at a rate of about 30 MHz at a total rate of 1 MHz.

92 citations


Proceedings ArticleDOI
09 Mar 2012
TL;DR: A novel technique that is effective at low sampling rates is introduced, making RF fingerprinting more practical for resource-constrained devices such as mobile transceivers.
Abstract: RF fingerprinting is a technique in which a transmitter is identified from its electromagnetic emission. Most existing RF fingerprinting techniques require high sampling rates. This paper introduces a novel technique that is effective at low sampling rates, which makes RF fingerprinting more practical for resource-constrained devices such as mobile transceivers. The technique is demonstrated with Bluetooth transceivers. A data acquisition system is designed to capture the Bluetooth signals in the 2.4 GHz ISM band. A spectrogram based on the short-time Fourier transform is used to obtain the energy envelope of the instantaneous transient signal, and unique features are extracted from the envelope. The technique adopted for identification of the Bluetooth transmitters shows promising results compared to techniques reported in the literature and accurately classifies the Bluetooth transmitters at low sampling rates.
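
A rough sketch of the envelope-extraction step is given below on a synthetic burst; the sampling rate, burst shape, and the particular envelope features are assumptions for illustration and are not the paper's exact processing chain.

```python
import numpy as np
from scipy.signal import stft

fs = 1_000_000                                  # deliberately modest 1 MS/s capture rate
rng = np.random.default_rng(0)

t = np.arange(0, 0.02, 1 / fs)
env_true = np.exp(-((t - 0.008) / 0.002) ** 2)  # hypothetical transient envelope
x = env_true * np.cos(2 * np.pi * 120e3 * t) + 0.02 * rng.standard_normal(len(t))

# Spectrogram via the short-time Fourier transform, collapsed into an energy envelope
freqs, frame_times, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
energy_env = np.sum(np.abs(Zxx) ** 2, axis=0)
energy_env /= energy_env.max()

# Two illustrative envelope features (hypothetical choices)
above = np.where(energy_env > 0.1)[0]
duration = frame_times[above[-1]] - frame_times[above[0]]
rise_time = frame_times[int(np.argmax(energy_env))] - frame_times[above[0]]
print(f"transient duration ~ {duration*1e3:.2f} ms, rise time ~ {rise_time*1e3:.2f} ms")
```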

78 citations


Journal ArticleDOI
TL;DR: In this article, the authors exploit the linear mixture of sources model, that is the assumption that the multichannel signal is composed of a linear combination of sources, each of them having its own spectral signature, and propose new sampling schemes to considerably decrease the number of measurements needed for the acquisition and source separation.
Abstract: With the proliferation of high-resolution data acquisition systems and the global requirement to lower energy consumption, the development of efficient sensing techniques becomes critical. Recently, compressed sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images from fewer measurements than the traditional Nyquist sensing approach. However, multichannel signals such as hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, that is, the assumption that the multichannel signal is composed of a linear combination of sources, each with its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experiments on our target application, HSI, showing that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.

75 citations


Journal ArticleDOI
TL;DR: The polar form resonant (PFR) controller as discussed by the authors is an extension of the conventional R controller with a nonlinear internal state variable that is transformed into the polar coordinates of the original R controller.
Abstract: This paper presents a new resonant (R) controller. The proposed R controller is input-output equivalent to the conventional R controller but it is internally nonlinear. Its internal state variables are the transformed versions of the conventional R controller into the polar coordinates. It is, thus, given the name of polar form resonant (PFR) controller. While the PFR is totally equivalent to the R controller in continuous-time domain, it offers a much higher structural robustness when it comes to digital implementations. Particularly, it is shown in this paper that the PFR resolves the well-known structural sensitivity of the R controller for applications that need high sampling frequency and have word length limitations. Such a structural sensitivity is conventionally resolved by resorting to the delta-domain realizations. The PFR offers an alternative method to the delta-domain realization technique with even higher degree of robustness and easier stage of adjustment. Moreover, the PFR can easily be enhanced to accommodate frequency variations, a feature that is not easily attainable using the delta-domain method. Feasibility of the PFR controller is verified using a laboratory prototype of a single-phase uninterruptible power supply system operating at high sampling and switching frequencies where the control system is implemented on a field programmable gate array board.
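
The structural sensitivity being addressed can be seen in a direct-form realization of a conventional resonator, whose feedback coefficient 2*cos(w0*Ts) crowds toward 2 as the sampling frequency rises. The sketch below (an assumed 50 Hz resonance and a 16-bit coefficient word, not taken from the paper) shows the realized resonant frequency drifting and finally collapsing to DC under coefficient quantization, which is the effect that delta-domain and polar-form (PFR) realizations avoid.

```python
import numpy as np

f0 = 50.0                                    # target resonant frequency [Hz] (assumption)
for fs in (10e3, 100e3, 1e6):                # progressively higher sampling frequencies
    a1 = 2.0 * np.cos(2.0 * np.pi * f0 / fs)          # direct-form feedback coefficient
    a1_q = np.round(a1 * 2**14) / 2**14               # 16-bit fixed point, 2^-14 step
    f0_real = fs * np.arccos(np.clip(a1_q / 2.0, -1.0, 1.0)) / (2.0 * np.pi)
    print(f"fs = {fs:9.0f} Hz   a1 = {a1:.9f}   realized f0 = {f0_real:6.2f} Hz")
```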

Proceedings ArticleDOI
03 Apr 2012
TL;DR: This ADC overcomes the impact of VCO non-linearity by minimizing the input signal processed by the VCO, which achieves 78.3dB SNDR in a 10MHz signal bandwidth at 600MHz sampling rate, while consuming 16mW power.
Abstract: Voltage-controlled oscillator (VCO) based analog-to-digital conversion presents an attractive means of implementing high-bandwidth oversampling ADCs [1,2]. They exhibit inherent noise-shaping properties and can operate at low supply voltages and high sampling rates [1–3]. However, usage of VCO-based ADCs has been limited due to their nonlinear voltage-to-frequency (V-to-F) transfer characteristic, which severely degrades their distortion performance. Digital calibration is used to combat nonlinearity in an open-loop VCO-based ADC, but 1st-order noise-shaping mandates high OSRs, thus increasing power dissipation in digital circuits, even in a nanometer-scale CMOS process [1]. In [2], nonlinearity is suppressed by embedding the VCO in a ΔΣ loop. While this technique works in principle, the need for large loop gain at high frequencies makes it very difficult to achieve high SNDR. For instance, the suppression level near the band edge is approximately 20dB for a VCO-based 2nd-order modulator operating with an over-sampling ratio (OSR) of 30. Our ADC overcomes the impact of VCO non-linearity by minimizing the input signal processed by the VCO. The prototype achieves 78.3dB SNDR in a 10MHz signal bandwidth at 600MHz sampling rate, while consuming 16mW power.

Journal ArticleDOI
TL;DR: A new method of modeling the signal transmission in underwater acoustic communications when the transmitter and receiver are moving by sampling the transmitter/receiver trajectory at the signal sampling rate and calculating the channel impulse response from the acoustic-field computation is proposed.
Abstract: We propose a new method of modeling the signal transmission in underwater acoustic communications when the transmitter and receiver are moving. The motion-induced channel time variations can be modeled by sampling the transmitter/receiver trajectory at the signal sampling rate and calculating, for each position, the channel impulse response from the acoustic-field computation. This approach, however, would result in high complexity. To reduce the complexity, the channel impulse response is calculated for fewer (waymark) positions and then interpolated by local splines to recover it at the signal sampling rate. To allow higher distances between waymarks and, thus, further reduction in the complexity, the multipath delays are appropriately adjusted before the interpolation. Because, for every time instant, this method only requires local information from the trajectory, the impulse response can recursively be computed, and therefore, the signal transmission can be modeled for arbitrarily long trajectories. An approach for setting the waymark sampling interval is suggested and investigated. The proposed method is verified by comparing the simulated data with data from real ocean experiments. For a low-frequency shallow-water experiment with a moving source that transmits a tone set, we show that the Doppler spectrum of the received tones is similar in the simulation and experiment. For a higher frequency deep-water experiment with a fast-moving source that transmits orthogonal frequency-division multiplexing (OFDM) communication signals, we investigate the detection performance of a receiver and show that it is similar in the simulation and experiment.
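
The waymark idea itself is easy to prototype: compute the geometry-dependent quantities only at a low waymark rate and interpolate them to the signal sampling rate with splines. The sketch below does this for a single propagation delay along an assumed smooth trajectory; the rates, trajectory, and sound speed are illustrative values, and the full method's multipath-delay adjustment is not included.

```python
import numpy as np
from scipy.interpolate import CubicSpline

c = 1500.0                                   # nominal sound speed [m/s]
fs_sig = 48_000                              # signal sampling rate [Hz]
fs_way = 20.0                                # waymark rate [Hz] (assumption)
T = 10.0

t_sig = np.arange(0, T, 1.0 / fs_sig)
t_way = np.arange(0, T + 1.0 / fs_way, 1.0 / fs_way)

# Hypothetical source-receiver range for a slowly moving transmitter
range_m = lambda t: 1000.0 + 3.0 * t + 0.5 * np.sin(2 * np.pi * 0.2 * t)

delay_way = range_m(t_way) / c                      # delay evaluated only at the waymarks
delay_sig = CubicSpline(t_way, delay_way)(t_sig)    # spline-interpolated to signal rate

err = np.max(np.abs(delay_sig - range_m(t_sig) / c))
print(f"max delay interpolation error: {err*1e9:.3f} ns at a {fs_way} Hz waymark rate")
```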

Patent
14 Sep 2012
TL;DR: In this paper, a pixel circuit, disposed at a part where a scanning line and a signal line intersect each other, includes at least an electrooptic element, a drive transistor, a sampling transistor, and a retaining capacitance.
Abstract: A pixel circuit, disposed at a part where a scanning line and a signal line intersect each other, includes at least an electrooptic element, a drive transistor, a sampling transistor, and a retaining capacitance. The drive transistor has a gate connected to an input node, a source connected to an output node, and a drain connected to a predetermined power supply potential and supplies a driving current to the electrooptic element according to a signal potential retained in the retaining capacitance. The electrooptic element has one terminal connected to the output node and another terminal connected to a predetermined potential. The sampling transistor is connected between the input node and the signal line and operates when selected by the scanning line, samples an input signal from the signal line, and retains the input signal in the retaining capacitance. The retaining capacitance is connected to the input node. The pixel circuit further includes a compensating circuit which detects a decrease in the driving current from a side of the output node and feeds back a result of detection to a side of the input node to compensate for a decrease in the driving current, which decrease is attendant on a secular change of the drive transistor.

Journal ArticleDOI
01 Nov 2012
TL;DR: A simple and efficient method for soft shadows from planar area light sources, based on explicit occlusion calculation by raytracing, followed by adaptive image-space filtering, which is accurate and adds minimal overhead and can be performed at real-time frame rates.
Abstract: We develop a simple and efficient method for soft shadows from planar area light sources, based on explicit occlusion calculation by raytracing, followed by adaptive image-space filtering. Since the method is based on Monte Carlo sampling, it is accurate. Since the filtering is in image-space, it adds minimal overhead and can be performed at real-time frame rates. We obtain interactive speeds using the OptiX GPU raytracing framework. Our technical approach derives from recent work on frequency analysis and sheared pixel-light filtering for offline soft shadows. While sample counts can be reduced dramatically, the sheared filtering step is slow, adding minutes of overhead. We develop the theoretical analysis to instead consider axis-aligned filtering, deriving the sampling rates and filter sizes. We also show how the filter size can be reduced as the number of samples increases, ensuring a consistent result that converges to ground truth as in standard Monte Carlo rendering.

Journal ArticleDOI
TL;DR: The Nyquist Folding Receiver (NYFR), an efficient A2I architecture that folds the broadband RF input prior to digitization by a narrowband ADC, enables information recovery with very low computational complexity algorithms in addition to traditional CS reconstruction techniques.
Abstract: Recovering even a small amount of information from a broadband radio frequency (RF) environment using conventional analog-to-digital converter (ADC) technology is computationally complex and presents significant challenges. For sparse or compressible RF environments, an alternate approach to conventional sampling is analog-to-information (A2I) to enable sub-Nyquist rate sampling based on compressive sensing (CS) principles. This paper presents the Nyquist Folding Receiver (NYFR), an efficient A2I architecture that folds the broadband RF input prior to digitization by a narrowband ADC. The folding is achieved by undersampling the RF spectrum with a stream of short pulses that have a phase modulated sampling period. The undersampled signals then fold down into a low pass interpolation filter. The pulse sample time modulation induces a corresponding phase modulation on the received signals that is scaled by an integer modulation index that varies with the Nyquist zone (i.e., fold number), allowing the signals to be separated based on the measured modulation index. Unlike many schemes motivated by CS that randomize the RF prior to digitization, the NYFR substantially preserves signal structure. This enables information recovery with very low computational complexity algorithms in addition to traditional CS reconstruction techniques. The paper includes a comparison of seven other A2I architectures with the NYFR.

25 Sep 2012
TL;DR: A blind procedure for estimating the sampling rate offsets is derived based on the phase drift of the coherence between two signals sampled at different sampling rates and is applicable to speech-absent time segments with slow time-varying interference statistics.
Abstract: Beamforming methods for speech enhancement in wireless acoustic sensor networks (WASNs) have recently attracted the attention of the research community. One of the major obstacles in implementing speech processing algorithms in WASN is the sampling rate offsets between the nodes. As nodes utilize individual clock sources, sampling rate offsets are inevitable and may cause severe performance degradation. In this paper, a blind procedure for estimating the sampling rate offsets is derived. The procedure is applicable to speech-absent time segments with slow time-varying interference statistics. The proposed procedure is based on the phase drift of the coherence between two signals sampled at different sampling rates. Resampling the signals with Lagrange polynomials interpolation method compensates for the sampling rate offsets. An extensive experimental study, utilizing the transfer function generalized sidelobe canceller (TFGSC), exemplifies the problem and its solution.
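
The compensation step named above (resampling with Lagrange polynomial interpolation) is sketched below for a known sampling-rate offset; in the paper the offset is first estimated blindly from the coherence phase drift, which this toy example does not attempt.

```python
import numpy as np

def lagrange_resample(x, ratio, order=3):
    """Read x at fractional positions n*ratio using local Lagrange interpolation."""
    n_out = int((len(x) - order) / ratio)
    y = np.empty(n_out)
    for n in range(n_out):
        t = n * ratio                                         # fractional read position
        i0 = min(max(int(np.floor(t)) - order // 2, 0), len(x) - order - 1)
        idx = np.arange(i0, i0 + order + 1)
        w = np.ones(order + 1)
        for j in range(order + 1):                            # Lagrange basis weights at t
            for m in range(order + 1):
                if m != j:
                    w[j] *= (t - idx[m]) / (idx[j] - idx[m])
        y[n] = np.dot(w, x[idx])
    return y

# Toy check: a tone captured by a clock running 80 ppm fast is pulled back onto the
# nominal grid, after which it matches the reference to within interpolation error.
fs, f0, sro = 16_000, 440.0, 80e-6
t_fast = np.arange(int(0.2 * fs)) / (fs * (1 + sro))
x_fast = np.sin(2 * np.pi * f0 * t_fast)
x_fixed = lagrange_resample(x_fast, 1 + sro)
t_ref = np.arange(len(x_fixed)) / fs
print("max residual:", np.max(np.abs(x_fixed - np.sin(2 * np.pi * f0 * t_ref))))
```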

Journal ArticleDOI
TL;DR: The accuracy and robustness of two compressed sensing algorithms, convex and non-convex (iterative soft thresholding and iteratively re-weighted least squares with a local ℓ0-norm), in application to two- and three-dimensional datasets are discussed.
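
Of the two algorithms named, the convex one (iterative soft thresholding) is short enough to state in full: a gradient step on the data-fit term followed by a soft-threshold shrinkage. The sketch below runs it on a generic small compressed-sensing problem; the sizes and the regularization weight are assumptions, and nothing here is specific to the paper's two- and three-dimensional datasets.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft thresholding for  min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L        # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 10                        # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = ista(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```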

Patent
Junya Matsuno, Tetsuro Itakura
20 Nov 2012
TL;DR: The ADC embodiment includes a sampling unit that samples a differential input signal to output a differential sampled signal having first and second sampled signals, a comparator that compares the first and second amplified outputs, and a correction controller that controls common-mode voltage levels.
Abstract: An embodied ADC includes a sampling unit sampling differential input signal to output differential sampled signal which has first and second sampled signals. The ADC includes a reference signal generator generating first and second reference signals and a preamplifier amplifying the differential sampled signal to output a differential amplification signal having first and second amplified outputs. The preamplifier has a first differential amplifier amplifying the first sampled signal using the first reference signal and a second differential amplifier amplifying the second sampled signal using the second reference signal. The ADC includes a comparator comparing the first and second amplified outputs and a correction controller controlling common-mode voltage levels of the first and second reference signals or common-mode voltage levels of the first and second sampled signals in accordance with the operations of the first and second differential amplifiers.

Patent
13 Jul 2012
TL;DR: In this article, the authors present a system that estimates the signal strength and phase of a self-interference signal, generates a cancellation signal based on this estimate, and then uses the cancellation signal to suppress the selfinterference before sampling received analog signal.
Abstract: Disclosed herein are systems, methods, and computer-readable storage media for enabling improved cancellation of self-interference in full-duplex communications, or the transmitting and receiving of communications in a single frequency band without requiring time, frequency, or code divisions. The system estimates the signal strength and phase of a self-interference signal, generates a cancellation signal based on this estimate, then uses the cancellation signal to suppress the self-interference before sampling received analog signal. After applying the cancellation signal, the system samples and digitizes the remaining analog signal. The digitized signal is then subjected to additional digital cancellation, allowing for extraction of the desired signal.

Journal ArticleDOI
TL;DR: A multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates is proposed and is flexible and exhibits good noise robustness.
Abstract: We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite duration pulses with unknown shapes and time positions. Efficient sampling schemes when either the pulse shape or the locations of the pulses are known have been previously developed. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses without knowledge of the pulse locations have not been derived. The goal in this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.

Journal ArticleDOI
TL;DR: A fourth-order continuous-time RF bandpass ΔΣ ADC has been fabricated in 40 nm CMOS for fs/4 operation around a 2.22 GHz central frequency enabling high oversampling ratios for RF digitization without compromising power-efficient implementation of the DFD.
Abstract: A fourth-order continuous-time RF bandpass ΔΣ ADC has been fabricated in 40 nm CMOS for fs/4 operation around a 2.22 GHz central frequency. A complete system has been implemented on the test chip including the ADC core, the fractional-N PLL with clock generation network, and the digital decimation filters and downconversion (DFD). The quantizers of the ADC are six times interleaved enabling a polyphase structure for the DFD and relaxing clock frequency requirements. This quantization scheme realizes a sampling rate of 8.88 GS/s which is the highest sampling speed for RF bandpass ΔΣ ADCs reported in standard CMOS to date enabling high oversampling ratios for RF digitization without compromising power-efficient implementation of the DFD. Measurements show that the ADC achieves a dynamic range of 48 dB in a band of 80 MHz with an IIP3 of 1 dBm.

Journal ArticleDOI
TL;DR: It is shown that the CS approach is robust to noise and, despite significant spectral overlap, is able to reconstruct high quality spectra from data sets recorded in far less than half the amount of time required for regular sampling.
Abstract: Central to structural studies of biomolecules are multidimensional experiments. These are lengthy to record due to the requirement to sample the full Nyquist grid. Time savings can be achieved through undersampling the indirectly-detected dimensions combined with non-Fourier Transform (FT) processing, provided the experimental signal-to-noise ratio is sufficient. Alternatively, resolution and signal-to-noise can be improved within a given experiment time. However, non-FT based reconstruction of undersampled spectra that encompass a wide signal dynamic range is strongly impeded by the non-linear behaviour of many methods, which further compromises the detection of weak peaks. Here we show, through an application to a larger α-helical membrane protein under crowded spectral conditions, the potential use of compressed sensing (CS) ℓ1-norm minimization to reconstruct undersampled 3D NOESY spectra. Substantial signal overlap and low sensitivity make this a demanding application, which strongly benefits from the improvements in signal-to-noise and resolution per unit time achieved through the undersampling approach. The quality of the reconstructions is assessed under varying conditions. We show that the CS approach is robust to noise and, despite significant spectral overlap, is able to reconstruct high quality spectra from data sets recorded in far less than half the amount of time required for regular sampling.

Journal ArticleDOI
TL;DR: In this paper, the authors present a compliant algorithm that corrects for both clock and laser noise in the case of a rotating, non-breathing LISA constellation, and consider the current optical bench design (split interferometry configuration), i.e. the test mass readout is done by the local oscillators only, instead of reflecting the weak inter-spacecraft light off the test masses.
Abstract: Laser phase noise is the dominant noise source in the on-board measurements of the space-based gravitational wave detector LISA (Laser Interferometer Space Antenna). A well-known data analysis technique, the so-called time-delay interferometry (TDI), provides synthesized data streams free of laser phase noise. At the same time, TDI also removes the next largest noise source: phase fluctuations of the on-board clocks which distort the sampling process. TDI needs precise information about the spacecraft separations, sampling times and differential clock noise between the three spacecraft. These are measured using auxiliary modulations on the laser light. Hence, there is a need for algorithms that account for clock noise removal schemes combined with TDI while preserving the gravitational wave signal. In this paper, we will present the mathematical formulation of the LISA-like data streams and discuss a compliant algorithm that corrects for both clock and laser noise in the case of a rotating, non-breathing LISA constellation. In contrast to previous papers, we consider the current optical bench design (split interferometry configuration), i.e. the test mass readout is done by the local oscillators only, instead of reflecting the weak inter-spacecraft light off the test mass. Furthermore, the absolute order of laser frequencies is taken into account and it can be shown that the TDI equations remain invariant. This is a crucial issue and was, up to now, completely neglected in the analysis.

Journal ArticleDOI
TL;DR: This paper demonstrates a new channel-selective multi-cell processing predistortion technique that compensates for the nonlinearities of multi-carrier transmitters and significantly reduces the minimum sampling rate requirements of analog-to-digital and digital- to-analog converters.
Abstract: This paper demonstrates a new channel-selective multi-cell processing predistortion technique that compensates for the nonlinearities of multi-carrier transmitters. The proposed technique uses independent processing cells to compensate for the intra-band and inter-band distortions of nonlinear transmitters. This frequency-selective feature of the proposed technique significantly reduces the minimum sampling rate requirements of analog-to-digital and digital-to-analog converters, which are a critical issue for conventional digital predistortion (DPD) techniques dealing with wideband signals. The proposed technique was evaluated with four-carrier (1001) and six-carrier (100001) WCDMA signals, using a nonlinear 10-Watt power amplifier. The performance of the proposed technique was compared with look-up table, multi-branch and recently proposed frequency-selective DPDs, in terms of adjacent-channel power ratios (ACPRs) and sampling rate requirements. The proposed technique improved the ACPR and the carrier-to-intermodulation power ratio (CIMPR) of the 1001 WCDMA signal by more than 13 dB and 10 dB, respectively.

Journal ArticleDOI
TL;DR: A sub-Nyquist rate data acquisition front-end based on compressive sensing theory that randomizes a sparse input signal by mixing it with pseudo-random number sequences and exploits the signal sparsity to reconstruct the signal with high fidelity.
Abstract: This paper presents a sub-Nyquist rate data acquisition front-end based on compressive sensing theory. The front-end randomizes a sparse input signal by mixing it with pseudo-random number sequences, followed by analog-to-digital converter sampling at sub-Nyquist rate. The signal is then reconstructed using an L1-based optimization algorithm that exploits the signal sparsity to reconstruct the signal with high fidelity. The reconstruction is based on a priori signal model information, such as a multi-tone frequency-sparse model which matches the input signal frequency support. Wideband multi-tone test signals with 4% sparsity in 5~500 MHz band were used to experimentally verify the front-end performance. Single-tone and multi-tone tests show maximum signal to noise and distortion ratios of 40 dB and 30 dB, respectively, with an equivalent sampling rate of 1 GS/s. The analog front-end was fabricated in a 90 nm complementary metal-oxide-semiconductor process and consumes 55 mW. The front-end core occupies 0.93 mm2.

Journal ArticleDOI
TL;DR: The printed volume hologram has been able to reconstruct a monochrome 3-D image by white light illumination, and realized the full-parallax image.
Abstract: A computer-generated hologram (CGH) is well-known to reconstruct 3-D images faithfully, and several CGH printers are reported. Since those printers can only output a transmission hologram, the large-scale optical system is necessary to reconstruct the full-parallax, full-color image. As a method of a simple reconstruction, it is only necessary to use a volume reflection hologram. However, the making of a volume hologram needs to transfer the CGH by means of an optical system. On the other hand, there are printers that output volume type holographic stereograms with the full-parallax, full-color image. However, the reconstructed image whose depth is large gets blurred due to the insufficient sampling rays of a 3-D object. The authors propose the volume hologram printer to record the wavefront of a 3-D object. By transferring the CGH that is displayed on a liquid crystal on silicon, the proposed printer can output the volume hologram. In addition, the large volume hologram can be printed by transferring plural CGH that recorded the partial 3-D object in turn. As a result, the printed volume hologram has been able to reconstruct a monochrome 3-D image by white light illumination, and realized the full-parallax image. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). (DOI: 10.1117/1.OE.51.7.075802)

Proceedings ArticleDOI
05 Nov 2012
TL;DR: A method is presented based on comparing the arrival times of two chirp signals and approximating the relation between this time difference and the Doppler shift ratio, which demonstrates improvement compared to commonly used benchmark methods in terms of accuracy of the Doppler shift estimation at near-Nyquist baseband sampling rates.
Abstract: In this paper, we consider the problem of estimating the coarse Doppler shift ratio for underwater acoustic communication (UWAC). Since underwater the constant motion of nodes results in Doppler shifts that significantly distort received signals, estimating the Doppler shift and compensating for it is required for all UWAC applications. Different than for terrestrial radio-frequency where the Doppler effect is modeled by a frequency shift, due to the slow sound speed in water, the effect of transceiver motion on the duration of the symbol cannot be neglected. Furthermore, since the carrier frequency and the signal bandwidth are of the same order, UWAC signals are considered wideband and Doppler-induced frequency shifts cannot be assumed fixed throughout the signal bandwidth. Considering these challenges, we present a method for Doppler-shift estimation based on comparing the arrival times of two chirp signals and approximating the relation between this time difference and the Doppler shift ratio. This analysis also provides an interesting insight about the resilience of chirp signals to Doppler shift. Our simulation results demonstrate improvement compared to commonly used benchmark methods in terms of accuracy of the Doppler shift estimation at near-Nyquist baseband sampling rates.
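
A self-contained sketch of the two-chirp idea follows, under the simplifying assumption that motion acts as a pure time scaling of the received frame; all waveform parameters are invented for illustration. The separation between the two matched-filter peaks is compressed by the factor (1 + a), from which the Doppler ratio is read off.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs, Tc = 48_000, 0.05                         # assumed baseband rate and chirp duration
t_c = np.arange(0, Tc, 1 / fs)
probe = chirp(t_c, f0=2_000, t1=Tc, f1=8_000) # assumed LFM probe waveform

gap_tx = 0.5                                  # known transmitted separation [s]
frame = np.zeros(int(1.0 * fs))
frame[:len(probe)] += probe
frame[int(gap_tx * fs):int(gap_tx * fs) + len(probe)] += probe

a_true = 1e-3                                 # Doppler ratio (~1.5 m/s at c = 1500 m/s)
t = np.arange(len(frame)) / fs
rx = np.interp(t * (1 + a_true), t, frame)    # received frame, compressed in time
rx += 0.05 * np.random.default_rng(0).standard_normal(len(rx))

mf = np.abs(correlate(rx, probe, mode="same"))             # matched filter
i1 = int(np.argmax(mf))
mf_masked = mf.copy()
mf_masked[max(0, i1 - len(probe)):i1 + len(probe)] = 0.0   # suppress the first peak
i2 = int(np.argmax(mf_masked))

gap_rx = abs(i2 - i1) / fs
a_hat = gap_tx / gap_rx - 1.0                 # received gap is shorter by (1 + a)
print(f"true a = {a_true:.2e}, estimated a = {a_hat:.2e}")
```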

Patent
09 May 2012
TL;DR: In this article, a self-adaptive voice endpoint detection method was proposed for automatic caption generating system, in particular to a selfadaptive VO detection method for continuous voice under the condition that the background noise is changed frequently so as to improve the VO endpoint detection efficiency under a complex noise background.
Abstract: The invention relates to voice detection technology in an automatic caption generating system, in particular to a self-adaptive voice endpoint detection method. The method comprises the following steps: dividing an audio sampling sequence into frames of fixed length to form a frame sequence; extracting three audio characteristic parameters, namely short-time energy, short-time zero-crossing rate and short-time information entropy, for the data of each frame; calculating short-time energy frequency values for each frame from these audio characteristic parameters to form a short-time energy frequency value sequence; analyzing the short-time energy frequency value sequence from the first frame onward to find a pair of voice starting and ending points; analyzing the background noise and, if it has changed, recalculating the audio characteristic parameters of the background noise and updating the short-time energy frequency value sequence; and repeating these steps until detection is finished. The method can carry out voice endpoint detection for continuous speech when the background noise changes frequently, thereby improving voice endpoint detection efficiency against a complex noise background.
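
A minimal sketch of the three per-frame features named above is given here, with spectral entropy standing in for the short-time information entropy (the patent does not spell out its exact definition); the frame length, hop, and test signal are assumptions.

```python
import numpy as np

def frame_features(x, frame_len=400, hop=160):
    """Short-time energy, zero-crossing rate, and spectral entropy for each frame."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len]
        energy = float(np.sum(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.signbit(frame).astype(int)))))
        spec = np.abs(np.fft.rfft(frame)) ** 2
        p = spec / (np.sum(spec) + 1e-12)
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        feats.append((energy, zcr, entropy))
    return np.array(feats)

# Toy usage: half a second of low-level noise followed by half a second of a 220 Hz tone
fs = 16_000
rng = np.random.default_rng(0)
audio = np.concatenate([0.01 * rng.standard_normal(fs // 2),
                        0.5 * np.sin(2 * np.pi * 220 * np.arange(fs // 2) / fs)])
feats = frame_features(audio)
print("first frames (noise):\n", np.round(feats[:2], 3))
print("last frames (tone):\n", np.round(feats[-2:], 3))
```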

Journal ArticleDOI
TL;DR: A 1.3-megapixel CMOS image sensor with digital correlated double sampling and 17-b column-parallel two-stage folding-integration/cyclic analog-to-digital converters (ADCs) is developed and is demonstrated at the video rate operation of 30 Hz by the new architecture of the proposed ADCs and the high-performance peripheral logic parts using low-voltage differential signaling circuit.
Abstract: A 1.3-megapixel CMOS image sensor (CIS) with digital correlated double sampling and 17-b column-parallel two-stage folding-integration/cyclic analog-to-digital converters (ADCs) is developed. The image sensor has 0.021 e-rms vertical fixed pattern noise, 1.2 e-rms pixel temporal noise, and 85.0-dB dynamic range using 32 samplings in the folding-integration ADC mode. Despite the large number of samplings (32 times), the prototype image sensor is demonstrated at video-rate operation of 30 Hz thanks to the new architecture of the proposed ADCs and the high-performance peripheral logic (or digital) parts using a low-voltage differential signaling circuit. The developed 17-b CIS has no visible quantization noise at a very low light level of 0.01 lx because of the high grayscale resolution, where 1 LSB = 0.1 e-. The implemented CIS, using 0.18-μm technology, has a sensitivity of 20 V/lx·s and a pixel conversion gain of 82 μV/e-.