
Showing papers on "Sampling (signal processing)" published in 2019


Proceedings ArticleDOI
03 Jun 2019
TL;DR: This paper generalizes the previous study by investigating a problem of sampling a stationary Gauss-Markov process named the Ornstein-Uhlenbeck (OU) process, where it aims to find useful insights for solving the problems of sampling more general signals.
Abstract: Recently, a connection between the age of information and remote estimation error was found in a sampling problem of Wiener processes: If the sampler has no knowledge of the signal being sampled, the optimal sampling strategy is to minimize the age of information; however, by exploiting causal knowledge of the signal values, it is possible to achieve a smaller estimation error. In this paper, we generalize the previous study by investigating a problem of sampling a stationary Gauss-Markov process named the Ornstein-Uhlenbeck (OU) process, where we aim to find useful insights for solving the problems of sampling more general signals. The optimal sampling problem is formulated as a constrained continuous-time Markov decision process (MDP) with an uncountable state space. We provide an exact solution to this MDP: The optimal sampling policy is a threshold policy on instantaneous estimation error and the threshold is found. Further, if the sampler has no knowledge of the OU process, the optimal sampling problem reduces to an MDP for minimizing a nonlinear age of information metric and the age-optimal sampling policy is a threshold policy on expected estimation error and the threshold is found. In both problems, the optimal sampling policies can be computed by bisection search, and the curse of dimensionality is circumvented. These results hold for (i) general service time distributions of the queueing server and (ii) sampling problems both with and without a sampling rate constraint. Numerical results are provided to compare different sampling policies.
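The paper reports that the optimal threshold can be computed by bisection search. As a rough illustration of that computational step only (not the paper's actual fixed-point condition, which would replace the toy function below), a generic bisection routine might look like this in Python:

```python
import numpy as np

def bisection_threshold(g, lo, hi, tol=1e-9, max_iter=200):
    """Find v in [lo, hi] with g(v) = 0 by bisection.

    g is assumed monotone on [lo, hi] with a sign change; in the paper's
    setting it would encode the fixed-point condition characterizing the
    optimal error threshold (a placeholder here)."""
    g_lo = g(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol or (hi - lo) < tol:
            return mid
        if np.sign(g_mid) == np.sign(g_lo):
            lo, g_lo = mid, g_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with a toy stand-in condition (not the paper's), solved for illustration.
threshold = bisection_threshold(lambda v: v**3 - 2.0, 0.0, 2.0)
```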

89 citations


Journal ArticleDOI
23 Sep 2019-Sensors
TL;DR: This work presents a new compressive imaging approach by using a strategy they call cake-cutting, which can optimally reorder the deterministic Hadamard basis and is capable of recovering images of large pixel-size with dramatically reduced sampling ratios, realizing super sub-Nyquist sampling and significantly decreasing the acquisition time.
Abstract: Single-pixel imaging via compressed sensing can reconstruct high-quality images from a few linear random measurements of an object known a priori to be sparse or compressible, by using a point/bucket detector without spatial resolution. Nevertheless, random measurements remain blind to the scene structure, limiting the sampling ratios and leading to a harsh trade-off between the acquisition time and the spatial resolution. Here, we present a new compressive imaging approach using a strategy we call cake-cutting, which optimally reorders the deterministic Hadamard basis. The proposed method is capable of recovering images of large pixel-size with dramatically reduced sampling ratios, realizing super sub-Nyquist sampling and significantly decreasing the acquisition time. Furthermore, this sorting strategy can easily be combined with the structured characteristic of the Hadamard matrix to accelerate the computational process and simultaneously reduce the memory consumption of matrix storage. With the help of differential modulation/measurement technology, we demonstrate this method with a single-photon single-pixel camera under ultra-weak light conditions and retrieve clear images through partially obscuring scenes. Thus, this method complements the present single-pixel imaging approaches and can be applied to many fields.
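For intuition, here is a hedged sketch of how a block-count ("cake-cutting"-style) ordering of the Hadamard basis could be implemented with NumPy/SciPy; the exact ordering rule used in the paper may differ, and `cake_cutting_order` is a name invented for this illustration:

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.ndimage import label

def cake_cutting_order(n_side):
    """Order the rows of an (n_side**2 x n_side**2) Hadamard matrix by the number
    of connected blocks each row contains when reshaped to n_side x n_side.
    This is only a sketch of the block-count idea; the paper's rule may differ."""
    H = hadamard(n_side * n_side)                 # entries are +1 / -1
    block_counts = []
    for row in H:
        pattern = row.reshape(n_side, n_side)
        n_pos = label(pattern > 0)[1]             # number of connected +1 regions
        n_neg = label(pattern < 0)[1]             # number of connected -1 regions
        block_counts.append(n_pos + n_neg)
    order = np.argsort(block_counts)              # few blocks first ("low spatial frequency")
    return H[order], np.asarray(block_counts)[order]

H_sorted, counts = cake_cutting_order(8)          # 64x64 Hadamard, 8x8 patterns
```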

79 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach for target parameter estimation in cases where one-bit analog-to-digital-converters, also known as signal comparators with time-varying thresholds, are employed to sample the received radar signal instead of high-resolution ADCs.
Abstract: Target parameter estimation in active sensing, and particularly radar signal processing, is a long-standing problem that has been studied extensively. In this paper, we propose a novel approach for target parameter estimation in cases where one-bit analog-to-digital-converters (ADCs), also known as signal comparators with time-varying thresholds, are employed to sample the received radar signal instead of high-resolution ADCs. The considered problem has potential applications in the design of inexpensive radar and sensing devices in civilian applications, and can likely pave the way for future radar systems employing low-resolution ADCs for faster sampling and high-resolution target determination. We formulate the target estimation as a multivariate weighted-least-squares optimization problem that can be solved in a cyclic manner. Numerical results are provided to exhibit the effectiveness of the proposed algorithms.
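The one-bit measurement model described above (sign comparisons against known time-varying thresholds) is simple to simulate. The sketch below illustrates only the sampling model, not the paper's cyclic weighted-least-squares estimator; the signal and threshold choices are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_samples(x, thresholds, noise_std=0.1):
    """One-bit measurements: compare the (noisy) received signal against
    known time-varying thresholds, keeping only the sign."""
    noisy = x + noise_std * rng.standard_normal(x.shape)
    return np.where(noisy >= thresholds, 1.0, -1.0)

# Toy received radar return: two sinusoids standing in for target echoes.
t = np.arange(512) / 512
x = 0.8 * np.sin(2 * np.pi * 11 * t) + 0.3 * np.sin(2 * np.pi * 37 * t)
thresholds = rng.uniform(-1.0, 1.0, size=x.shape)   # known, time-varying thresholds
y = one_bit_samples(x, thresholds)
```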

57 citations


Posted Content
TL;DR: This paper reviews, classifies, and compares different grasp sampling strategies; the evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to assess the quality and robustness of the corresponding parallel-jaw grasps.
Abstract: Robot grasping is often formulated as a learning problem. With the increasing speed and quality of physics simulations, generating large-scale grasping data sets that feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these data sets. In this paper, we review, classify, and compare different grasp sampling strategies. Our evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to evaluate the quality and robustness of the corresponding parallel-jaw grasps. Specifically, we consider more than 1 billion grasps for each of the 21 objects from the YCB data set. This dense data set lets us evaluate existing sampling schemes w.r.t. their bias and efficiency. Our experiments show that some popular sampling schemes contain significant bias and do not cover all possible ways an object can be grasped.

49 citations


Journal ArticleDOI
TL;DR: A new time-frequency based PAC (t-f PAC) measure is proposed that is more robust to varying signal parameters and provides a more accurate measure of coupling strength.
Abstract: Oscillatory activity in the brain has been associated with a wide variety of cognitive processes including decision making, feedback processing, and working memory. The high temporal resolution provided by electroencephalography (EEG) enables the study of variation of oscillatory power and coupling across time. Various forms of neural synchrony across frequency bands have been suggested as the mechanism underlying neural binding. Recently, a considerable amount of work has focused on phase-amplitude coupling (PAC), a form of cross-frequency coupling where the amplitude of a high frequency signal is modulated by the phase of low frequency oscillations. The existing methods for assessing PAC have some limitations, including limited frequency resolution and sensitivity to noise, data length, and sampling rate due to the inherent dependence on bandpass filtering. In this paper, we propose a new time-frequency based PAC (t-f PAC) measure that can address these issues. The proposed method relies on a complex time-frequency distribution, known as the Reduced Interference Distribution (RID)-Rihaczek distribution, to estimate both the phase and the envelope of low and high frequency oscillations, respectively. As such, it does not rely on bandpass filtering and possesses some of the desirable properties of time-frequency distributions such as high frequency resolution. The proposed technique is first evaluated for simulated data and then applied to an EEG speeded reaction task dataset. The results illustrate that the proposed time-frequency based PAC is more robust to varying signal parameters and provides a more accurate measure of coupling strength.
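For context, the conventional bandpass-filter/Hilbert-transform approach whose limitations the paper addresses can be sketched in a few lines (mean-vector-length coupling index); this is a baseline illustration, not the proposed RID-Rihaczek-based t-f PAC:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def conventional_pac(x, fs, low_band, high_band, order=4):
    """Classical filter/Hilbert-based PAC (mean-vector-length index).
    This is the bandpass-filtering approach whose limitations the paper
    addresses, shown here only as a baseline sketch."""
    def bandpass(sig, band):
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase_low = np.angle(hilbert(bandpass(x, low_band)))       # phase of the slow rhythm
    amp_high = np.abs(hilbert(bandpass(x, high_band)))         # envelope of the fast rhythm
    return np.abs(np.mean(amp_high * np.exp(1j * phase_low)))  # coupling strength

# Toy example: gamma bursts locked to theta phase.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)
mi = conventional_pac(theta + 0.5 * gamma, fs, (4, 8), (50, 70))
```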

48 citations


Patent
TL;DR: The quality of the embeddings produced by the self-supervised learning models is evaluated, and it is shown that they can be re-used for a variety of downstream tasks, and for some tasks even approach the performance of fully supervised models of similar size.
Abstract: Systems and methods for training a machine-learned model are provided. A method can include obtaining an unlabeled audio signal, sampling the unlabeled audio signal to select one or more sampled slices, inputting the one or more sampled slices into a machine-learned model, receiving, as an output of the machine-learned model, one or more determined characteristics associated with the audio signal, determining a loss function for the machine-learned model based at least in part on a difference between the one or more determined characteristics and one or more corresponding ground truth characteristics of the audio signal, and training the machine-learned model from end to end based at least in part on the loss function. The one or more determined characteristics can include one or more reconstructed portions of the audio signal temporally adjacent to the one or more sampled slices or an estimated distance between two sampled slices.

48 citations


Journal ArticleDOI
TL;DR: In this paper, a new band pass filter design method based on time frequency (TF) analysis is proposed, where a function named "max-TF" is constructed from the TF energy distribution of the de-chirped signal, reflecting the changes of the maximum signal component amplitude with respect to time.
Abstract: The interrupted-sampling repeater jamming (ISRJ) is coherent with an emitted signal, and significantly limits a radar's ability to detect, track and recognise targets. This study focuses on ISRJ suppression for linear frequency modulation radars. A new band pass filter design method based on time frequency (TF) analysis is proposed. A function named ‘max-TF’ is constructed from the TF energy distribution of the de-chirped signal, reflecting the changes of the maximum signal component amplitude with respect to time. Based on the ‘max-TF’ function, jamming-free signal segments are automatically and accurately extracted to generate the filter, which is subsequently smoothed. After filtering, jamming signal peaks in pulse compression results are suppressed while real targets are retained. Compared with the state-of-the-art filtering method, the proposed method has improved jamming suppression ability and an extended feasible scope of signal-to-noise ratio and jamming-to-signal ratio conditions. Simulations have validated the improvements and demonstrated how the parameters affect performance. The average signal-to-jamming improvement and average radar detection rate of the proposed method are about 7.4 dB and 23% higher than those of the state-of-the-art filtering method, respectively. Directions for further work are outlined.
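As a loose illustration of the ‘max-TF’ idea (tracking the strongest time-frequency component over time and flagging low-amplitude segments as jamming-free), one might sketch it as below; the thresholding rule and STFT parameters are placeholders, not the paper's actual design:

```python
import numpy as np
from scipy.signal import stft

def max_tf_segments(x, fs, amp_ratio=0.5, nperseg=128):
    """Sketch of the 'max-TF' idea: track the strongest TF component over time
    and flag time segments whose peak amplitude stays below a threshold as
    (presumably) jamming-free. The threshold rule here is a simple placeholder."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    max_tf = np.abs(Z).max(axis=0)                # max component amplitude vs. time
    threshold = amp_ratio * max_tf.max()          # ISRJ segments are much stronger
    jamming_free = max_tf < threshold             # boolean mask per STFT frame
    return t, max_tf, jamming_free
```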

44 citations


Journal ArticleDOI
TL;DR: Modulo folding does not degrade the signal, provided that the sampling rate exceeds the Nyquist rate; this claim is proved by establishing a connection between the problem of recovering a discrete-time signal from its modulo-reduced version and the problem of predicting the next sample of a discrete-time signal from its past.
Abstract: We consider the problem of recovering a continuous-time bandlimited signal from the discrete-time signal obtained from sampling it every T_s seconds and reducing the result modulo Δ, for some Δ > 0. For Δ = ∞, the celebrated Shannon-Nyquist sampling theorem guarantees that perfect recovery is possible, provided that the sampling rate 1/T_s exceeds the so-called Nyquist rate. Recent work by Bhandari et al. has shown that for any Δ > 0 perfect reconstruction is still possible if the sampling rate exceeds the Nyquist rate by a factor of 2πe. In this letter, we improve upon this result and show that for finite-energy signals, perfect recovery is possible for any Δ > 0 and any sampling rate above the Nyquist rate. Thus, modulo folding does not degrade the signal, provided that the sampling rate exceeds the Nyquist rate. This claim is proved by establishing a connection between the recovery problem of a discrete-time signal from its modulo-reduced version and the problem of predicting the next sample of a discrete-time signal from its past, and leveraging the fact that for a bandlimited signal the prediction error can be made arbitrarily small.
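The prediction-based recovery argument is easy to illustrate numerically: if consecutive samples of the (sufficiently oversampled) signal differ by less than Δ/2, the previous recovered sample already predicts the next one well enough to resolve the modulo ambiguity. A minimal sketch, using the simplest possible predictor rather than the letter's construction:

```python
import numpy as np

def mod_delta(x, delta):
    """Centered modulo reduction to [-delta/2, delta/2)."""
    return (x + delta / 2) % delta - delta / 2

def unfold(y, delta):
    """Prediction-based recovery sketch: predict each sample from the previously
    recovered one (the simplest predictor) and pick the integer multiple of delta
    that brings the folded sample closest to the prediction. Works when consecutive
    samples differ by less than delta/2, i.e. when the prediction error is small."""
    x_hat = np.empty_like(y)
    x_hat[0] = y[0]                               # assume the first sample is unfolded
    for n in range(1, len(y)):
        k = np.round((x_hat[n - 1] - y[n]) / delta)
        x_hat[n] = y[n] + k * delta
    return x_hat

# Toy signal, heavily oversampled relative to its variation.
t = np.linspace(0, 1, 2000)
x = 3.0 * np.sin(2 * np.pi * 3 * t)
delta = 1.0
y = mod_delta(x, delta)
x_rec = unfold(y, delta)
err = np.max(np.abs(x_rec - x))                   # ~0 for this configuration
```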

37 citations


Journal ArticleDOI
TL;DR: A simultaneous spectral sparse (3S) model is proposed to reinforce the structural similarity across different bands and develop an efficient computational reconstruction algorithm to recover the HSHS video.
Abstract: We propose a novel hybrid imaging system to acquire 4D high-speed hyperspectral (HSHS) videos with high spatial and spectral resolution. The proposed system consists of two branches: one branch performs Nyquist sampling in the temporal dimension while integrating the whole spectrum, resulting in a high-frame-rate panchromatic video; the other branch performs compressive sampling in the spectral dimension with longer exposures, resulting in a low-frame-rate hyperspectral video. Owing to the high light throughput and complementary sampling, these two branches jointly provide reliable measurements for recovering the underlying HSHS video. Moreover, the panchromatic video can be used to learn an over-complete 3D dictionary to represent each band-wise video sparsely, thanks to the inherent structural similarity in the spectral dimension. Based on the joint measurements and the self-adaptive dictionary, we further propose a simultaneous spectral sparse (3S) model to reinforce the structural similarity across different bands and develop an efficient computational reconstruction algorithm to recover the HSHS video. Both simulation and hardware experiments validate the effectiveness of the proposed approach. To the best of our knowledge, this is the first time that hyperspectral videos can be acquired at a frame rate up to 100 fps with commodity optical elements and under ordinary indoor illumination.

37 citations


Journal ArticleDOI
TL;DR: To obtain better cancellation of the PLI, a design approach that generates an adaptive notch filter (ANF) with sharp resolution is proposed; it outperforms conventional notch filters and better preserves the QRS-complex features in the filtered signal.
Abstract: Noise cancellation in electrocardiogram (ECG) signals is essential for distinguishing signal features masked by noise. Power line interference (PLI) is the main source of noise in most bio-electric signals. Digital notch filters can be used to suppress the PLI in ECG signals. However, transient interference and ringing effects occur, especially when the digitization of the PLI does not meet the condition of full-period sampling. In this paper, to obtain better cancellation of the PLI, a design approach that generates an adaptive notch filter (ANF) with sharp resolution is proposed. The proposed method is algorithmically concise and achieves a more comprehensive reduction of the PLI. It requires only one fast Fourier transform (FFT) of the input signal. A spectrum correction method, based on information from the FFT spectrum of the corrupted signal, is utilized to estimate the harmonic parameters of the PLI. The information of a few main-lobe spectral bins in the FFT spectrum is merged so that a compensation signal can be synthesized. By subtracting the compensation signal from the original measurement, the PLI within the investigated signal can be substantially reduced. A distinguishing advantage of the proposed ANF is that no parameters need to be specified, making the algorithm easier to implement. The proposed ANF outperforms conventional notch filters because it not only alleviates the undesirable effects but also better preserves the QRS-complex features in the filtered signal.
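A heavily simplified sketch of the FFT-estimate-and-subtract idea is shown below; it uses only the single bin nearest the PLI frequency, whereas the paper's spectrum-correction step merges several main-lobe bins to cope with non-synchronous sampling:

```python
import numpy as np

def remove_pli(sig, fs, f_pli=50.0):
    """Simplified sketch of FFT-based power-line-interference cancellation:
    estimate the interfering tone from the FFT bin nearest f_pli, synthesize it,
    and subtract. (The paper additionally merges several main-lobe bins -
    spectrum correction - to handle non-synchronous sampling; omitted here.)"""
    n = len(sig)
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_pli)))     # bin closest to the PLI frequency
    amp = 2.0 * np.abs(spec[k]) / n
    phase = np.angle(spec[k])
    t = np.arange(n) / fs
    pli_est = amp * np.cos(2 * np.pi * freqs[k] * t + phase)
    return sig - pli_est

# Toy example: synthetic "ECG-like" baseline plus 50 Hz interference.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.3 * t)
corrupted = ecg_like + 0.4 * np.cos(2 * np.pi * 50 * t + 0.7)
cleaned = remove_pli(corrupted, fs)
```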

37 citations


Journal ArticleDOI
TL;DR: An all-digital background calibration technique for the time skew mismatch in time-interleaved ADCs (TIADCs) and a corresponding filter design method is proposed, which is tailored to meet the target performance and yield.
Abstract: This paper presents an all-digital background calibration technique for the time skew mismatch in time-interleaved ADCs (TIADCs). The technique jointly estimates all of the time skew values by processing the outputs of a bank of correlators. A low-complexity sampling sequence intervention technique, suitable for successive approximation register (SAR) ADC architectures, is proposed to overcome the limitations associated with blind estimation. A two-stage digital correction mechanism based on the Taylor series is proposed to satisfy the target high-precision correction. A quantitative study is performed regarding the requirements imposed on the digital correction circuit in order to satisfy the target performance and yield, and a corresponding filter design method is proposed, which is tailored to meet these requirements. Mitchell’s logarithmic multiplier is adopted for the implementation of the principal multipliers in both the estimation and correction mechanisms, leading to a 25% area and power reduction in the estimation circuit. The proposed calibration is synthesized using a TSMC 28-nm HPL process targeting a 2.4-GHz sampling frequency for an eight-sub-ADC system. The calibration block occupies 0.03 mm² and consumes 11 mW. The algorithm maintains the SNDR above 65 dB for a sinusoidal input within the target bandwidth.
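As a rough illustration of Taylor-series-based skew correction (a one-term version, not the paper's two-stage mechanism with designed filters), a first-order correction might look like this:

```python
import numpy as np

def skew_correct_first_order(x, skew_samples):
    """First-order Taylor-series correction of a timing-skewed sample stream:
    x_corr[n] ~= x[n] - skew * x'[n], with the derivative estimated by a simple
    central difference. A simplified stand-in for the paper's two-stage,
    filter-based correction; skew_samples is the skew in units of the sample period."""
    deriv = np.zeros_like(x, dtype=float)
    deriv[1:-1] = 0.5 * (x[2:] - x[:-2])          # central difference, dx/dn
    return x - skew_samples * deriv

# Toy check: a sinusoid sampled at n + skew is pulled back toward the ideal grid n.
n = np.arange(256)
skew = 0.05                                       # 5% of a sample period
f0 = 0.03                                         # cycles per sample (well below Nyquist)
x_skewed = np.sin(2 * np.pi * f0 * (n + skew))
x_corrected = skew_correct_first_order(x_skewed, skew)
residual = np.max(np.abs(x_corrected - np.sin(2 * np.pi * f0 * n)))
```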

Journal ArticleDOI
TL;DR: A multi-rate multi-sensor data-fusion-based PSDSE framework is proposed to utilize measurements from sensors with two different sampling rates; it successfully tracks the dynamic states during transient events such as faults.
Abstract: With the increasing availability of sensors, power system dynamic state estimation (PSDSE) is going to play a critical role in the reliable and efficient operation of power systems. The real-time measurements in today’s power grid are obtained through various types of sensors having different sampling rates, e.g., the traditional SCADA systems with low sampling rates (generally 0.5–2 samples per second), and different groups of phasor measurement units having high sampling rates (usually 30–60 samples per second). We propose a multi-rate multi-sensor data fusion-based PSDSE framework to utilize the measurements coming from sensors with two different sampling rates. The continuous time-domain nonlinear dynamical and measurement equations are discretized at appropriate sampling periods to obtain two discrete models. Two separate estimators are developed using these models. State information at the intermediate time steps of the estimator with the coarser sampling period is evaluated using model-based prediction. These two estimations are optimally combined, or fused, using the Bar–Shalom–Campo formula. The proposed algorithm tracks the dynamic states successfully during transient events such as faults. The method is demonstrated using the standard IEEE-9, 39, 57, and 118 bus systems. The fusion-based state estimator is shown to perform better than the individual state estimators.
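The Bar–Shalom–Campo fusion step mentioned above combines two estimates with known covariances (and, in general, a cross-covariance). A self-contained sketch of that formula, with toy numbers standing in for the two estimators' outputs:

```python
import numpy as np

def bar_shalom_campo(x1, P1, x2, P2, P12=None):
    """Bar-Shalom-Campo fusion of two (possibly correlated) state estimates.
    x1, x2: estimates; P1, P2: their covariances; P12: cross-covariance
    (taken as zero if the estimation errors are assumed uncorrelated)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    if P12 is None:
        P12 = np.zeros_like(P1)
    P21 = P12.T
    S = P1 + P2 - P12 - P21
    K = (P1 - P12) @ np.linalg.inv(S)
    x_fused = x1 + K @ (x2 - x1)
    P_fused = P1 - K @ (P1 - P21)
    return x_fused, P_fused

# Toy two-sensor example (uncorrelated errors for simplicity).
x_fast = np.array([1.02, 0.48])                   # e.g., PMU-rate estimator output
x_slow = np.array([0.98, 0.52])                   # e.g., SCADA-rate estimator output
P_fast = np.diag([0.01, 0.02])
P_slow = np.diag([0.04, 0.03])
x_f, P_f = bar_shalom_campo(x_fast, P_fast, x_slow, P_slow)
```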

Journal ArticleDOI
TL;DR: A new algorithm is presented to image ground moving targets in a synthetic aperture radar (SAR) system based on range frequency reversal transform-fractional Fourier transform (RFRT-FrFT), which can significantly decrease the computational complexity in target envelope migration elimination.
Abstract: In this paper, a new algorithm is presented to image ground moving targets in a synthetic aperture radar (SAR) system based on range frequency reversal transform-fractional Fourier transform (RFRT-FrFT). In this algorithm, a range compressed signal is initially transformed into the range frequency domain and then RFRT is proposed to directly compensate the range migration via multiplying the signal in the range frequency domain by its reversed data according to the equal interval sampling of range frequency variable, which can significantly decrease the computational complexity in target envelope migration elimination. Then, FrFT is applied to accomplish the target motion parameter estimation after range migration alignment. Finally, a ground moving target is well focused after motion compensation. The effectiveness of the proposed algorithm is validated by both simulated and real SAR data.

Journal ArticleDOI
TL;DR: A novel technique based on a second-order sequence filter and a proportional resonant controller is proposed for control of a universal active power filter integrated with a PV array system (UAPF-PV), with good accuracy in extracting the fundamental active component of distorted and unbalanced load currents.
Abstract: In this paper, a novel technique based on a second-order sequence filter and a proportional resonant controller is proposed for the control of a universal active power filter integrated with a photovoltaic array (UAPF-PV). Using a second-order sequence filter and sampling it at zero-crossing instant of the load voltage, the active component of the distorted load current is estimated, which is used to generate a reference signal for a shunt active filter. The proposed method has good accuracy in extracting a fundamental active component of distorted and unbalanced load currents with reduced mathematical computations. Along with power quality improvement, the system also generates clean energy through the PV array system integrated to its dc link. The UAPF-PV integrates benefits of power quality improvement and distributed generation. The system performance is experimentally evaluated on a prototype in the laboratory under a variety of disturbance conditions, such as point of common coupling voltage fall/rise, load unbalancing, and variation in solar irradiation.

Journal ArticleDOI
TL;DR: A new dividerless Type-I sampling PLL, called the RS-PLL, which estimates the voltage-controlled oscillator (VCO) phase error by sampling the reference sine wave with a VCO square wave is demonstrated, and improves upon the simultaneous noise and spur performance achieved by current state-of-the-art clock multipliers.
Abstract: Dividerless synthesizers such as sub-sampling phase-locked loops (PLLs) and injection-locked clock multipliers have demonstrated some of the lowest jitters for a given power consumption (jitter-power FoM_j metric). However, they contain a tradeoff between the spur and noise performance, where techniques incorporated for spur reduction adversely affect jitter or power performance. A new dividerless Type-I sampling PLL, called the reference sampling PLL (RS-PLL), which estimates the voltage-controlled oscillator (VCO) phase error by sampling the reference sine wave with a VCO square wave is demonstrated. A clock-and-isolation buffer which accelerates the VCO sine wave to a square wave sampling clock and simultaneously isolates the VCO tank from spur mechanisms in the sampler is included in place of a traditional reference buffer. By combining sampling clock buffer and VCO isolation functionalities into a single block, the RS-PLL eliminates the noise penalty of two separate buffers. The power penalty due to sampling at VCO frequency is restricted by limiting the activity of the switching circuits to the region around the reference zero crossing where the phase error information exists. The prototype RS-PLL implemented in 65-nm CMOS achieves a jitter-power FoM_j of <−251 dB between 2.05 and 2.55 GHz with a reference spur of <−66 dBc at 50 MHz. In doing so, it improves upon the simultaneous noise and spur performance achieved by current state-of-the-art clock multipliers.

Journal ArticleDOI
TL;DR: The results show the four-step method is the most efficient phase-shifting strategy and deep-turbulence conditions only degrade performance with respect to insufficient focal-plane array sampling and low signal-to-noise ratios.
Abstract: In this paper, we study the use of digital holography in the on-axis phase-shifting recording geometry for the purposes of deep-turbulence wavefront sensing. In particular, we develop closed-form expressions for the field-estimated Strehl ratio and signal-to-noise ratio for three separate phase-shifting strategies: the four-, three-, and two-step methods. These closed-form expressions compare favorably with our detailed wave-optics simulations, which propagate a point-source beacon through deep-turbulence conditions, model digital holography with noise, and calculate the Monte Carlo averages associated with increasing turbulence strengths and decreasing focal-plane array sampling. Overall, the results show the four-step method is the most efficient phase-shifting strategy and deep-turbulence conditions only degrade performance with respect to insufficient focal-plane array sampling and low signal-to-noise ratios. The results also show the strong reference beam from the local oscillator provided by digital holography greatly improves performance by tens of decibels when compared with the self-referencing interferometer.
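The four-step phase-shifting reconstruction itself is compact: with reference phase shifts of 0, 90, 180, and 270 degrees, the complex field follows from simple frame differences. A minimal sketch assuming the standard interference model (not the paper's full wave-optics pipeline):

```python
import numpy as np

def four_step_field(i0, i90, i180, i270):
    """Four-step phase-shifting: recover the complex signal field (up to a real
    scale) from four intensity frames recorded with 0, 90, 180, and 270 degree
    reference phase shifts, assuming I_k = A + B*cos(phi + k*pi/2)."""
    return (i0 - i180) + 1j * (i270 - i90)        # proportional to B * exp(j*phi)

# Toy example: a synthetic phase screen measured against a reference.
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
phi = 2 * np.pi * (xx + yy) / 64.0
a, b = 1.0, 0.5
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
field = four_step_field(*frames)
phase_est = np.angle(field)                       # wrapped estimate of phi
```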

Journal ArticleDOI
TL;DR: Source-free all-optical sampling, based on the convolution of the signal spectrum with a frequency comb in an electronic-photonic, co-integrated silicon device, is presented for the first time, to the best of the authors' knowledge.
Abstract: Source-free all optical sampling, based on the convolution of the signal spectrum with a frequency comb in an electronic-photonic, co-integrated silicon device will be presented for the first time, to the best of our knowledge. The method has the potential to achieve very high precision, requires only low power and can be fully tunable in the electrical domain. Sampling rates of three and four times the RF bandwidths of the photonics and electronics can be achieved. Thus, the presented method might lead to low-footprint, fully-integrated, precise, electrically tunable, photonic ADCs with very high-analog bandwidths for the digital infrastructure of tomorrow.

Journal ArticleDOI
TL;DR: Artificial neural networks are proposed for raw measurement data interpolation and signal shift computation, and their advantages are demonstrated for wavelength-scanning coherent optical time domain reflectometry (WS-COTDR) and dynamic strain distribution measurement along optical fibers.
Abstract: We propose to use artificial neural networks (ANNs) for raw measurement data interpolation and signal shift computation and to demonstrate advantages for wavelength-scanning coherent optical time domain reflectometry (WS-COTDR) and dynamic strain distribution measurement along optical fibers. The ANNs are trained with synthetic data to predict signal shifts from wavelength scans. Domain adaptation to measurement data is achieved, and standard correlation algorithms are outperformed. First and foremost, the ANN reduces the data analysis time by more than two orders of magnitude, making it possible for the first time to predict strain in real-time applications using the WS-COTDR approach. Further, strain noise and linearity of the sensor response are improved, resulting in more accurate measurements. ANNs also perform better for low signal-to-noise measurement data, for a reduced length of correlation input (i.e., extended distance range), and for coarser sampling settings (i.e., extended strain scanning range). The general applicability is demonstrated for distributed measurement of ground movement along a dark fiber in a telecom cable. The presented ANN-based techniques can be employed to improve the performance of a wide range of correlation or interpolation problems in fiber sensing data analysis and beyond.

Journal ArticleDOI
TL;DR: In this article, a single-pixel imaging technique that enables phase extraction from objects by complex Fourier spectrum sampling is presented, which exploits a digital micromirror device to scan a wavevector-varying plane wave.
Abstract: We present a single-pixel imaging technique that enables phase extraction from objects by complex Fourier spectrum sampling. The technique exploits a digital micromirror device to scan a wavevector-varying plane wave, which interferes with a stationary reference beam to produce time-varying spatial frequencies on the object. Synchronized intensity measurements are made using a single-pixel detector, and four-step phase-shifting is adopted in spectrum acquisition. Applying inverse Fourier transform to the obtained spectrum yields the desired image. The proposed technique is demonstrated by imaging two digital phase objects. Furthermore, we show that the image can be reconstructed from sub-Nyquist measurements via compressive sensing, considerably accelerating the acquisition process. As a particular application, we use the technique to characterize the orbital angular momentum of vortex beams, which could benefit multiplexing techniques in classical and quantum communications. This technique is readily integrated into commercial microscopes for quantitative phase microscopy.

Proceedings ArticleDOI
20 May 2019
TL;DR: The average distortion-sampling rate function is derived, from which the optimal sampling rate and the minimum average distortion can be obtained; an algorithm is also proposed that further decreases the distortion by slightly sacrificing the real-time requirement.
Abstract: For the emerging Internet of Things (IoT), one of the most important basic problems is how to reconstruct signals in real time from a set of under-sampled and delayed samples. The sampling omits details of the signals of interest, and the delayed samples work against the real-time requirement. As a result, distortion occurs between the signal of interest and the reconstructed signal. In this paper, we focus on minimizing the average distortion, defined as the 1-norm of the difference of the two signals, under the scenario that a Poisson counting process is reconstructed in real time on a remote monitor. We derive the average distortion-sampling rate function, from which the optimal sampling rate as well as the minimum average distortion can be obtained. To further decrease the average distortion, an algorithm is proposed that slightly sacrifices the real-time requirement.

Proceedings ArticleDOI
Wenning Jiang, Yan Zhu, Minglei Zhang, Chi-Hang Chan, Rui P. Martins
06 Mar 2019
TL;DR: This work presents a Gm-R based RA which has a complete-settled amplification characteristic, thus allowing us to compensate the gain variation over temperature easily with a tracking bias technique and uses a two-stage amplification to alleviate the RA input parasitic capacitance that enables a small DAC size in all stages.
Abstract: Continuous technology scaling has allowed unceasing growth of the sampling rate of a single-channel ADC in the past decades. Such development not only helps reduce the number of channels in massively time-interleaved ADCs, but also contributes to lower their overall jitter and input capacitance, thus enabling a further push on the ADC performance boundary. Being limited by metastability, the conventional SAR architecture is not suitable when both high resolution and speed are essential. While the pipeline SAR offers a higher speed alternative, it also keeps the low and dynamic power nature of the SAR architecture through adopting dynamic or integrating residue amplifiers (RAs) in recent works [1, 2]. With an integrating characteristic, the amplification time can be short but, in contrast, the linearity, extra reset time of the load, and PVT sensitivity pose significant design challenges. In this work, rather than adopting an integrating-type amplifier, we present a Gm-R based RA which has a complete-settled amplification characteristic, thus allowing us to compensate the gain variation over temperature easily with a tracking bias technique. Besides this, we use a two-stage amplification to alleviate the RA input parasitic capacitance that enables a small DAC size in all stages. The single-channel prototype reaches 1 GS/s with 60.02 dB SNDR at a Nyquist input, consuming 7.6 mW from a 1 V supply.

Journal ArticleDOI
TL;DR: Experimental results show that the IGBT turn-off time at different temperatures can be accurately monitored with a highly compressed sampling rate, which validates the feasibility and effectiveness of the proposed compressed sensing method.
Abstract: Condition monitoring (CM) has been considered as a promising technique to improve the reliability of insulated gate bipolar transistors (IGBTs). Among various condition parameters, switching time is a good health status indicator to detect IGBT failures. However, on-line monitoring of the IGBT high-speed switching time is still difficult in practice due to the requirement of the extremely high sampling rate for signal acquisition. To overcome the technical difficulty, this paper provides an innovative compressed sensing (CS) method to achieve equivalent sampling performance for high-speed IGBT switching time monitoring with a lower sampling rate. By utilizing the sparse characteristics of an IGBT switching signal, the sampling rate in the CS method could be far less than the traditional Nyquist sampling rate. To clarify the method, the CM mechanism using IGBT switching time is first analyzed. Then, three key points in the CS method are studied, i.e., the selection of sparsifying basis, the design of a measurement matrix, and the implementation of a reconstruction algorithm. Finally, experiments are carried out to investigate the performance of the CS method for on-line CM. Experimental results show that the IGBT turn-off time at different temperatures can be accurately monitored with a highly compressed sampling rate, which validates the feasibility and effectiveness of the proposed method.
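A generic compressed-sensing chain of the kind described (sparsifying basis, random measurement matrix, greedy reconstruction) can be sketched as follows; the DCT basis, Gaussian measurements, and plain OMP below are common illustrative choices, not necessarily the ones selected in the paper:

```python
import numpy as np
from scipy.fft import idct

def omp(A, y, k):
    """Plain orthogonal matching pursuit: recover a k-sparse coefficient vector
    from y = A @ c (the generic reconstruction step of a compressed-sensing chain)."""
    residual, support = y.copy(), []
    c = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        As = A[:, support]
        c_s, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ c_s
    c[support] = c_s
    return c

# Toy chain: a signal sparse in the DCT basis, measured with random Gaussian projections.
rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                              # ambient dim, measurements, sparsity
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Psi = idct(np.eye(n), norm="ortho", axis=0)       # columns = inverse-DCT basis vectors
x = Psi @ coeffs                                  # stand-in for the sparse switching signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix
y = Phi @ x                                       # sub-Nyquist measurements
coeffs_hat = omp(Phi @ Psi, y, k)
x_hat = Psi @ coeffs_hat                          # reconstructed signal
```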

Journal ArticleDOI
TL;DR: A power quality measurement system is presented that provides mains-frequency-synchronous voltage and current data in the form of 2^15 samples per cycle using a linear interpolation unit; first measurements with an evaluation system give an impression of the system response to real signals.
Abstract: A power quality measurement system is introduced in this paper. While sampling at a high rate, it provides mains-frequency-synchronous voltage and current data in the form of 2^15 samples per cycle using a linear interpolation unit. The interpolation output sampling rate is provided by a mains frequency estimation unit, which performs phase locking on the voltage measurements and additionally outputs the synchrophasor, the frequency, and the rate of change of frequency. This algorithm is checked in simulations against the current standards, while discussing a phenomenon that is not yet taken into account by these standards: low-frequency interharmonic disturbances such as ripple control signals, which occur frequently in today’s power grids. To desensitize the system toward these, compromises must be made with respect to standard compliance under transient conditions. Thereafter, first measurements with an evaluation system are analyzed to get a first impression of the system response to real signals.

Journal ArticleDOI
TL;DR: A five-parameter model of the dispersive wave packet was developed, and the parameter vector of each wave packet was obtained by the expectation-maximization (EM) algorithm; the results can be further applied to locate and evaluate the structure’s damage.

Journal ArticleDOI
Zhikang Shuai, Junhao Zhang, Lu Tang, Zhaosheng Teng, He Wen
TL;DR: A novel algorithm for power system harmonic estimation called frequency shifting and filtering (FSF) algorithm is proposed in this paper, and accurate estimation of harmonics can be achieved because only interested components are retained.
Abstract: Harmonic estimation plays an important part in harmonic suppression of power system. Due to the normal fluctuation of power frequency, it is difficult to realize synchronous sampling. Consequently, the unavoidable productions, i.e., spectral leakage and picket fence effect, will affect the accuracy of harmonic analysis significantly when using the Fourier transform. To overcome this problem, a novel algorithm for power system harmonic estimation called frequency shifting and filtering (FSF) algorithm is proposed in this paper. A reference signal is at first generated to shift the frequency of the sampled signal. Then the iterative averaging filter is adopted to eliminate the spectral interferences. Finally, accurate estimation of harmonics can be achieved because only interested components are retained. Furthermore, a simplified FSF with much less computational burden is presented by using an equivalent weighting filter. The salient features of the proposed algorithm are validated by the simulations and practical experiments.
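A simplified rendering of the frequency-shifting-and-filtering idea is sketched below: shift the harmonic of interest to DC with a complex reference and suppress the remaining components with repeated one-cycle moving averages. The window length and number of passes are placeholders; the paper's iterative averaging filter and its simplified variant are more refined:

```python
import numpy as np

def harmonic_estimate_fsf(x, fs, f_target, f_nominal=50.0, n_avg_passes=3):
    """Simplified sketch of frequency shifting and filtering: shift the harmonic
    of interest to DC with a complex reference signal, then suppress the other
    components with repeated one-nominal-cycle moving averages. The surviving
    complex DC value gives the harmonic's amplitude and phase."""
    n = np.arange(len(x))
    baseband = x * np.exp(-2j * np.pi * f_target * n / fs)   # frequency shifting
    win = max(1, int(round(fs / f_nominal)))                 # ~one nominal mains cycle
    kernel = np.ones(win) / win
    for _ in range(n_avg_passes):                            # iterative averaging filter
        baseband = np.convolve(baseband, kernel, mode="same")
    mid = baseband[len(baseband) // 2]                       # sample away from the edges
    return 2.0 * np.abs(mid), np.angle(mid)                  # amplitude, phase

# Toy signal: a 50.2 Hz fundamental with a 3rd harmonic, sampled at 3.2 kHz.
fs = 3200.0
t = np.arange(0, 0.5, 1 / fs)
x = np.cos(2 * np.pi * 50.2 * t) + 0.1 * np.cos(2 * np.pi * 150.6 * t + 0.4)
a3, p3 = harmonic_estimate_fsf(x, fs, f_target=150.6)        # ~0.1 and ~0.4
```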

Journal ArticleDOI
TL;DR: The results of applying diverse signal processing techniques and system designs on the simulated data show that the simulator can be used to qualitatively analyze the collective impact of a variety of those techniques on radar observables for any archived weather scenario.
Abstract: This paper presents a novel, system-level, weather-radar time-series simulator able to ingest archived dual-polarization data and produce time-series data with the desired system and scanning parameters (e.g., antenna patterns, pulse repetition times, spatial sampling, waveform type). Time-series simulations are an important tool for testing signal processing techniques and can also be used to test the changes in system characteristics. The SPARC simulator ingests archived radar-variable data and produces dual-polarization time series with the desired system characteristics. First, the archived data are conditioned to fill in for missing or censored data. Then, based on the six meteorological variables, scattering centers are generated in a grid that matches the desired spatial sampling. For each scattering center, a spectrum shaping technique is used to create time-series data with the desired acquisition parameters. The effects of phase coding, pulse compression, range folding, waveform selection, and antenna patterns are incorporated in the data. In addition to conventionally sampled data, the simulator can produce range-oversampled data with the desired range correlation for range-time processing techniques. The results of applying diverse signal processing techniques and system designs on the simulated data show that the simulator can be used to qualitatively analyze the collective impact of a variety of those techniques on radar observables for any archived weather scenario.
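The spectrum-shaping step (generating time series whose Doppler spectrum matches prescribed moments) is a classic weather-radar simulation technique and can be sketched as follows; the Gaussian spectrum shape and the toy parameters are illustrative assumptions, not the simulator's exact implementation:

```python
import numpy as np

def simulate_iq(n_samples, prt, wavelength, power, velocity, spectrum_width, rng=None):
    """Spectrum-shaping sketch (in the spirit of classic weather-radar time-series
    simulation): draw white complex Gaussian noise in the frequency domain, weight it
    by the square root of a Gaussian Doppler power spectrum defined by the desired
    moments (power, mean radial velocity, spectrum width), and inverse-FFT."""
    rng = rng or np.random.default_rng()
    va = wavelength / (4.0 * prt)                         # unambiguous velocity
    v = np.fft.fftfreq(n_samples, d=1.0) * 2.0 * va       # velocity axis of the DFT bins
    shape = np.exp(-((v - velocity) ** 2) / (2.0 * spectrum_width ** 2))
    psd = shape / shape.sum()                             # unit-power spectral weights
    noise = (rng.standard_normal(n_samples)
             + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    iq = n_samples * np.fft.ifft(np.sqrt(power * psd) * noise)   # shaped complex series
    return iq

# One dwell of 64 samples for an S-band-like configuration (toy numbers).
iq = simulate_iq(64, prt=1e-3, wavelength=0.1, power=1.0, velocity=5.0, spectrum_width=2.0)
```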

Journal ArticleDOI
TL;DR: This study proposes a filtering method based on stacked bidirectional gated recurrent unit network (SBiGRU) and infinite training to fulfill the ISRJ suppression for pulse compression (PC) radar with linear frequency modulation (LFM) waveform.
Abstract: Interrupted-sampling repeater jamming (ISRJ) is coherent jamming based on a digital radio frequency memory (DRFM) device, which repeatedly samples, stores, modulates, and retransmits part of the radar emitted signal, and flexibly forms false targets in the victim radar with relatively low transmitting power. It significantly impairs the radar's ability to detect, track, and recognize targets. There are many electronic counter-countermeasures against ISRJ, among which a series of filtering methods are promising. However, the problem is not yet fully addressed. This study proposes a filtering method based on a stacked bidirectional gated recurrent unit network (SBiGRU) and infinite training to fulfill the ISRJ suppression for pulse compression (PC) radar with a linear frequency modulation (LFM) waveform. The SBiGRU method converts signal extraction into a temporal classification problem and accurately extracts the jamming-free signal segments to generate a band pass filter that suppresses the ISRJ while simultaneously retaining the real target signal components. Compared with the two most advanced filtering methods in the published literature, the SBiGRU method improves the jamming-free signal extraction accuracy, leading to better ISRJ suppression and real-target detection performance, as verified by Monte Carlo simulations.

Proceedings ArticleDOI
01 May 2019
TL;DR: In this paper, an analog-digital hybrid null-steering beamformer was proposed to detect and decode the weak AmBC-modulated signal buried in the strong direct path signals and the noise, without requiring the instantaneous channel state information.
Abstract: In bi-static Ambient Backscatter Communications (AmBC) systems, the receiver needs to operate at a large dynamic range because the direct path from the ambient source to the receiver can be several orders of magnitude stronger than the scattered path modulated by the AmBC device. In this paper, we propose a novel analog-digital hybrid null-steering beamformer which allows the backscatter receiver to detect and decode the weak AmBC-modulated signal buried in the strong direct path signals and the noise, without requiring the instantaneous channel state information. The analog cancellation of the strong signal components allows the receiver automatic gain control to adjust to the level of the weak AmBC signals. This hence allows common analog-to-digital converters to be used for sampling the signal. After cancelling the strong components, the ambient source signal appears as zero mean fast fading from the AmBC system point of view. We use the direct path signal component to track the phase of the unknown ambient signal. In order to avoid channel estimation, we propose AmBC to use orthogonal channelization codes. The results show that the design allows the AmBC receiver to detect the backscatter binary phase shift keying signals without decoding the ambient signals and requiring knowledge of the instantaneous channel state information.

Journal ArticleDOI
TL;DR: An improved analytical small signal model of the line commutated converter (LCC) is developed in direct-quadrature-zero coordinates based on the assumption of infinite six-pulse converters and is shown to have significantly better accuracy in system stability determination.
Abstract: In this paper, an improved analytical small signal model of the line commutated converter (LCC) is developed in direct-quadrature-zero coordinates based on the assumption of infinite six-pulse converters. The dynamics of the commutation inductance and transportation delays during commutation are included in the model, and the sampling of the measurement of firing and extinction angles is added to accommodate actual converters with a finite pulse number (e.g., 12-pulse). The model is validated by comparing the frequency response with one obtained by frequency scanning in an electromagnetic transients (EMT) simulation. The model is shown to be effective in investigating system stability when the LCC is connected to an arbitrary external network. To do this, the frequency response of the combined system is plotted, and the generalized Nyquist stability criterion is applied. This stability result is validated by a time domain simulation on an EMT program using the first CIGRE HVDC benchmark system and the IEEE 14-bus system. Compared to the traditional LCC model, the proposed model is shown to have significantly better accuracy in system stability determination.

Journal ArticleDOI
TL;DR: In this paper, it was shown that for binary measurements and wavelet reconstruction, the stable sampling rate is linear, which implies that binary measurements are as efficient as Fourier samples when using wavelets as the reconstruction space.
Abstract: This paper is concerned with the problem of reconstructing an infinite-dimensional signal from a limited number of linear measurements. In particular, we show that for binary measurements (modelled with Walsh functions and Hadamard matrices) and wavelet reconstruction the stable sampling rate is linear. This implies that binary measurements are as efficient as Fourier samples when using wavelets as the reconstruction space. Powerful techniques for reconstructions include generalized sampling and its compressed versions, as well as recent methods based on data assimilation. Common to these methods is that the reconstruction quality depends highly on the subspace angle between the sampling and the reconstruction space, which is dictated by the stable sampling rate. As a result of the theory provided in this paper, these methods can now easily use binary measurements and wavelet reconstruction bases.