Author
Tanya Chernyakova
Bio: Tanya Chernyakova is an academic researcher from the Technion – Israel Institute of Technology. The author has contributed to research in topics including Beamforming and Frequency domain. The author has an h-index of 7 and has co-authored 17 publications receiving 251 citations.
Papers
TL;DR: In this paper, the authors build on previous compressed beamforming work and extend it to a general concept of beamforming in frequency, which allows exploitation of the low bandwidth of the ultrasound signal and bypasses the oversampling dictated by digital implementation of beamforming in time.
Abstract: Sonography techniques use multiple transducer elements for tissue visualization. Signals received at each element are sampled before digital beamforming. The sampling rates required to perform high-resolution digital beamforming are significantly higher than the Nyquist rate of the signal and result in a considerable amount of data that must be stored and processed. A recently developed technique, compressed beamforming, based on the finite rate of innovation model, compressed sensing (CS), and Xampling ideas, allows a reduction in the number of samples needed to reconstruct an image comprised of strong reflectors. A drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here, we build on previous work and extend it to a general concept of beamforming in frequency. This allows exploitation of the low bandwidth of the ultrasound signal and bypassing of the oversampling dictated by digital implementation of beamforming in time. By using beamforming in frequency, the same image quality is obtained from far fewer samples. We next present a CS technique that allows for further rate reduction, using only a portion of the beamformed signal's bandwidth. We demonstrate our methods on in vivo cardiac data and show that reductions up to 1/28 of the standard beamforming rates are possible. Finally, we present an implementation on an ultrasound machine using sub-Nyquist sampling and processing. Our results prove that the concept of sub-Nyquist processing is feasible for medical ultrasound, leading to the potential of considerable reduction in future ultrasound machines' size, power consumption, and cost.
140 citations
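The core step described above, turning beamforming delays into phase rotations so that only the Fourier coefficients inside the transducer's passband need to be computed, can be illustrated with a toy sketch. The snippet below is a simplified illustration only: it assumes fixed per-element delays and made-up parameters (fs, f0, bw, random stand-in channel data), whereas the paper's frequency-domain beamforming handles depth-dependent dynamic focusing, so it should not be read as the authors' exact algorithm.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper).
fs = 50e6              # rate at which the channel data is given, Hz
f0, bw = 3.5e6, 2e6    # transducer center frequency and two-sided bandwidth, Hz
n_elem, n_samp = 64, 2048

rng = np.random.default_rng(1)
channel_data = rng.standard_normal((n_elem, n_samp))  # stand-in for received RF traces
delays = rng.uniform(0, 1e-6, size=n_elem)            # fixed per-element delays, seconds

# Time-domain delay-and-sum (full-rate reference): shift each trace and sum.
t = np.arange(n_samp) / fs
das = np.zeros(n_samp)
for m in range(n_elem):
    das += np.interp(t - delays[m], t, channel_data[m], left=0.0, right=0.0)

# Frequency-domain version: a delay is a linear phase per frequency bin, so we can
# beamform bin by bin and keep only the bins inside the passband -- which is why the
# required number of samples scales with the signal's bandwidth rather than with fs.
freqs = np.fft.rfftfreq(n_samp, d=1 / fs)
band = np.abs(freqs - f0) <= bw / 2
spectra = np.fft.rfft(channel_data, axis=1)
beamformed_spec = np.zeros(len(freqs), dtype=complex)
beamformed_spec[band] = np.sum(
    spectra[:, band] * np.exp(-2j * np.pi * np.outer(delays, freqs[band])), axis=0
)
fdbf = np.fft.irfft(beamformed_spec, n=n_samp)
# Note: with white-noise stand-in data, fdbf differs from das by the out-of-band
# content; for a real band-limited transducer signal that content is negligible.
```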
TL;DR: This work shows that by applying Xampling and performing 3-D beamforming in the frequency domain, sub-Nyquist sampling and a low processing rate are achievable while maintaining adequate image quality.
Abstract: A key step in ultrasound image formation is digital beamforming of signals sampled by several transducer elements placed upon an array. High-resolution digital beamforming introduces the demand for sampling rates significantly higher than the signals’ Nyquist rate, which greatly increases the volume of data that must be transmitted from the system’s front end. In 3-D ultrasound imaging, 2-D transducer arrays rather than 1-D arrays are used, and more scan lines are needed. This implies that the amount of sampled data is vastly increased with respect to 2-D imaging. In this work, we show that a considerable reduction in data rate can be achieved by applying the ideas of Xampling and frequency domain beamforming (FDBF), leading to a sub-Nyquist sampling rate, which uses only a portion of the bandwidth of the ultrasound signals to reconstruct the image. We extend previous work on FDBF for 2-D ultrasound imaging to accommodate the geometry imposed by volumetric scanning and a 2-D grid of transducer elements. High image quality from low-rate samples is demonstrated by simulation of a phantom image composed of several small reflectors. Our technique is then applied to raw data of a heart ventricle phantom obtained by a commercial 3-D ultrasound system. We show that by performing 3-D beamforming in the frequency domain, sub-Nyquist sampling and low processing rate are achievable, while maintaining adequate image quality.
34 citations
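To see why the front-end data rate is the bottleneck in 3-D imaging with 2-D arrays, a back-of-the-envelope calculation is useful. All numbers below (array size, per-channel rate, bit depth, the assumed tenfold reduction) are illustrative assumptions, not values reported in the paper.

```python
# Back-of-the-envelope front-end data rate for 3-D imaging with a 2-D array.
# All numbers are illustrative assumptions, not taken from the paper.
n_elements  = 32 * 32      # 2-D transducer grid
fs_standard = 16e6         # per-channel rate for time-domain beamforming, Hz
bits        = 12           # bits per sample

raw_rate = n_elements * fs_standard * bits                # bits per second off the array
print(f"standard front-end rate: {raw_rate / 1e9:.1f} Gbit/s")

# Frequency-domain beamforming needs only samples covering the signal's effective
# bandwidth; assume a hypothetical tenfold per-channel rate reduction.
print(f"sub-Nyquist front-end  : {raw_rate / 10 / 1e9:.1f} Gbit/s")
```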
TL;DR: This work extends the recently proposed frequency-domain beamforming (FDBF) framework to plane-wave imaging and demonstrates the use of FDBF for shear-wave elastography by generating velocity maps from the beamformed data processed at sub-Nyquist rates.
Abstract: Ultrafast imaging based on coherent plane-wave compounding is one of the most important recent developments in medical ultrasound. It significantly improves the image quality and allows for much faster image acquisition. This technique, however, requires a large computational load, motivating methods for sampling and processing rate reduction. In this work, we extend the recently proposed frequency-domain beamforming (FDBF) framework to plane-wave imaging. Beamforming in frequency yields the same image quality while using fewer samples. It achieves at least fourfold sampling and processing rate reduction by avoiding the oversampling required by standard processing. To further reduce the rate, we exploit the structure of the beamformed signal and use compressed sensing methods to recover the beamformed signal from its partial frequency data obtained at a sub-Nyquist rate. Our approach obtains a tenfold rate reduction compared with standard time-domain processing. We verify performance in terms of spatial resolution and contrast based on scans of a tissue-mimicking phantom obtained with a commercial Aixplorer system. In addition, in vivo carotid and thyroid scans processed using standard beamforming and FDBF are presented for qualitative evaluation and visual comparison. Finally, we demonstrate the use of FDBF for shear-wave elastography by generating velocity maps from the beamformed data processed at sub-Nyquist rates.
29 citations
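The final rate-reduction step described above, recovering a beamformed signal dominated by a few strong reflectors from only part of its frequency content, can be sketched with a generic partial-DFT measurement model solved by plain ISTA. The problem sizes, sparsity level, and solver below are illustrative assumptions; the signal model and recovery algorithm used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Recover a length-n signal with k strong reflectors from m << n of its DFT
# coefficients. Sizes are illustrative assumptions.
n, m, k = 512, 96, 8
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = (3 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)

rows = np.sort(rng.choice(n, size=m, replace=False))   # the DFT bins we "sampled"
F = np.fft.fft(np.eye(n), axis=0)[rows] / np.sqrt(n)   # partial-DFT measurement matrix
y = F @ x_true

def soft(z, t):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0.0)

# ISTA: gradient step on ||y - Fx||^2 followed by soft-thresholding.
lam, step = 0.05, 1.0          # rows of F are unit-norm, so a unit step is safe
x = np.zeros(n, dtype=complex)
for _ in range(300):
    x = soft(x + step * (F.conj().T @ (y - F @ x)), lam * step)

print("relative error:", np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```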
Posted Content
TL;DR: In this article, the authors propose a sub-Nyquist sampling scheme that reduces the amount of data transmitted from the system's front end by using only a portion of the bandwidth of the ultrasound signals to reconstruct the image.
Abstract: One of the key steps in ultrasound image formation is digital beamforming of signals sampled by several transducer elements placed upon an array. High-resolution digital beamforming introduces the demand for sampling rates significantly higher than the signals' Nyquist rate, which greatly increases the volume of data that must be transmitted from the system's front end. In 3D ultrasound imaging, 2D transducer arrays rather than 1D arrays are used, and more scan-lines are needed. This implies that the amount of sampled data is vastly increased with respect to 2D imaging. In this work we show that a considerable reduction in data rate can be achieved by applying the ideas of Xampling and frequency domain beamforming, leading to a sub-Nyquist sampling rate, which uses only a portion of the bandwidth of the ultrasound signals to reconstruct the image. We extend previous work on frequency domain beamforming for 2D ultrasound imaging to accommodate the geometry imposed by volumetric scanning and a 2D grid of transducer elements. We demonstrate high image quality from low-rate samples by simulation of a phantom image comprised of several small reflectors. We also apply our technique on raw data of a heart ventricle phantom obtained by a commercial 3D ultrasound system. We show that by performing 3D beamforming in the frequency domain, sub-Nyquist sampling and low processing rate are achievable, while maintaining adequate image quality.
23 citations
TL;DR: In this article, a statistical interpretation of beamforming is presented to overcome the limitations of standard delay-and-sum (DAS) processing, in which the beamformer output is a maximum a posteriori (MAP) estimator of the signal of interest.
Abstract: We present a statistical interpretation of beamforming to overcome the limitations of standard delay-and-sum (DAS) processing. Both the interference and the signal of interest are viewed as random variables, and the distribution of the signal of interest is exploited to maximize the a posteriori distribution of the aperture signals. In this formulation, the beamformer output is a maximum a posteriori (MAP) estimator of the signal of interest. We provide a closed-form expression for the MAP beamformer and estimate the unknown distribution parameters from the available aperture data using an empirical Bayes approach. We propose a simple scheme that iterates between the estimation of distribution parameters and the computation of the MAP estimator of the signal of interest, leading to an iterative MAP (iMAP) beamformer. This results in a suppression of the interference compared with DAS, without a severe increase in computational complexity or the need for fine-tuning of parameters. The effect of the proposed method on contrast is studied in detail and measured in terms of contrast ratio (CR), contrast-to-noise ratio (CNR), and contrast-to-speckle ratio (CSR). By implementing iMAP on both simulated and experimental data, we show that only 13 transmissions are required to obtain a CNR comparable to DAS with 75 plane waves. Compared to other interference suppression methods, such as coherence factor and scaled Wiener processing, iMAP shows an improved contrast and a better preserved speckle pattern.
18 citations
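The scheme described above, alternating between estimating the distribution parameters and computing the MAP estimate, can be sketched per imaging point under simple i.i.d. zero-mean Gaussian assumptions. The initialization and the crude variance estimates below reflect one straightforward reading of such a loop and are not claimed to match the paper's exact estimators.

```python
import numpy as np

def imap_pixel(y, n_iter=3, eps=1e-12):
    """Toy iterative-MAP estimate from M delay-aligned channel samples y.

    Sketch only: the signal of interest and the interference are modelled as
    zero-mean Gaussians; the paper's estimators and initialization may differ.
    """
    M = len(y)
    x_hat = y.mean()                       # start from the DAS output
    for _ in range(n_iter):
        sig_x = x_hat ** 2                 # crude signal-power estimate
        sig_n = np.mean((y - x_hat) ** 2)  # crude interference-power estimate
        x_hat = sig_x / (sig_x + sig_n / M + eps) * y.mean()  # Wiener-like MAP weight on DAS
    return x_hat

# Toy usage: one imaging point, 64 channels, weak coherent signal in strong noise.
rng = np.random.default_rng(3)
y = 0.3 + rng.standard_normal(64)
print("DAS :", y.mean())
print("iMAP:", imap_pixel(y))
```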
Cited by
TL;DR: In this paper, the authors propose a sub-Nyquist sampling and recovery approach called Doppler focusing, which performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a compressed sensing dictionary whose size does not increase with the number of pulses.
Abstract: We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works apply compressed sensing (CS) algorithms to this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement that scales linearly with P, obtaining good detection performance even at an SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
177 citations
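The Doppler focusing idea described above, coherently summing the low-rate Fourier coefficients of the received pulses over a grid of candidate Doppler frequencies so that each target's energy accumulates with a gain that grows with the number of pulses P, can be sketched as follows. All parameters (PRI, number of pulses, which Fourier coefficients are kept, target delays and Dopplers) are made-up illustrative values, and the dense grid search at the end stands in for the sparse recovery stage used in practice.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative pulse-Doppler setup (assumed values, not from the paper).
P, tau = 64, 1e-3                  # number of pulses, pulse repetition interval [s]
T = 1e-4                           # unambiguous delay window within a pulse [s]
K = np.arange(-20, 21)             # indices of the low-rate Fourier coefficients kept

targets = [                        # (delay [s], Doppler [rad/s], amplitude)
    (2.3e-5,  2 * np.pi * 150, 1.0),
    (6.1e-5, -2 * np.pi * 310, 0.7),
]

# Low-rate Fourier coefficients of each received pulse, c[p, k].
c = np.zeros((P, len(K)), dtype=complex)
for t0, nu, a in targets:
    c += (a * np.exp(-2j * np.pi * np.outer(np.ones(P), K) * t0 / T)
            * np.exp(-1j * nu * tau * np.arange(P))[:, None])
c += 0.1 * (rng.standard_normal(c.shape) + 1j * rng.standard_normal(c.shape))

# Doppler focusing: coherent sum over pulses per candidate Doppler, then a delay scan.
nu_grid = 2 * np.pi * np.linspace(-400, 400, 161)     # candidate Dopplers [rad/s]
t_grid = np.linspace(0, T, 400)                       # candidate delays [s]
steer = np.exp(2j * np.pi * np.outer(K, t_grid) / T)  # delay "steering" vectors
dd_map = np.empty((len(nu_grid), len(t_grid)))
for i, nu in enumerate(nu_grid):
    focused = np.exp(1j * nu * tau * np.arange(P)) @ c   # energy piles up when nu matches
    dd_map[i] = np.abs(focused @ steer)

i, j = np.unravel_index(dd_map.argmax(), dd_map.shape)
print(f"strongest peak: delay ~ {t_grid[j]:.2e} s, Doppler ~ {nu_grid[i] / (2 * np.pi):.0f} Hz")
```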
01 Jan 2020
TL;DR: In this article, the authors consider deep learning strategies in ultrasound systems, from the front end to advanced applications, and provide the reader with a broad understanding of the possible impact of deep learning methodologies on many aspects of ultrasound imaging.
Abstract: In this article, we consider deep learning strategies in ultrasound systems, from the front end to advanced applications. Our goal is to provide the reader with a broad understanding of the possible impact of deep learning methodologies on many aspects of ultrasound imaging. In particular, we discuss methods that lie at the interface of signal acquisition and machine learning, exploiting both data structure (e.g., sparsity in some domain) and data dimensionality (big data) already at the raw radio-frequency channel stage. As some examples, we outline efficient and effective deep learning solutions for adaptive beamforming and adaptive spectral Doppler through artificial agents, learn compressive encodings for the color Doppler, and provide a framework for structured signal recovery by learning fast approximations of iterative minimization problems, with applications to clutter suppression and super-resolution ultrasound. These emerging technologies may have a considerable impact on ultrasound imaging, showing promise across key components in the receive processing chain.
168 citations
TL;DR: This paper proposes to reconstruct the power spectrum of wideband signals from sub-Nyquist samples, rather than the signal itself as done in previous work, in order to perform detection, and derives the minimal sampling rate allowing perfect reconstruction of the signal's power spectrum in a noise-free environment.
Abstract: In light of the ever-increasing demand for new spectral bands and the underutilization of those already allocated, the concept of Cognitive Radio (CR) has emerged. Opportunistic users could exploit temporarily vacant bands after detecting the absence of activity of their owners. One of the crucial tasks in the CR cycle is therefore spectrum sensing and detection, which has to be precise and efficient. Yet, CRs typically deal with wideband signals whose Nyquist rates are very high. In this paper, we propose to reconstruct the power spectrum of such signals from sub-Nyquist samples, rather than the signal itself as done in previous work, in order to perform detection. We consider both sparse and non-sparse signals, as well as blind and non-blind detection in the sparse case. For each one of those scenarios, we derive the minimal sampling rate allowing perfect reconstruction of the signal's power spectrum in a noise-free environment and provide power spectrum recovery techniques that achieve those rates. The analysis is performed for two different signal models considered in the literature, which we refer to as the analog and digital models, and shows that both lead to similar results. Simulations demonstrate power spectrum recovery at the minimal rate in noise-free settings and the impact of several parameters on the detector performance, including signal-to-noise ratio, sensing time, and sampling rate.
110 citations
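The central point above, that detection only needs the power spectrum and that second-order statistics can be estimated from sub-Nyquist samples, can be illustrated with multicoset sampling: products of the retained coset sequences reach every autocorrelation lag as long as the coset differences cover all residues modulo the undersampling factor. The setup and the crude correlogram below are illustrative assumptions and not the reconstruction procedure derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative setup (assumed values): a wide-sense-stationary low-pass signal kept
# only on 3 cosets out of every L = 5 Nyquist-grid points, i.e. 60% of the Nyquist rate.
N, L = 200_000, 5
cosets = [0, 1, 2]          # pairwise differences cover all residues modulo L

h = np.sinc(0.2 * np.arange(-50, 51))                  # low-pass shaping filter
x = np.convolve(rng.standard_normal(N), h, mode="same")
ys = {c: x[c::L] for c in cosets}                      # the only samples we keep

# Every lag l can be written as l = m*L + (c_i - c_j), so the autocorrelation r[l]
# is estimated by averaging products of the two corresponding coset sequences.
max_lag = 256
r = np.zeros(max_lag)
for l in range(max_lag):
    vals = []
    for ci in cosets:
        for cj in cosets:
            m, rem = divmod(l - (ci - cj), L)
            if rem == 0 and m >= 0:
                a, b = ys[ci], ys[cj]
                n = min(len(a) - m, len(b))
                vals.append(np.mean(a[m:m + n] * b[:n]))
    r[l] = np.mean(vals)

# Crude correlogram: Fourier transform of the windowed one-sided autocorrelation,
# good enough to see the occupied band, which is what a detector needs.
psd = np.abs(np.fft.rfft(r * np.hanning(max_lag)))
edge = int(np.argmax(psd < 0.05 * psd.max()))          # rough band-edge bin
print(f"power concentrated below bin {edge} of {len(psd)}")
```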
TL;DR: A deep neural network is designed to directly process full or subsampled radio frequency data acquired at various subsampling rates and detector configurations so that it can generate high-quality US images using a single beamformer.
Abstract: In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay and sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here, we propose a deep-learning-based beamformer to generate significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations so that it can generate high-quality US images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using the B-mode focused US confirm the efficacy of the proposed methods.
100 citations
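A minimal sketch of the idea above, a single learned beamformer that ingests full or subsampled RF channel data, is given below using a generic 1-D convolutional network in PyTorch. The architecture, the zero-filling of missing channels, and the tensor shapes are illustrative assumptions, not the network proposed in the paper.

```python
import torch
import torch.nn as nn

class ToyRFBeamformer(nn.Module):
    """Generic 1-D CNN mapping multichannel RF data to one beamformed trace (sketch only)."""

    def __init__(self, n_channels: int = 64, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, width, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(width, 1, kernel_size=1),   # collapse to a single beamformed trace
        )

    def forward(self, rf: torch.Tensor) -> torch.Tensor:
        # rf: (batch, n_channels, n_depth_samples); channels missing due to subsampling
        # are simply zero-filled here, leaving the network to adapt to the pattern.
        return self.net(rf)

model = ToyRFBeamformer(n_channels=64)
rf = torch.randn(2, 64, 1024)     # dummy RF channel data
rf[:, ::2, :] = 0.0               # pretend half the channels were not sampled
print(model(rf).shape)            # torch.Size([2, 1, 1024])
```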