
Showing papers in "IEEE Signal Processing Magazine in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
Abstract: Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
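As a concrete illustration of the training setup described above, here is a minimal sketch of federated averaging, one canonical approach in this line of work: each device runs a few gradient steps on its private data, and a server averages the returned models. The linear least-squares model, client data, and hyperparameters are hypothetical illustration choices, not the article's experiments.

```python
# Minimal federated-averaging sketch (hedged): local training on private
# data, followed by server-side model averaging. Data never leaves clients.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Each client trains locally; the server averages the returned models."""
    local_models = [local_update(w_global, X, y) for X, y in clients]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(4):                          # four simulated devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):                         # communication rounds
    w = federated_round(w, clients)
print(w)                                    # approaches w_true
```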

2,163 citations


Journal ArticleDOI
TL;DR: This article reviews MIMO radar basics, highlighting the features that make this technology a good fit for automotive radar and reviewing important theoretical results for increasing the angular resolution.
Abstract: Important requirements for automotive radar are high resolution, low hardware cost, and small size. Multiple-input, multiple-output (MIMO) radar technology has been receiving considerable attention from automotive radar manufacturers because it can achieve a high angular resolution with relatively small numbers of antennas. Owing to this ability, it has been exploited in current-generation automotive radar for advanced driver-assistance systems (ADAS) as well as in next-generation high-resolution imaging radar for autonomous driving. This article reviews MIMO radar basics, highlighting the features that make this technology a good fit for automotive radar and reviewing important theoretical results for increasing the angular resolution. The article also describes challenges arising during the application of existing MIMO radar theory to automotive radar that provide interesting problems for signal processing researchers.
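The angular-resolution gain comes from the MIMO virtual array: with M transmitters and N receivers, the pairwise sums of antenna positions form an MN-element virtual aperture. A hedged sketch with illustrative element placements (in half-wavelength units):

```python
# The MN-element virtual array of a MIMO radar, formed by pairwise sums of
# transmit and receive antenna positions. Placements are illustrative only.
import numpy as np

tx = np.array([0, 4, 8])            # 3 widely spaced transmitters
rx = np.array([0, 1, 2, 3])         # 4 closely spaced receivers

virtual = (tx[:, None] + rx[None, :]).ravel()
print(np.sort(virtual))             # 12 virtual elements spanning 0..11
# A 12-element aperture from only 7 physical antennas: this is the source
# of the angular-resolution gain highlighted in the article.
```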

241 citations


Journal ArticleDOI
TL;DR: Several signal processing issues for maximizing the potential of deep reconstruction in fast MRI are discussed, which may facilitate further development of the networks and performance analysis from a theoretical point of view.
Abstract: Image reconstruction from undersampled k-space data has been playing an important role in fast magnetic resonance imaging (MRI). Recently, deep learning has demonstrated tremendous success in various fields and also shown potential in significantly accelerating MRI reconstruction with fewer measurements. This article provides an overview of deep-learning-based image reconstruction methods for MRI. Two types of deep-learning-based approaches are reviewed, those that are based on unrolled algorithms and those that are not, and the main structures of both are explained. Several signal processing issues for maximizing the potential of deep reconstruction in fast MRI are discussed, which may facilitate further development of the networks and performance analysis from a theoretical point of view.

232 citations


Journal ArticleDOI
TL;DR: Dual-function radar-communications (DFRC) designs are the focus of a large body of recent work and can lead to substantial gains in size, cost, power consumption, robustness, and performance, especially when both radar and communications operate in the same range, which is the case in vehicular applications.
Abstract: Self-driving cars constantly assess their environment to choose routes, comply with traffic regulations, and avoid hazards. To that aim, such vehicles are equipped with wireless communications transceivers as well as multiple sensors, including automotive radars. The fact that autonomous vehicles implement both radar and communications motivates designing these functionalities in a joint manner. Such dual-function radar-communications (DFRC) designs are the focus of a large body of recent work. These approaches can lead to substantial gains in size, cost, power consumption, robustness, and performance, especially when both radar and communications operate in the same range, which is the case in vehicular applications.

208 citations


Journal ArticleDOI
TL;DR: An overview of the recent machine-learning approaches that have been proposed specifically for improving parallel imaging is provided, along with a general background introduction to parallel MRI structured around the classical view of image- and k-space-based methods.
Abstract: Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received interest as a means of accelerating magnetic resonance imaging (MRI). A number of ideas inspired by deep-learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for both low-dose computed tomography and accelerated MRI. The additional integration of multicoil information to recover missing k-space lines in the MRI reconstruction process is studied less frequently, even though it is the de facto standard for the currently used accelerated MR acquisitions. This article provides an overview of the recent machine-learning approaches that have been proposed specifically for improving parallel imaging. A general background introduction to parallel MRI is given and structured around the classical view of image- and k-space-based methods. Linear and nonlinear methods are covered, followed by a discussion of the recent efforts to further improve parallel imaging using machine learning and, specifically, artificial neural networks. Image domain-based techniques that introduce improved regularizers are covered as well as k-space-based methods, where the focus is on better interpolation strategies using neural networks. Issues and open problems are discussed and recent efforts for producing open data sets and benchmarks for the community are examined.

204 citations


Journal ArticleDOI
You Li, Javier Ibanez-Guzman
TL;DR: A review of state-of-the-art automotive lidar technologies and the perception algorithms used with them and the limitations, challenges, and trends for automotive lidars and perception systems.
Abstract: Autonomous vehicles rely on their perception systems to acquire information about their immediate surroundings. It is necessary to detect the presence of other vehicles, pedestrians, and other relevant entities. Safety concerns and the need for accurate estimations have led to the introduction of lidar systems to complement camera- or radar-based perception systems. This article presents a review of state-of-the-art automotive lidar technologies and the perception algorithms used with those technologies. Lidar systems are introduced first by analyzing such a system's main components, from laser transmitter to beam-scanning mechanism. The advantages/disadvantages and the current status of various solutions are introduced and compared. Then, the specific perception pipeline for lidar data processing is detailed from an autonomous vehicle perspective. The model-driven approaches and emerging deep learning (DL) solutions are reviewed. Finally, we provide an overview of the limitations, challenges, and trends for automotive lidars and perception systems.

178 citations


Journal ArticleDOI
TL;DR: It is expected that this article will serve as a starting point for new researchers and engineers in the autonomous driving field and provide a bird's-eye view to both neuromorphic vision and autonomous driving research communities.
Abstract: As a bio-inspired and emerging sensor, an event-based neuromorphic vision sensor has a different working principle from standard frame-based cameras, which leads to promising properties of low energy consumption, low latency, high dynamic range (HDR), and high temporal resolution. It poses a paradigm shift in sensing and perceiving the environment by capturing local pixel-level light intensity changes and producing asynchronous event streams. Advanced technologies for the visual sensing system of autonomous vehicles, from standard computer vision to event-based neuromorphic vision, have been developed. In this tutorial-like article, a comprehensive review of the emerging technology is given. First, the course of the development of the neuromorphic vision sensor, which is derived from the understanding of the biological retina, is introduced. The signal processing techniques for event noise processing and event data representation are then discussed. Next, the signal processing algorithms and applications for event-based neuromorphic vision in autonomous driving and various assistance systems are reviewed. Finally, challenges and future research directions are pointed out. It is expected that this article will serve as a starting point for new researchers and engineers in the autonomous driving field and provide a bird's-eye view to both the neuromorphic vision and autonomous driving research communities.

162 citations


Journal ArticleDOI
TL;DR: This article describes the use of plug-and-play (PnP) algorithms for MRI image recovery and describes how the result of the PnP method can be interpreted as a solution to an equilibrium equation, allowing convergence analysis from this perspective.
Abstract: Magnetic resonance imaging (MRI) is a noninvasive diagnostic tool that provides excellent soft-tissue contrast without the use of ionizing radiation. Compared to other clinical imaging modalities (e.g., computed tomography or ultrasound), however, the data acquisition process for MRI is inherently slow, which motivates undersampling; thus, there is a need for accurate, efficient reconstruction methods from undersampled data sets. In this article, we describe the use of plug-and-play (PnP) algorithms for MRI image recovery. We first describe the linearly approximated inverse problem encountered in MRI. Then, we review several PnP methods for which the unifying commonality is to iteratively call a denoising subroutine as one step of a larger optimization-inspired algorithm. Next, we describe how the result of the PnP method can be interpreted as a solution to an equilibrium equation, allowing convergence analysis from this perspective. Finally, we present illustrative examples of PnP methods applied to MRI image recovery.
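A minimal sketch of the PnP iteration pattern described above, assuming a masked-FFT forward model and using a Gaussian filter as a stand-in denoiser (practical PnP methods plug in far stronger, often learned, denoisers):

```python
# PnP-ISTA-style loop for undersampled-Fourier recovery: a gradient step
# toward data consistency, then a denoiser in place of a proximal operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_ista(y, mask, n_iter=50, step=1.0):
    x = np.real(np.fft.ifft2(y))                  # zero-filled initialization
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x) - y      # data-consistency residual
        x = x - step * np.real(np.fft.ifft2(residual))
        x = gaussian_filter(x, sigma=1.0)         # "plug in" the denoiser
    return x

# usage: recover a smooth image from ~30% of its Fourier samples
img = np.outer(np.hanning(64), np.hanning(64))
mask = np.random.default_rng(0).random((64, 64)) < 0.3
rec = pnp_ista(mask * np.fft.fft2(img), mask)
```

The equilibrium analysis mentioned in the article studies the fixed points of exactly this kind of loop.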

157 citations


Journal ArticleDOI
TL;DR: In this paper, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function], and the fundamental problem is to estimate the translation or modulation parameters (e.g., delays, locations, or Dopplers) from noisy measurements.
Abstract: At the core of many sensing and imaging applications, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function] and the fundamental problem is to estimate the translation or modulation parameters (e.g., delays, locations, or Dopplers) from noisy measurements. This problem is centrally important to not only target localization in radar and sonar, channel estimation in wireless communications, and direction-of-arrival estimation in array signal processing, but also modern imaging modalities such as superresolution single-molecule fluorescence microscopy, nuclear magnetic resonance imaging, and spike localization in neural recordings, among others.
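For concreteness, the observation model described above can be written as follows; the symbols are generic placeholders rather than the article's exact notation:

```latex
y(t) \;=\; \sum_{k=1}^{K} a_k\, \psi(t - \tau_k) \;+\; w(t),
```

where $\psi$ is the known template (e.g., a PSF), the amplitudes $a_k$ and translations $\tau_k$ are the unknowns to be estimated, and $w(t)$ is measurement noise.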

112 citations


Journal ArticleDOI
TL;DR: An overview of distributed gradient methods for solving convex machine learning problems that typically involve two update steps: a gradient step based on the agent local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.
Abstract: This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^{m} f_i(x)$ in a system consisting of $m$ agents that are embedded in a communication network. Each agent $i$ has a collection of data captured by its privately known objective function $f_i(x)$. The distributed algorithms considered here obey two simple rules: privately known agent functions $f_i(x)$ cannot be disclosed to any other agent in the network, and every agent is aware of the local connectivity structure of the network, i.e., it knows its one-hop neighbors only. While obeying these two rules, the distributed algorithms that agents execute should find a solution to the overall system problem with the limited knowledge of the objective function and limited local communications. Given in this article is an overview of such algorithms that typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.
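A minimal sketch of the two-step update described above, mixing with a doubly stochastic matrix W and then taking a local gradient step; the ring network and quadratic local objectives are illustrative:

```python
# Distributed gradient descent: mix, then descend. Each row of X is one
# agent's iterate; agent i only ever uses rows of its one-hop neighbors.
import numpy as np

m, n = 5, 2
rng = np.random.default_rng(1)
targets = rng.normal(size=(m, n))            # f_i(x) = 0.5 * ||x - t_i||^2

# Doubly stochastic mixing matrix for a ring: each agent has 2 neighbors.
W = np.zeros((m, m))
for i in range(m):
    for j in (i - 1, i, i + 1):
        W[i, j % m] = 1 / 3

X = np.zeros((m, n))
for _ in range(200):
    grads = X - targets                      # local gradients, private data
    X = W @ X - 0.1 * grads                  # mix, then gradient step
print(X)                                     # all rows near mean(targets)
```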

105 citations


Journal ArticleDOI
TL;DR: This tutorial reviews the classical CS formulation, outlines the steps needed to transform this formulation into a deep-learning-based reconstruction framework, and discusses considerations in applying unrolled neural networks in the clinical setting.
Abstract: Compressed sensing (CS) reconstruction methods leverage sparse structure in underlying signals to recover high-resolution images from highly undersampled measurements. When applied to magnetic resonance imaging (MRI), CS has the potential to dramatically shorten MRI scan times, increase diagnostic value, and improve the overall patient experience. However, CS has several shortcomings that limit its clinical translation. These include 1) artifacts arising from inaccurate sparse modeling assumptions, 2) extensive parameter tuning required for each clinical application, and 3) clinically infeasible reconstruction times. Recently, CS has been extended to incorporate deep neural networks as a way of learning complex image priors from historical exam data. Commonly referred to as unrolled neural networks, these techniques have proven to be a compelling and practical approach to address the challenges of sparse CS. In this tutorial, we review the classical CS formulation and outline steps needed to transform this formulation into a deep-learning-based reconstruction framework. Supplementary open-source code in Python is used to demonstrate this approach with open databases. Further, we discuss considerations in applying unrolled neural networks in the clinical setting.
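The core idea of unrolling can be sketched in a few lines: write a fixed number of ISTA iterations as the layers of a feed-forward network. In a trained version the threshold (and possibly the step size) would be learned from data; here they are fixed, so this is a hedged illustration rather than the tutorial's actual networks.

```python
# Unrolled ISTA sketch: each loop body is one "layer" of the network.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_ista(y, A, n_layers=10, step=None, theta=0.05):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for stability
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft(x - step * A.T @ (A @ x - y), step * theta)
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(30, 60)) / np.sqrt(30)      # underdetermined system
x_true = np.zeros(60); x_true[[3, 17, 41]] = [1.0, -0.8, 0.5]
x_hat = unrolled_ista(A @ x_true, A, n_layers=100)
# x_hat approximates x_true; trained unrolled nets use far fewer layers.
```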

Journal ArticleDOI
TL;DR: This article reports on several key models and optimization algorithms for MR image reconstruction, including both methods that have been approved for clinical use and more recent methods being considered in the research community that use data-adaptive regularizers.
Abstract: The development of compressed-sensing (CS) methods for magnetic resonance (MR) image reconstruction led to an explosion of research on models and optimization algorithms for MR imaging (MRI). Roughly 10 years after such methods first appeared in the MRI literature, the U.S. Food and Drug Administration (FDA) approved certain CS methods for commercial use, making CS a clinical success story for MRI. This article reports on several key models and optimization algorithms for MR image reconstruction. Included are both methods that the FDA has approved for clinical use and more recent methods being considered in the research community that use data-adaptive regularizers. It presents in a single survey the many algorithms devised to exploit the structure of the system model and regularizers used in MRI.

Journal ArticleDOI
TL;DR: A unified algorithmic framework that combines variance reduction with gradient tracking to achieve robust performance and fast convergence and provides explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly convex.
Abstract: Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data samples are distributed across a network of nodes, and raw data sharing is not permitted due to privacy and/or resource constraints. In this article, we review decentralized stochastic first-order methods and provide a unified algorithmic framework that combines variance reduction with gradient tracking to achieve robust performance and fast convergence. We provide explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly convex and show their applicability to nonconvex problems via numerical experiments. Throughout the article, we provide intuitive illustrations of the main technical ideas by casting appropriate tradeoffs and comparisons among the methods of interest and by highlighting applications to decentralized training of machine learning models.
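A hedged sketch of the gradient-tracking component of the framework: each agent mixes its iterate with its neighbors' and maintains a tracker Y that estimates the network-average gradient. The ring topology and quadratic objectives are illustrative only.

```python
# Gradient tracking: X carries the iterates, Y tracks the average gradient.
import numpy as np

m, n = 5, 2
rng = np.random.default_rng(2)
targets = rng.normal(size=(m, n))         # f_i(x) = 0.5 * ||x - t_i||^2
grad = lambda X: X - targets              # stacked local gradients

W = np.zeros((m, m))
for i in range(m):                        # doubly stochastic ring
    for j in (i - 1, i, i + 1):
        W[i, j % m] = 1 / 3

X = np.zeros((m, n))
Y = grad(X)                               # tracker starts at local gradients
for _ in range(300):
    X_new = W @ X - 0.05 * Y
    Y = W @ Y + grad(X_new) - grad(X)     # update the gradient tracker
    X = X_new
print(X)                                  # rows agree, near mean(targets)
```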

Journal ArticleDOI
TL;DR: The role of graph convolutional filters in GNNs is discussed and it is shown that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
Abstract: Network data can be conveniently modeled as a graph signal, where data values are assigned to nodes of a graph that describes the underlying network topology. Successful learning from network data is built upon methods that effectively exploit this graph structure. In this article, we leverage graph signal processing (GSP) to characterize the representation space of graph neural networks (GNNs). We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology. These two properties offer insight about the workings of GNNs and help explain their scalability and transferability properties, which, coupled with their local and distributed nature, make GNNs powerful tools for learning in physical networks. We also introduce GNN extensions using edge-varying and autoregressive moving average (ARMA) graph filters and discuss their properties. Finally, we study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
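A graph convolutional filter, the building block discussed above, is a polynomial in a graph shift operator S applied to a graph signal x, i.e., y = sum_k h_k S^k x. A small self-contained sketch (graph, taps, and signal are illustrative):

```python
# Graph convolutional filter: y = h0*x + h1*S x + h2*S^2 x + ...
import numpy as np

def graph_filter(S, x, h):
    """Apply a polynomial graph filter with taps h to signal x."""
    y = np.zeros_like(x)
    Skx = x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx          # one more shift across the graph
    return y

# 4-node path graph (adjacency as shift operator) and a 3-tap filter.
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])        # impulse at node 0
print(graph_filter(S, x, h=[0.5, 0.3, 0.2]))
```

Because S and x permute together under a node relabeling, the output permutes the same way, which is the permutation equivariance property the article analyzes.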

Journal ArticleDOI
TL;DR: This article analyzes automotive radar interference and proposes several new approaches that combine industrial and academic expertise toward the goal of achieving interference-free autonomous driving (AD).
Abstract: Autonomous driving relies on a variety of sensors, especially radars, which have unique robustness under heavy rain/fog/snow and poor light conditions. With the rapid increase of the amount of radars used on modern vehicles, where most radars operate in the same frequency band, the risk of radar interference becomes a compelling issue. This article analyzes automotive radar interference and proposes several new approaches that combine industrial and academic expertise toward the goal of achieving interference-free autonomous driving (AD).

Journal ArticleDOI
TL;DR: The main working principles of SPL are discussed and recent advances in signal processing techniques for this modality are summarized, highlighting promising applications in AVs as well as a number of challenges for vehicular lidar that cannot be solved by better hardware alone.
Abstract: The safety and success of autonomous vehicles (AVs) depend on their ability to accurately map and respond to their surroundings in real time. One of the most promising recent technologies for depth mapping is single-photon lidar (SPL), which measures the time of flight of individual photons. The long-range capabilities (kilometers), excellent depth resolution (centimeters), and use of low-power (eye-safe) laser sources render this modality a strong candidate for use in AVs. While presenting unique opportunities, the remarkable sensitivity of single-photon detectors introduces several signal processing challenges. The discrete nature of photon counting and the particular design of the detection devices mean the acquired signals cannot be treated as arising in a linear system with additive Gaussian noise. Moreover, the number of useful photon detections may be small despite a large data volume, thus requiring careful modeling and algorithmic design for real-time performance. This article discusses the main working principles of SPL and summarizes recent advances in signal processing techniques for this modality, highlighting promising applications in AVs as well as a number of challenges for vehicular lidar that cannot be solved by better hardware alone.
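An illustrative toy version of the photon-counting model described above: Poisson counts from a pulse at the true depth bin plus uniform background, with depth read off the histogram. The pulse shape, rates, and matched-filter-style estimator are hypothetical simplifications, not the article's algorithms.

```python
# Single-photon lidar toy model: sparse Poisson counts, not Gaussian noise.
import numpy as np

rng = np.random.default_rng(3)
n_bins, true_bin = 1000, 420                  # time-of-flight histogram bins
signal = np.zeros(n_bins)
signal[true_bin - 2:true_bin + 3] = [1, 4, 8, 4, 1]   # pulse shape (a.u.)
rate = 0.02 + 0.5 * signal                    # background + signal rate
counts = rng.poisson(rate)                    # sparse photon-count data
est_bin = np.argmax(np.convolve(counts, [1, 4, 8, 4, 1], mode="same"))
print(true_bin, est_bin)                      # matched-filter style estimate
```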

Journal ArticleDOI
TL;DR: MTL is an approach to inductive transfer learning (using what is learned for one problem to assist with another problem), and it helps improve generalization performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias.
Abstract: The problem of simultaneously learning several related tasks has received considerable attention in several domains, especially in machine learning, with the so-called multitask learning (MTL) problem, or learning to learn problem [1], [2]. MTL is an approach to inductive transfer learning (using what is learned for one problem to assist with another problem), and it helps improve generalization performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias. Several strategies have been derived within this community under the assumption that all data are available beforehand at a fusion center.

Journal ArticleDOI
TL;DR: This article divides statistical inference and learning algorithms into two broad categories, namely, distributed algorithms and decentralized algorithms (see "Is It Distributed or Is It Decentralized?").
Abstract: Statistical inference and machine-learning algorithms have traditionally been developed for data available at a single location. Unlike this centralized setting, modern data sets are increasingly being distributed across multiple physical entities (sensors, devices, machines, data centers, and so on) for a multitude of reasons that range from storage, memory, and computational constraints to privacy concerns and engineering needs. This has necessitated the development of inference and learning algorithms capable of operating on noncolocated data. For this article, we divide such algorithms into two broad categories, namely, distributed algorithms and decentralized algorithms (see "Is It Distributed or Is It Decentralized?").

Journal ArticleDOI
TL;DR: In this article, the authors review recent findings and results on the global landscape of neural networks.
Abstract: One of the major concerns for neural network training is that the nonconvexity of the associated loss functions may cause a bad landscape. The recent success of neural networks suggests that their loss landscape is not too bad, but what specific results do we know about the landscape? In this article, we review recent findings and results on the global landscape of neural networks.

Journal ArticleDOI
TL;DR: The study of sampling signals on graphs with the goal of building an analog of sampling for standard signals in the time and spatial domains is reviewed, focusing on theory and potential applications.
Abstract: The study of sampling signals on graphs, with the goal of building an analog of sampling for standard signals in the time and spatial domains, has attracted considerable attention recently. Beyond adding to the growing theory on graph signal processing (GSP), sampling on graphs has various promising applications. In this article, we review the current progress on sampling over graphs, focusing on theory and potential applications.
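A small sketch of the basic sampling result for graphs, assuming a signal spanned by the first K Laplacian eigenvectors (a "K-bandlimited" signal), which can be recovered from K node samples whenever the sampled submatrix is full rank; the graph, bandwidth, and sample set are illustrative choices:

```python
# Sampling and recovering a bandlimited graph signal via least squares.
import numpy as np

rng = np.random.default_rng(4)
n, K = 20, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                # random undirected graph
L = np.diag(A.sum(1)) - A                     # combinatorial Laplacian
_, U = np.linalg.eigh(L)
x = U[:, :K] @ rng.normal(size=K)             # K-bandlimited signal

nodes = rng.choice(n, size=K, replace=False)  # sampled vertex set
coef, *_ = np.linalg.lstsq(U[nodes, :K], x[nodes], rcond=None)
x_hat = U[:, :K] @ coef
print(np.allclose(x, x_hat))                  # True if the sample set is valid
```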

Journal ArticleDOI
TL;DR: The intuitions and connections behind a core set of popular distributed algorithms are described, emphasizing how to balance computation and communication costs.
Abstract: Distributed learning has become a critical enabler of the massively connected world that many people envision. This article discusses four key elements of scalable distributed processing and real-time intelligence: problems, data, communication, and computation. Our aim is to provide a unique perspective of how these elements should work together in an effective and coherent manner. In particular, we selectively review recent techniques developed for optimizing nonconvex models (i.e., problem classes) that process batch and streaming data (data types) across networks in a distributed manner (communication and computation paradigm). We describe the intuitions and connections behind a core set of popular distributed algorithms, emphasizing how to balance computation and communication costs. Practical issues and future research directions will also be discussed.

Journal ArticleDOI
TL;DR: This article focuses on special types of online algorithms that can handle time-varying first-order optimization problems, with emphasis on machine learning and signal processing as well as data-driven control.
Abstract: There is a growing cross-disciplinary effort in the broad domain of optimization and learning with streams of data, applied to settings where traditional batch optimization techniques cannot produce solutions at time scales that match the interarrival times of the data points due to computational and/or communication bottlenecks. Special types of online algorithms can handle this situation, and this article focuses on such time-varying optimization algorithms, with emphasis on machine learning (ML) and signal processing (SP) as well as data-driven control (DDC). Approaches for the design of time-varying or online first-order optimization methods are discussed, with emphasis on algorithms that can handle errors in the gradient, as may arise when the gradient is estimated. Insights into performance metrics and accompanying claims are provided, along with evidence of cases where algorithms that are provably convergent in batch optimization may perform poorly in an online regime. The role of distributed computation is discussed. Illustrative numerical examples for a number of applications of broad interest are provided to convey key ideas.
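A minimal sketch of the time-varying setting: rather than solving each problem to completion, the algorithm takes one cheap gradient step per time instant and tracks a drifting optimum. The moving target and step size are illustrative.

```python
# Online gradient descent tracking a time-varying optimum theta(t).
import numpy as np

theta = lambda t: np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # drifting target
x = np.zeros(2)
for t in range(200):
    grad = x - theta(t)            # gradient of f_t(x) = 0.5||x - theta(t)||^2
    x = x - 0.5 * grad             # one (cheap) step per data arrival
# the tracking error settles to a small, nonvanishing value set by the drift
print(np.linalg.norm(x - theta(199)))
```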

Journal ArticleDOI
TL;DR: Fueled by recent advances in deep neural networks, reinforcement learning (RL) has been in the limelight because of many recent breakthroughs in artificial intelligence, including defeating humans in games, self-driving cars, smart-home automation, and service robots.
Abstract: Fueled by recent advances in deep neural networks, reinforcement learning (RL) has been in the limelight because of many recent breakthroughs in artificial intelligence, including defeating humans in games (e.g., chess, Go, StarCraft), self-driving cars, smart-home automation, and service robots, among many others. Despite these remarkable achievements, many basic tasks can still elude a single RL agent. Examples abound, from multiplayer games, multirobots, cellular-antenna tilt control, traffic-control systems, and smart power grids to network management.

Journal ArticleDOI
TL;DR: In this paper, the authors present a unified framework for incorporating various Transform Learning (TL) based models and discuss the connections between TL and convolutional or filter-bank models and corresponding multilayer extensions, with connections to deep learning.
Abstract: Magnetic resonance imaging (MRI) is widely used in clinical practice, but it has been traditionally limited by its slow data acquisition. Recent advances in compressed sensing (CS) techniques for MRI reduce acquisition time while maintaining high image quality. Whereas classical CS assumes the images are sparse in known analytical dictionaries or transform domains, methods using learned image models for reconstruction have become popular. The model could be prelearned from data sets or learned simultaneously with the reconstruction, i.e., blind CS (BCS). Besides the well-known synthesis dictionary model, recent advances in transform learning (TL) provide an efficient alternative framework for sparse modeling in MRI. TL-based methods enjoy numerous advantages, including exact sparse-coding, transform-update, and clustering solutions; cheap computation; and convergence guarantees; and they provide high-quality results in MRI compared to popular competing methods. This article reviews some recent works in MRI reconstruction from limited data, with a focus on the recent TL-based methods. We present a unified framework for incorporating various TL-based models and discuss the connections between TL and convolutional or filter-bank models and corresponding multilayer extensions, with connections to deep learning. Finally, we discuss recent trends in MRI, open problems, and future directions for the field.
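One of the exact solutions mentioned above can be shown in a few lines: for a transform W, the TL sparse code minimizing ||Wx - z||^2 + a^2 ||z||_0 is obtained in closed form by hard-thresholding Wx at level a. The transform and patch below are random placeholders, not a learned model.

```python
# Exact transform-learning sparse-coding step via hard thresholding.
import numpy as np

def hard_threshold(v, a):
    """Closed-form TL sparse code: keep entries with magnitude >= a."""
    return v * (np.abs(v) >= a)

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 8))        # stand-in for a learned sparsifying transform
x = rng.normal(size=8)             # vectorized image patch
z = hard_threshold(W @ x, a=1.0)   # exact, unlike synthesis sparse coding
print(z)
```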

Journal ArticleDOI
TL;DR: In recent years, an abundance of new molecular structures have been elucidated using cryo-electron microscopy (cryo-EM), largely due to advances in hardware technology and data processing techniques.
Abstract: In recent years, an abundance of new molecular structures have been elucidated using cryo-electron microscopy (cryo-EM), largely due to advances in hardware technology and data processing techniques. Owing to these exciting new developments, cryo-EM was selected by Nature Methods as the "Method of the Year 2015," and the Nobel Prize in Chemistry 2017 was awarded to three pioneers in the cryo-EM field: Jacques Dubochet, Joachim Frank, and Richard Henderson "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution" [93].

Journal ArticleDOI
TL;DR: Zeroth-order (ZO) optimization, as discussed by the authors, is a subset of gradient-free optimization that emerges in many signal processing and machine learning (ML) applications and is used for solving optimization problems similarly to gradient-based methods.
Abstract: Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning (ML) applications. It is used for solving optimization problems similarly to gradient-based methods. However, it does not require the gradient, using only function evaluations. Specifically, ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and the solution update. In this article, we provide a comprehensive review of ZO optimization, with an emphasis on showing the underlying intuition, optimization principles, and recent advances in convergence analysis. Moreover, we demonstrate promising applications of ZO optimization, such as evaluating robustness and generating explanations from black-box deep learning (DL) models and efficient online sensor management.
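A hedged sketch of the three ZO steps named above: estimate the gradient from function values via two-point random differences, use the estimate as the descent direction, and update. The black-box objective and hyperparameters are illustrative.

```python
# Zeroth-order optimization: gradient estimation from function values only.
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Average two-point estimates (f(x+mu*u) - f(x-mu*u)) / (2*mu) * u."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

f = lambda x: np.sum((x - 3.0) ** 2)      # black box: values only, no gradients
x = np.zeros(4)
rng = np.random.default_rng(6)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x, rng=rng)
print(x)                                   # approaches the optimum at 3
```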

Journal ArticleDOI
TL;DR: A major line of work in graph signal processing during the past 10 years has been to design new transform methods that account for the underlying graph structure to identify and exploit structure in data residing on a connected, weighted, undirected graph.
Abstract: A major line of work in graph signal processing [2] during the past 10 years has been to design new transform methods that account for the underlying graph structure to identify and exploit structure in data residing on a connected, weighted, undirected graph. The most common approach is to construct a dictionary of atoms (building block signals) and represent the graph signal of interest as a linear combination of these atoms. Such representations enable visual analysis of data, statistical analysis of data, and data compression, and they can also be leveraged as regularizers in machine learning and ill-posed inverse problems, such as inpainting, denoising, and classification.

Journal ArticleDOI
TL;DR: This article provides a detailed review of recent advances in the recovery of continuous-domain multidimensional signals from their few nonuniform (multichannel) measurements using structured low-rank (SLR) matrix completion formulation, and demonstrates the utility of the framework in a wide range of MR imaging applications.
Abstract: In this article, we provide a detailed review of recent advances in the recovery of continuous-domain multidimensional signals from their few nonuniform (multichannel) measurements using structured low-rank (SLR) matrix completion formulation. This framework is centered on the fundamental duality between the compactness (e.g., sparsity) of the continuous signal and the rank of a structured matrix, whose entries are functions of the signal. This property enables the reformulation of the signal recovery as an SLR matrix completion problem, which includes performance guarantees. We also review fast algorithms that are comparable in complexity to current compressed sensing (CS) methods, which enable the framework's application to large-scale magnetic resonance (MR) recovery problems. The remarkable flexibility of the formulation can be used to exploit signal properties that are difficult to capture by current sparse and low-rank optimization strategies. We demonstrate the utility of the framework in a wide range of MR imaging (MRI) applications, including highly accelerated imaging, calibration-free acquisition, MR artifact correction, and ungated dynamic MRI.
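The sparsity-rank duality at the heart of SLR methods can be checked numerically: a signal that is a sum of r complex exponentials yields a Hankel matrix of rank exactly r. The signal length and frequencies below are arbitrary illustration values.

```python
# Structured low-rank check: Hankel matrix of a sum of r exponentials.
import numpy as np

n, r = 64, 3
freqs = np.array([0.11, 0.23, 0.31])
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)   # r-sparse in frequency

H = np.array([[x[i + j] for j in range(n // 2)] for i in range(n // 2 + 1)])
s = np.linalg.svd(H, compute_uv=False)
print((s > 1e-8 * s[0]).sum())                        # prints r = 3
```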

Journal ArticleDOI
Mariya Doneva
TL;DR: This work states that non-Cartesian acquisition and iterative reconstruction techniques were not adopted in clinical MRI for many years, and, even today, their use is very limited.
Abstract: Since its inception in the early 1970s [1], magnetic resonance imaging (MRI) has revolutionized radiology and medicine. Apart from high-quality data acquisition, image reconstruction is an important step to guarantee high image quality in MRI. Although the very first MR images were obtained from data resembling radial projections of the imaged object by applying an iterative reconstruction algorithm [1], non-Cartesian acquisition and iterative reconstruction techniques were not adopted in clinical MRI for many years, and, even today, their use is very limited. The reason for this is twofold. First, the underlying assumption that the measured data are radial projections of the imaged object fails in the presence of B0 field inhomogeneity and/or gradient waveform imperfections. Second, the long reconstruction times associated with iterative reconstruction algorithms limit their practical application.

Journal ArticleDOI
TL;DR: In this paper, the authors explore how graph signal processing (GSP) can be used to extend CNN components to graphs to improve model performance and how to design the graph CNN architecture based on the topology or structure of the data graph.
Abstract: Deep learning, particularly convolutional neural networks (CNNs), has yielded rapid, significant improvements in computer vision and related domains. But conventional deep learning architectures perform poorly when data have an underlying graph structure, as in social, biological, and many other domains. This article explores 1) how graph signal processing (GSP) can be used to extend CNN components to graphs to improve model performance and 2) how to design the graph CNN architecture based on the topology or structure of the data graph.