Author

Jeff Calder

Bio: Jeff Calder is an academic researcher at the University of Minnesota. He has contributed to research on topics including sorting and the Laplacian matrix, has an h-index of 16, and has co-authored 68 publications receiving 717 citations. Previous affiliations of Jeff Calder include the University of California, Berkeley and the French Institute for Research in Computer Science and Automation (Inria).


Papers
Journal ArticleDOI
TL;DR: This article proposes a new approach that implements the QBI reconstruction algorithm in real time, using a fast and robust Laplace-Beltrami regularization without sacrificing the optimality of the Kalman filter. It also proposes a fast algorithm to recursively compute gradient orientation sets whose partial subsets are almost uniform.
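
A minimal sketch of the Laplace-Beltrami regularization the TL;DR refers to, in the spherical-harmonic formulation commonly used for q-ball imaging; the basis matrix B, the orders ells, and the weight lam are illustrative assumptions, not the paper's Kalman-filter implementation:

```python
# Sketch: Laplace-Beltrami regularized spherical-harmonic (SH) fitting.
# B is the real SH basis evaluated at the gradient directions (assumed given).
import numpy as np

def fit_sh_lb(signal, B, ells, lam=0.006):
    """Least-squares SH coefficients with Laplace-Beltrami smoothing.

    signal : (n_dirs,) diffusion signal on one shell
    B      : (n_dirs, n_coef) real SH basis matrix
    ells   : (n_coef,) SH order l of each basis function
    """
    # Each SH basis function Y_l^m is an eigenfunction of the
    # Laplace-Beltrami operator with eigenvalue -l(l+1), so the
    # smoothness penalty is diagonal in this basis.
    L = np.diag((ells * (ells + 1.0)) ** 2)
    return np.linalg.solve(B.T @ B + lam * L, B.T @ signal)
```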

56 citations

Posted Content
TL;DR: This work studies input gradient regularization of deep neural networks, and demonstrates that such regularization leads to generalization proofs and improved adversarial robustness.
Abstract: In this work we study input gradient regularization of deep neural networks, and demonstrate that such regularization leads to generalization proofs and improved adversarial robustness. The proof of generalization does not overcome the curse of dimensionality, but it is independent of the number of layers in the networks. The adversarial robustness regularization combines adversarial training, which we show to be equivalent to Total Variation regularization, with Lipschitz regularization. We demonstrate empirically that the regularized models are more robust, and that gradient norms of images can be used for attack detection.
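
A hedged PyTorch sketch of the input-gradient penalty the abstract describes; `model`, `lam`, and the cross-entropy fidelity term are illustrative assumptions rather than the authors' exact setup:

```python
# Sketch: training loss augmented with an input-gradient-norm penalty.
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)       # track gradients w.r.t. the input
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss with respect to the input images.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    # Penalize the squared input-gradient norm; as the abstract notes, the
    # same norm can also be used to flag adversarial inputs at test time.
    return loss + lam * grad_x.pow(2).sum(dim=tuple(range(1, x.dim()))).mean()
```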

51 citations

Journal ArticleDOI
TL;DR: It is proved that solutions to the graph p-Laplace equation are approximately Hölder continuous with high probability; the proof uses the viscosity solution machinery and the maximum principle on a graph.
Abstract: We study the game theoretic p-Laplacian for semi-supervised learning on graphs, and show that it is well-posed in the limit of finite labeled data and infinite unlabeled data. In particular, we show that the continuum limit of graph-based semi-supervised learning with the game theoretic p-Laplacian is a weighted version of the continuous p-Laplace equation. We also prove that solutions to the graph p-Laplace equation are approximately Hölder continuous with high probability. Our proof uses the viscosity solution machinery and the maximum principle on a graph.
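
A hedged NumPy sketch of the game-theoretic graph p-Laplacian idea: for p ≥ 2 the equation mixes a 2-Laplacian term (neighborhood average) with an infinity-Laplacian term (neighborhood midrange), and one can iterate this mix at the unlabeled nodes while holding labels fixed. The weight matrix W, the labels, and the unweighted-midrange convention behind the 1/(p-1) mixing weight are assumptions for illustration:

```python
# Sketch: semi-supervised learning with a graph p-Laplacian of
# game-theoretic type, via Jacobi-style iteration (assumes p >= 2).
import numpy as np

def p_laplace_learn(W, labeled, values, p=4.0, iters=2000):
    """W : (n, n) symmetric nonnegative weight matrix.
    labeled : indices of labeled nodes; values : their labels."""
    n = W.shape[0]
    u = np.zeros(n)
    u[labeled] = values
    unlabeled = np.setdiff1d(np.arange(n), labeled)
    deg = W.sum(axis=1)
    alpha = 1.0 / (p - 1.0)   # weight on the 2-Laplacian part (assumed form)
    for _ in range(iters):
        for x in unlabeled:
            nbrs = np.nonzero(W[x])[0]
            mean = W[x, nbrs] @ u[nbrs] / deg[x]         # 2-Laplacian average
            mid = 0.5 * (u[nbrs].max() + u[nbrs].min())  # infinity-Laplacian midrange
            u[x] = alpha * mean + (1 - alpha) * mid      # tug-of-war-with-noise mix
    return u
```

For p = 2 the update reduces to the usual harmonic (label propagation) iteration, while p → ∞ recovers the pure midrange update of Lipschitz learning.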

51 citations

Posted Content
28 Aug 2018
TL;DR: This paper shows that if the usual fidelity term used in training DNNs is augmented by a Lipschitz regularization term, then the networks converge and generalize.
Abstract: Generalization of deep neural networks (DNNs) is an open problem which, if solved, could impact the reliability and verification of deep neural network architectures. In this paper, we show that if the usual fidelity term used in training DNNs is augmented by a Lipschitz regularization term, then the networks converge and generalize. The convergence is in the limit as the number of data points, n → ∞, while also allowing the network to grow as needed to fit the data. Two regimes are identified: in the case of clean labels, we prove convergence to the label function, which corresponds to zero loss; in the case of corrupted labels, we prove convergence to a regularized label function, which is the solution of a limiting variational problem. In both cases, a convergence rate is also provided.
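
A minimal sketch, assuming a PyTorch classifier, of augmenting the fidelity term with an empirical Lipschitz penalty; the pairwise estimator and the constants are assumptions, not the paper's exact construction:

```python
# Sketch: fidelity loss plus a Lipschitz regularization term estimated
# from random pairs within the minibatch.
import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(model, x, y, lam=0.05, lip_target=1.0):
    fidelity = F.cross_entropy(model(x), y)
    # Empirical Lipschitz estimate: max over batch pairs of
    # |f(x_i) - f(x_j)| / |x_i - x_j|.
    perm = torch.randperm(x.size(0))
    num = (model(x) - model(x[perm])).flatten(1).norm(dim=1)
    den = (x - x[perm]).flatten(1).norm(dim=1).clamp_min(1e-8)
    lip = (num / den).max()
    # Penalize only the excess over a target Lipschitz constant.
    return fidelity + lam * torch.relu(lip - lip_target) ** 2
```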

49 citations

Journal ArticleDOI
03 Dec 2019
TL;DR: It is proved that Lipschitz learning on graphs is consistent with the absolutely minimal Lipschitz extension problem in the limit of infinite unlabeled data and finite labeled data, and it is shown that the continuum limit is independent of the distribution of the unlabeled data.
Abstract: We study the consistency of Lipschitz learning on graphs in the limit of infinite unlabeled data and finite labeled data. Previous work has conjectured that Lipschitz learning is well-posed in this limit.
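
A hedged sketch of Lipschitz learning on an unweighted graph: iterating the midrange update at unlabeled nodes, whose fixed point is discretely infinity-harmonic (the graph analogue of an absolutely minimal Lipschitz extension). The adjacency lists and labels below are illustrative:

```python
# Sketch: Lipschitz learning via the midrange (infinity-harmonic) iteration.
import numpy as np

def lipschitz_learn(adj, labeled, values, iters=5000):
    """adj : list of neighbor-index lists; labeled/values : boundary data."""
    n = len(adj)
    u = np.zeros(n)
    u[labeled] = values
    lab = set(labeled)
    unlabeled = [i for i in range(n) if i not in lab]
    for _ in range(iters):
        for x in unlabeled:
            nbr = u[adj[x]]
            # Midrange update: its fixed point minimizes the local
            # Lipschitz constant at x.
            u[x] = 0.5 * (nbr.max() + nbr.min())
    return u
```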

40 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors studied a random growth model in two dimensions closely related to the one-dimensional totally asymmetric exclusion process and showed that the shape fluctuations, appropriately scaled, converge in distribution to the Tracy-Widom largest-eigenvalue distribution for the Gaussian Unitary Ensemble.
Abstract: We study a certain random growth model in two dimensions closely related to the one-dimensional totally asymmetric exclusion process. The results show that the shape fluctuations, appropriately scaled, converge in distribution to the Tracy-Widom largest-eigenvalue distribution for the Gaussian Unitary Ensemble.
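
A hedged simulation sketch of the phenomenon the abstract states: last-passage percolation with i.i.d. exponential weights is a standard coupling with the corner-growth/TASEP picture, and its rescaled passage times approach the GUE Tracy-Widom law. The exponential-weight model, its scaling constants, and the sample sizes are assumptions for illustration, not necessarily the paper's exact setting:

```python
# Sketch: Tracy-Widom fluctuations in exponential last-passage percolation.
import numpy as np

def last_passage_time(n, rng):
    """Max-plus dynamic program for the passage time to corner (n, n)."""
    w = rng.exponential(1.0, size=(n, n))
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            best = max(G[i - 1, j] if i else 0.0, G[i, j - 1] if j else 0.0)
            G[i, j] = best + w[i, j]
    return G[-1, -1]

rng = np.random.default_rng(0)
n, samples = 100, 300
g = np.array([last_passage_time(n, rng) for _ in range(samples)])
# Rescaled fluctuations; their law approaches the GUE Tracy-Widom
# distribution (mean about -1.77, standard deviation about 0.90).
f = (g - 4 * n) / (2 ** (4 / 3) * n ** (1 / 3))
print(f.mean(), f.std())
```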

1,031 citations

Journal ArticleDOI
TL;DR: Medical imaging systems: physical principles and image reconstruction algorithms for magnetic resonance tomography, ultrasound, and computed tomography (CT), together with applications such as image enhancement, image registration, and functional magnetic resonance imaging (fMRI).

536 citations

Journal ArticleDOI
TL;DR: In diffusion MRI, diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform from a Cartesian sampling of the diffusion signal, while q-ball imaging directly reconstructs the orientation distribution function, providing high angular resolution diffusion imaging.
Abstract: PURPOSE: In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. Careful design is central to the success of multishell acquisition and reconstruction techniques. METHODS: The design of multishell acquisition schemes is, however, still an open and active field of research. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to the multishell case. RESULTS: We evaluate the impact of our method using simulations of the angular resolution in one- and two-fiber-bundle configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. DISCUSSION: We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI.
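
A minimal single-shell sketch of the electrostatic-repulsion idea the abstract generalizes to multiple shells: gradient descent on the Coulomb energy of antipodally symmetric directions on the sphere. The step size, iteration count, and single-shell restriction are assumptions for illustration:

```python
# Sketch: uniform gradient directions by electrostatic repulsion on the sphere.
import numpy as np

def repulsion_directions(n, iters=2000, step=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(iters):
        force = np.zeros_like(x)
        for s in (1.0, -1.0):                 # antipodal symmetry: repel +/- pairs
            d = x[:, None, :] - s * x[None, :, :]
            r = np.linalg.norm(d, axis=2)
            np.fill_diagonal(r, np.inf)       # skip self-interaction
            force += (d / (r ** 3)[:, :, None]).sum(axis=1)  # Coulomb force, 1/r^2
        x += step * force                     # move along the repulsive force
        x /= np.linalg.norm(x, axis=1, keepdims=True)        # project back to sphere
    return x
```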

275 citations

Proceedings Article
01 Jan 2010
TL;DR: In this article, the authors derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error.
Abstract: We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error. This provides a novel approach, different from complexity or stability arguments, to study generalization of learning algorithms. One advantage of the robustness approach, compared to previous methods, is the geometric intuition it conveys. Consequently, robustness-based analysis is easy to extend to learning in non-standard setups such as Markovian samples or quantile loss. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property that is required for learning algorithms to work.

252 citations