Author

Stanley Osher

Bio: Stanley Osher is an academic researcher from the University of California, Los Angeles. The author has contributed to research in topics including the level set method and hyperbolic partial differential equations. The author has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. Previous affiliations of Stanley Osher include the University of Minnesota and the University of Innsbruck.


Papers
Posted Content
TL;DR: A utility enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS) is investigated, in which parameter aggregation with injected Gaussian noise is improved in statistical precision without spending additional privacy budget.
Abstract: Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models. In this paper, we investigate a utility enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS), where the parameter aggregation with injected Gaussian noise is improved in statistical precision without losing privacy budget. Our key observation is that the aggregated gradients in federated learning often enjoy a type of smoothness, i.e. sparsity in the graph Fourier basis with polynomial decays of Fourier coefficients as frequency grows, which can be exploited by the Laplacian smoothing efficiently. Under a prescribed differential privacy budget, convergence error bounds with tight rates are provided for DP-Fed-LS with uniform subsampling of heterogeneous Non-IID data, revealing possible utility improvement of Laplacian smoothing in effective dimensionality and variance reduction, among others. Experiments over MNIST, SVHN, and Shakespeare datasets show that the proposed method can improve model accuracy with DP-guarantee and membership privacy under both uniform and Poisson subsampling mechanisms.
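
The Laplacian smoothing step can be illustrated with a short sketch. Assuming the one-dimensional periodic graph Laplacian L used in the Laplacian smoothing gradient descent line of work, the smoothed vector solves (I + sigma*L) y = g, which diagonalizes under the DFT and so costs O(d log d). The function name and toy signal below are hypothetical.

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Apply (I + sigma * L)^{-1} to a noisy vector g, where L is the
    1-D discrete Laplacian with periodic boundary. The operator is
    diagonalized by the DFT, so the solve costs O(d log d)."""
    d = g.size
    # Eigenvalues of I + sigma*L under the DFT: 1 + 2*sigma*(1 - cos(2*pi*k/d))
    eig = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * np.arange(d) / d))
    return np.real(np.fft.ifft(np.fft.fft(g) / eig))

# Toy use: smooth a Gaussian-noised stand-in for an aggregated gradient
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
noisy = clean + rng.normal(scale=0.5, size=clean.size)   # injected Gaussian noise
smoothed = laplacian_smooth(noisy, sigma=2.0)
print(np.linalg.norm(noisy - clean), np.linalg.norm(smoothed - clean))
```

Because the filter damps high frequencies, it exploits exactly the smoothness the paper observes: aggregated gradients whose Fourier coefficients decay polynomially lose mostly noise, not signal.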

4 citations

Posted Content
05 Aug 2020
TL;DR: A new mechanism, called adversarial projection, is presented that projects a given signal onto the intrinsically low-dimensional manifold of true data; it can be used for solving inverse problems, which consist of recovering a signal from a collection of noisy measurements.
Abstract: We present a new mechanism, called adversarial projection, that projects a given signal onto the intrinsically low dimensional manifold of true data. This operator can be used for solving inverse problems, which consist of recovering a signal from a collection of noisy measurements. Rather than attempt to encode prior knowledge via an analytic regularizer, we leverage available data to project signals directly onto the (possibly nonlinear) manifold of true data (i.e., regularize via an indicator function of the manifold). Our approach avoids the difficult task of forming a direct representation of the manifold. Instead, we directly learn the projection operator by solving a sequence of unsupervised learning problems, and we prove our method converges in probability to the desired projection. This operator can then be directly incorporated into optimization algorithms in the same manner as Plug-and-Play methods, but now with robust theoretical guarantees. Numerical examples are provided.
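
A minimal sketch of how such a projection plugs into an iterative solver, in the spirit of Plug-and-Play methods: alternate a gradient step on the data-fit term with the projection. The `project` argument stands in for the learned operator, and the toy "manifold" of non-negative sparse signals is a hypothetical illustration, not the paper's learned projection.

```python
import numpy as np

def recover(A, b, project, steps=200, eta=None):
    """Plug-and-Play style recovery: a gradient step on 0.5*||Ax - b||^2
    followed by projection onto the (learned) data manifold."""
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz constant
    x = A.T @ b
    for _ in range(steps):
        x = x - eta * A.T @ (A @ x - b)   # gradient step on the measurement fit
        x = project(x)                    # learned projection replaces a regularizer's prox
    return x

# Toy example: signals that are non-negative and sparse; projection by clip + threshold
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 100))
x_true = np.maximum(rng.normal(size=100), 0) * (rng.random(100) < 0.1)
b = A @ x_true + 0.01 * rng.normal(size=30)
proj = lambda x: np.where(np.maximum(x, 0) > 0.05, np.maximum(x, 0), 0.0)
print(np.linalg.norm(recover(A, b, proj) - x_true))
```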

4 citations

Book ChapterDOI
01 Jan 2000
TL;DR: Molecular Beam Epitaxy (MBE) is a method for growing atomically thin films of material, in which atoms are deposited on a surface and hop randomly until attaching at the edges of partially completed atomic monolayers.
Abstract: Molecular Beam Epitaxy is a method for growing atomically thin films of material. During epitaxial growth, atoms are deposited on a surface, where they hop randomly until attaching at the edges of partially completed atomic monolayers. This process has practical application to the fabrication of high speed semiconductor electronic devices.
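
As a hypothetical toy illustration of the deposition-and-hopping picture (not the chapter's actual growth model), a one-dimensional solid-on-solid simulation drops atoms at random sites and lets each relax toward a lower neighbor:

```python
import numpy as np

def grow(width=50, atoms=500, seed=0):
    """Toy 1-D solid-on-solid growth: each atom lands on a random site,
    then moves to a neighboring site if that site is lower, a crude
    stand-in for hopping until attaching at a monolayer edge."""
    rng = np.random.default_rng(seed)
    h = np.zeros(width, dtype=int)   # column heights
    for _ in range(atoms):
        i = rng.integers(width)
        left, right = (i - 1) % width, (i + 1) % width
        # settle at the lowest of the landing site and its two neighbors
        j = min((i, left, right), key=lambda k: h[k])
        h[j] += 1
    return h

print(grow())
```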

4 citations

Proceedings ArticleDOI
16 Apr 2015
TL;DR: This work presents a calibration-free parallel magnetic resonance imaging (pMRI) reconstruction approach that exploits the fact that image structures typically repeat themselves in several locations in the image domain, and proposes an iterative algorithm based on a variable splitting strategy.
Abstract: In this work we present a calibration-free parallel magnetic resonance imaging (pMRI) reconstruction approach by exploiting the fact that image structures typically tend to repeat themselves in several locations in the image domain. We use this prior information along with the correlation that exists among the different MR images, which are acquired from multiple receiver coils, to improve reconstructions from under-sampled data with arbitrary k-space trajectories. To accomplish this, we follow a variational approach and cast the pMRI reconstruction problem as the minimization of an energy functional that involves a vectorial non-local total variation (NLTV) regularizer. Further, to solve the posed optimization problem we propose an iterative algorithm which is based on a variable splitting strategy. To assess the reconstruction quality of the proposed method, we provide comparisons with alternative techniques and show that our results can be very competitive.
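
A minimal sketch of the variable splitting strategy on a simplified stand-in problem: ordinary total variation in place of the paper's vectorial NLTV, and a single generic measurement operator in place of multi-coil k-space data. The split z = Dx turns the nonsmooth term into a soft-thresholding step; all names and parameters below are illustrative.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_split(A, b, lam=0.1, rho=1.0, iters=100):
    """Variable splitting for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1,
    with z = Dx as the split variable and D a first-difference operator."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n first differences
    z = np.zeros(n - 1); u = np.zeros(n - 1)
    M = A.T @ A + rho * D.T @ D                # fixed linear system for the x-update
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)         # proximal step for the nonsmooth term
        u += D @ x - z                         # dual update
    return x

rng = np.random.default_rng(2)
x_true = np.repeat([0.0, 1.0, -0.5, 2.0], 25)  # piecewise-constant test signal
A = rng.normal(size=(60, 100))
b = A @ x_true + 0.05 * rng.normal(size=60)
print(np.linalg.norm(tv_split(A, b) - x_true))
```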

4 citations

Journal ArticleDOI
TL;DR: A level set based surface capturing algorithm is presented that first captures the aneurysms from the vascular tree; applications to medical images show the accuracy, consistency, and robustness of the method in capturing brain aneurysms and quantifying their volume.
Abstract: Brain aneurysm rupture has been reported to be closely related to aneurysm size. The current method used to determine aneurysm size is to measure the dimension of the aneurysm dome and the width of the aneurysm neck. Since aneurysms usually have complicated shapes, using just the size of the aneurysm dome and neck may not be accurate and may overlook important geometrical information. In this paper we present a level set based surface capturing algorithm to first capture the aneurysms from the vascular tree. Since aneurysms are described by level set functions, volumes, curvatures and other geometric quantities of the aneurysm surface can easily be computed for medical studies. Experiments and comparisons with models used for capturing illusory contours in 2D images are performed. Applications to medical images are also presented to show the accuracy, consistency and robustness of our method in capturing brain aneurysms and volume quantification.
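
A short sketch of why the level set representation makes geometric quantities easy to compute: with phi < 0 inside the captured surface, volume is a voxel count and mean curvature is the divergence of the normalized gradient. The grid, radius, and function names below are illustrative.

```python
import numpy as np

def level_set_geometry(phi, h=1.0):
    """Geometric quantities from a level set function phi (phi < 0 inside).
    Volume: interior voxel count times voxel volume.
    Mean curvature: div(grad(phi) / |grad(phi)|) on the grid."""
    volume = np.sum(phi < 0) * h**3
    gx, gy, gz = np.gradient(phi, h)
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12
    nx, ny, nz = gx / norm, gy / norm, gz / norm
    curv = (np.gradient(nx, h, axis=0) +
            np.gradient(ny, h, axis=1) +
            np.gradient(nz, h, axis=2))
    return volume, curv

# Test on a sphere of radius 10: exact volume 4/3*pi*10^3, about 4188.8
x, y, z = np.mgrid[-16:16, -16:16, -16:16]
phi = np.sqrt(x**2 + y**2 + z**2) - 10.0   # signed distance to the sphere
vol, curv = level_set_geometry(phi)
print(vol)
```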

4 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
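
A sketch of a single Inception module as described in the paper: parallel 1x1, 3x3, and 5x5 convolutions plus max pooling, with 1x1 reductions keeping the computational budget constant, concatenated along channels. The channel counts below match those reported for GoogLeNet's first inception block; some of the ReLUs used in the full network are omitted for brevity.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """One Inception block: four parallel branches whose outputs are
    concatenated along the channel dimension."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c1, 1)                          # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1)) # 1x1 reduce -> 3x3
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2)) # 1x1 reduce -> 5x5
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1))           # pool -> 1x1 proj
    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# The "3a" block of GoogLeNet: 192 channels in, 64+128+32+32 = 256 out
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)   # torch.Size([1, 256, 28, 28])
```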

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, the author suggests, something ethereal about i, the square root of minus one: at first it seems an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
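
A sketch of the core idea behind the third (spatial-decomposition) algorithm: each processor owns a fixed region of the box, and atoms are assigned to processors by position. The helper below shows only the assignment; in a real message-passing implementation, atoms within a cutoff of a region face would also be exchanged with neighboring processors each step. Names and the toy setup are hypothetical.

```python
import numpy as np

def spatial_decomposition(positions, box, grid):
    """Assign each atom to a processor by the spatial region containing it.
    'grid' gives the processor layout, e.g. (2, 2, 2) for 8 processors."""
    cell = box / np.asarray(grid)                 # per-processor region size
    idx = np.floor(positions / cell).astype(int)  # 3-D region index per atom
    return np.ravel_multi_index(idx.T, grid)      # flatten to a processor id

rng = np.random.default_rng(3)
pos = rng.random((1000, 3)) * 10.0                # 1000 atoms in a 10^3 box
owners = spatial_decomposition(pos, box=10.0, grid=(2, 2, 2))
print(np.bincount(owners))                        # atoms per processor, roughly balanced
```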

29,323 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
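
The lasso is the review's canonical worked example, and a minimal sketch of scaled-form ADMM for it shows the three-step pattern: a ridge-regression x-update, a soft-thresholding z-update, and a running dual update. Parameter choices below are illustrative.

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # factor once in a serious implementation
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))                    # ridge solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # prox of l1
        u += x - z                                                     # dual update
    return z

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 200))
x_true = np.zeros(200); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.nonzero(lasso_admm(A, b, lam=1.0))[0][:10])   # recovers a sparse support
```

The same splitting pattern underlies the distributed variants the review emphasizes: the x-update parallelizes across data blocks while the z-update and dual update enforce consensus.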

17,433 citations