Author

Stanley Osher

Bio: Stanley Osher is an academic researcher at the University of California, Los Angeles. His research focuses on the level set method and hyperbolic partial differential equations. He has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. Previous affiliations include the University of Minnesota and the University of Innsbruck.


Papers
Journal ArticleDOI
TL;DR: In this paper, a low-dimensional manifold model (LDMM) is proposed for attenuating extremely strong noise in seismic data, performing especially well at low signal-to-noise ratios (SNR).
Abstract: We have found that seismic data can be described by a low-dimensional manifold, and we have investigated a low-dimensional manifold model (LDMM) method for extremely strong noise attenuation. The LDMM supposes that the dimension of the patch manifold of seismic data should be low; in other words, the degree of freedom of the patches should be low. Under the linear-events assumption on a patch, the patch can be parameterized by the intercept and slope of the event, if the seismic wavelet is identical everywhere. The denoising problem is formed as an optimization problem, including a fidelity term and an LDMM regularization term. We have tested LDMM on synthetic seismic data with different noise levels. LDMM achieves better denoised results in comparison with the Fourier, curvelet, and nonlocal-mean filtering methods, especially in the presence of strong noise or low signal-to-noise ratios. We have also tested LDMM on field records, indicating that LDMM is a method for handling relatively …
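The key assumption above, that patches containing a linear event have only two degrees of freedom (intercept and slope) and therefore lie near a low-dimensional manifold, can be illustrated numerically. The sketch below is not the paper's algorithm; the wavelet shape and parameter ranges are invented for illustration. It generates patches from the two-parameter family and checks via SVD that their energy concentrates in far fewer dimensions than the ambient patch size.

```python
import numpy as np

def linear_event_patches(n_patches=500, w=8, rng=None):
    """Patches (w x w traces-by-time, flattened) containing one linear
    event: a fixed Gaussian wavelet whose arrival time varies linearly
    across traces, parameterized only by intercept b and slope s.
    Wavelet and parameter ranges are illustrative choices."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.arange(w)
    out = np.empty((n_patches, w * w))
    for k in range(n_patches):
        b = rng.uniform(2.0, 6.0)             # intercept of the event
        s = rng.uniform(-0.3, 0.3)            # slope of the event
        centers = b + s * np.arange(w)        # arrival time per trace
        d = t[None, :] - centers[:, None]
        out[k] = np.exp(-d**2 / 4.0).ravel()  # identical wavelet everywhere
    return out

P = linear_event_patches()
P -= P.mean(axis=0)
sv = np.linalg.svd(P, compute_uv=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
k99 = int(np.searchsorted(energy, 0.99)) + 1  # components for 99% energy
# k99 is far below the ambient dimension 64: the patch manifold is low-dimensional
```

The LDMM regularizer penalizes exactly this effective dimension of the patch set, alongside the data-fidelity term.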

13 citations

Posted Content
26 Nov 2018
TL;DR: It is shown that even an ensemble of two ResNet20s achieves 5% higher accuracy against the strongest iterative fast gradient sign attack than the state-of-the-art adversarial defense algorithm.
Abstract: We propose a simple yet powerful ResNet ensemble algorithm which consists of two components: First, we modify the base ResNet by adding Gaussian noise of specified variance to the output of each original residual mapping. Second, we average the outputs of multiple parallel and jointly trained modified ResNets to get the final prediction. Heuristically, these two simple steps give an approximation to the well-known Feynman-Kac formula for representing the solution of a transport equation with viscosity, or a convection-diffusion equation. This simple ensemble algorithm improves neural nets' generalizability and robustness against adversarial attacks. In particular, for the CIFAR10 benchmark, with projected gradient descent adversarial training, we show that even an ensemble of two ResNet20s leads to a 5% higher accuracy against the strongest iterative fast gradient sign attack than the state-of-the-art adversarial defense algorithm.
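The two steps, injecting Gaussian noise into each residual mapping and averaging several noisy forward passes, can be sketched with a toy numpy network. The widths, weights, and noise level below are invented, and no training is performed; the sketch only illustrates the variance reduction the ensemble averaging provides.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_residual_block(x, W, sigma, rng):
    """Step one: a modified residual mapping, with zero-mean Gaussian
    noise of std `sigma` added to the residual branch's output."""
    residual = np.tanh(x @ W)
    noise = sigma * rng.standard_normal(residual.shape)
    return x + residual + noise

def ensemble_predict(x, weights, sigma, n_members, rng):
    """Step two: average the outputs of several independently sampled
    noisy forward passes (the ensemble of modified ResNets)."""
    outs = []
    for _ in range(n_members):
        h = x
        for W in weights:
            h = noisy_residual_block(h, W, sigma, rng)
        outs.append(h)
    return np.mean(outs, axis=0)

x = rng.standard_normal((4, 16))
weights = [0.1 * rng.standard_normal((16, 16)) for _ in range(3)]

single = ensemble_predict(x, weights, sigma=0.1, n_members=1, rng=rng)
avg    = ensemble_predict(x, weights, sigma=0.1, n_members=32, rng=rng)
clean  = ensemble_predict(x, weights, sigma=0.0, n_members=1, rng=rng)
# Averaging many noisy passes concentrates the prediction near the
# noise-free network's output: the smoothing the ensemble exploits.
```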

13 citations

Journal ArticleDOI
TL;DR: The ENO adaptive tree methods proposed here combine the merits of tree structures and uniform meshes, and can take advantage of many well-developed ENO numerical methods based on uniform meshes.
Abstract: We develop high-order essentially non-oscillatory (ENO) schemes on non-uniform meshes based on generalized binary trees. The idea is to adopt an appropriate data structure which allows information to be communicated easily between the unstructured data structure and virtual uniform meshes. While generalized binary trees, as an unstructured data structure, can store solution information efficiently if combined with a good adaptive strategy, virtual uniform meshes allow us to take advantage of many well-developed ENO numerical methods based on uniform meshes. The ENO adaptive tree methods proposed here therefore combine the merits of tree structures and uniform meshes. Numerical examples demonstrate that the new method is efficient and accurate.
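A minimal sketch of the tree idea, under invented assumptions (a 1-D interval, an endpoint-variation refinement indicator, and a tanh front, none of which come from the paper): each leaf at depth d lines up with one cell of a virtual uniform mesh of 2**d cells, which is where uniform-mesh ENO stencils would operate.

```python
import numpy as np

def refine(lo, hi, f, tol, depth, max_depth, leaves):
    """One node of the binary tree over [lo, hi]. Split while the
    solution's variation across the node exceeds `tol` (a simple
    stand-in for a real adaptive indicator); otherwise record a leaf."""
    if depth < max_depth and abs(f(hi) - f(lo)) > tol:
        mid = 0.5 * (lo + hi)
        refine(lo, mid, f, tol, depth + 1, max_depth, leaves)
        refine(mid, hi, f, tol, depth + 1, max_depth, leaves)
    else:
        leaves.append((lo, hi, depth))

f = lambda x: np.tanh(50.0 * (x - 0.5))   # a sharp front at x = 0.5
leaves = []
refine(0.0, 1.0, f, tol=0.1, depth=0, max_depth=10, leaves=leaves)
depths = [d for _, _, d in leaves]
# Fine cells cluster at the front; smooth regions keep coarse cells.
```

The leaves tile the interval exactly, so solution data can be mapped back and forth between the tree and the virtual uniform meshes level by level.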

12 citations

Journal ArticleDOI
TL;DR: This paper proposes semi-implicit relaxed Douglas-Rachford (sir-DR), an accelerated iterative method to solve the classical ptychography problem, and shows that sir-DR improves convergence speed and reconstruction quality relative to the extended ptychographic iterative engine (ePIE) and the regularized ptychographic iterative engine (rPIE).
Abstract: Alternating-projection-based methods, such as ePIE and rPIE, have been used widely in ptychography. However, they only work well if there are adequate measurements (diffraction patterns); in the case of sparse data (i.e., fewer measurements), alternating projection underperforms and might not even converge. In this paper, we propose semi-implicit relaxed Douglas-Rachford (sir-DR), an accelerated iterative method, to solve the classical ptychography problem. Using both simulated and experimental data, we show that sir-DR improves the convergence speed and the reconstruction quality relative to ePIE and rPIE. Furthermore, in certain cases when sparsity is high, sir-DR converges while ePIE and rPIE fail. To facilitate others in using the algorithm, we post the Matlab source code of sir-DR on a public website (this http URL). We anticipate that this algorithm can be generally applied to the ptychographic reconstruction of a wide range of samples in the physical and biological sciences.
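sir-DR builds on Douglas-Rachford splitting. The family's basic form, alternating reflections across two constraint sets, can be sketched on a toy feasibility problem; the two convex sets below are invented for illustration and are not a ptychography model (where the sets would encode the measured diffraction magnitudes and the overlap constraint).

```python
import numpy as np

# Toy feasibility problem: find a point in the intersection of
# A = {x : x[0] = 1} (a line) and B = {x : ||x|| <= 2} (a disk).
def P_A(v):                      # projection onto the line x[0] = 1
    return np.array([1.0, v[1]])

def P_B(v):                      # projection onto the disk of radius 2
    n = np.linalg.norm(v)
    return v if n <= 2.0 else 2.0 * v / n

def reflect(P, v):               # reflector R = 2P - I
    return 2.0 * P(v) - v

z = np.array([3.0, 3.0])
for _ in range(1000):            # classical averaged Douglas-Rachford step
    z = 0.5 * (z + reflect(P_B, reflect(P_A, z)))
x = P_A(z)                       # the iterate's shadow lands in both sets
```

At a fixed point z of the iteration, P_A(z) lies in the intersection of the two sets; sir-DR accelerates this scheme with relaxation and a semi-implicit update suited to ptychographic constraints.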

12 citations

Book ChapterDOI
Bin Dong, Aichi Chien, Yu Mao, Jian Ye, Stanley Osher
06 Sep 2008
TL;DR: A level set based illusory surface algorithm is presented to capture aneurysms from the vascular tree, with applications to clinical image data demonstrating the accurate capture of a middle cerebral artery aneurysm.
Abstract: Brain aneurysm rupture has been reported to be directly related to the size of aneurysms. The current method used to determine aneurysm size is to manually measure the width of the neck and the height of the dome on a computer screen. Because aneurysms usually have complicated shapes, using the size of the aneurysm neck and dome may not be accurate and may overlook important geometrical information. In this paper we present a level set based illusory surface algorithm to capture the aneurysms from the vascular tree. Since the aneurysms are described by level set functions, not only the volume but also the curvature of aneurysms can be computed for medical studies. Experiments and comparisons with models used for capturing illusory contours in 2D images are performed. This includes applications to clinical image data demonstrating the procedure of accurately capturing a middle cerebral artery aneurysm.
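The abstract's point that a level set representation yields volume and curvature directly can be sketched on a synthetic shape. A circle stands in for the aneurysm surface here; the grid size and radius are invented, and this is not the paper's illusory-surface model.

```python
import numpy as np

# Signed-distance level set function for a circle of radius r:
# phi < 0 inside the shape, phi = 0 on its surface.
n, r = 256, 0.3
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - r
h = x[1] - x[0]

# Volume (area in 2-D) enclosed by the zero level set:
area = np.count_nonzero(phi < 0) * h * h        # approaches pi * r**2

# Curvature kappa = div(grad phi / |grad phi|), sampled in a narrow
# band around the front; for a circle the exact value is 1 / r.
gx, gy = np.gradient(phi, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12
kappa = np.gradient(gx / norm, h, axis=0) + np.gradient(gy / norm, h, axis=1)
band = np.abs(phi) < h
mean_curv = kappa[band].mean()
```

The same finite-difference formulas apply to a captured aneurysm's level set function, which is why the representation supports the medical measurements the paper describes.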

12 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seems an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
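For readers who have not met i since school, Python's built-in complex type makes the essay's object concrete: the square of i is minus one, yet computations that pass through i can land back on the real line.

```python
# 1j is Python's literal for the imaginary unit i.
i = 1j
print(i ** 2)                 # (-1+0j): the defining property

# A computation that passes through i but ends up real:
# the product of a complex number and its conjugate.
z = 2 + 3j
print(z * z.conjugate())      # (13+0j)
```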

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently, namely those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
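The third (spatial-decomposition) algorithm rests on a standard observation: with short-range forces, binning atoms into cells at least as wide as the cutoff means each atom only needs to check its own and neighboring cells, the work one spatial-region owner would do. A serial sketch, with an invented toy 2-D box and no periodic boundaries, verified against the brute-force pair list:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
L, cutoff, n_atoms = 10.0, 1.0, 200
pos = rng.uniform(0.0, L, size=(n_atoms, 2))

def pairs_brute(pos, cutoff):
    """All interacting pairs by checking every pair: O(N^2) work."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    i, j = np.where((d < cutoff) & (np.arange(n)[:, None] < np.arange(n)))
    return set(zip(i.tolist(), j.tolist()))

def pairs_cells(pos, cutoff):
    """Same pairs via a cell list: bin each atom into a cell of side
    `cutoff`, then search only the 3x3 block of neighboring cells."""
    cells = defaultdict(list)
    for idx, (cx, cy) in enumerate((pos // cutoff).astype(int)):
        cells[(cx, cy)].append(idx)
    found = set()
    for (cx, cy), atoms in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for a in atoms:
                    for b in cells.get((cx + dx, cy + dy), ()):
                        if a < b and np.linalg.norm(pos[a] - pos[b]) < cutoff:
                            found.add((a, b))
    return found
```

In the parallel version, each processor owns the atoms in its spatial region and exchanges only the thin boundary layers of neighboring regions, which is why the method scales to the atom counts reported above.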

29,323 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
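As a concrete instance of the review's setting, here is a minimal ADMM loop for the lasso, one of the applications the abstract lists, using the splitting f(x) = (1/2)||Ax - b||^2, g(z) = lam * ||z||_1 with the constraint x = z. The problem sizes and data below are synthetic.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: the proximal operator of k * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min (1/2)||Ax - b||^2 + lam * ||z||_1  s.t.  x = z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                                # scaled dual variable
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # x-update: ridge-type solve
        z = soft(x + u, lam / rho)                 # z-update: l1 prox
        u = u + x - z                              # dual update on x = z
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)
```

The three-line loop mirrors the review's template: an easy smooth subproblem, a cheap proximal step, and a running dual update; the same pattern distributes when f splits across data blocks.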

17,433 citations