Author

Stanley Osher

Bio: Stanley Osher is an academic researcher from the University of California, Los Angeles. He has contributed to research in topics including the level set method and hyperbolic partial differential equations. He has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. His previous affiliations include the University of Minnesota and the University of Innsbruck.


Papers
Journal Article
TL;DR: In this article, the level set method is used to simulate the growth of thin films through the motion of island boundaries; the model is a continuum in the lateral directions but retains atomic-scale discreteness in the growth direction.

73 citations
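The TL;DR above refers to the level set framework: the island boundary is represented as the zero level set of a function phi and advanced in its normal direction. Below is a minimal sketch, assuming the standard first-order Osher–Sethian upwind discretization of phi_t + F|grad phi| = 0 with a constant growth speed F; it illustrates the boundary-evolution machinery only, not the paper's island-dynamics model.

```python
import numpy as np

def level_set_step(phi, F, dx, dt):
    """One upwind (Godunov, F > 0) step for phi_t + F |grad phi| = 0."""
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward  difference in x
    dym = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward  difference in y
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2 +
                   np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    return phi - dt * F * grad

# Usage: a circular island of radius 0.2 grows outward with unit speed.
n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X ** 2 + Y ** 2) - 0.2            # signed distance to the boundary
dx = x[1] - x[0]
for _ in range(50):
    phi = level_set_step(phi, F=1.0, dx=dx, dt=0.5 * dx)   # CFL-safe step
# The island boundary is now the zero contour of phi (radius ~ 0.2 + 25*dx).
```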

Journal Article
TL;DR: In this paper, simple inequalities are presented for the viscosity solution of a Hamilton-Jacobi equation in N space dimension when neither the initial data nor the Hamiltonian need be convex (or concave).
Abstract: Simple inequalities are presented for the viscosity solution of a Hamilton–Jacobi equation in N space dimensions when neither the initial data nor the Hamiltonian need be convex (or concave). The initial data are uniformly Lipschitz and can be written as the sum of a convex function in a group of variables and a concave function in the remaining variables, therefore including the nonconvex Riemann problem. The inequalities become equalities wherever a "maxmin" equals a "minmax", and thus a representation formula for this problem is obtained, generalizing the classical Hopf formulas.

73 citations
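For reference, these are the classical Hopf formulas that the representation formula above generalizes, stated under the standard hypotheses; the paper's "maxmin" formula itself is not reproduced here.

```latex
% Cauchy problem:  u_t + H(Du) = 0,  u(x, 0) = g(x),  x \in \mathbb{R}^N.

% Hopf-Lax (Lax-Oleinik) formula, valid for convex H and Lipschitz g:
u(x, t) = \min_{y \in \mathbb{R}^N}
          \Big[ g(y) + t \, H^{*}\!\Big( \frac{x - y}{t} \Big) \Big]

% Hopf's formula, valid for convex g and merely continuous H:
u(x, t) = \max_{q \in \mathbb{R}^N}
          \Big[ \langle q, x \rangle - g^{*}(q) - t \, H(q) \Big]

% where  f^{*}(q) = \sup_{y} [ \langle q, y \rangle - f(y) ]
% denotes the Legendre-Fenchel transform.
```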

Book Chapter
TL;DR: This paper generalizes the iterated refinement method to a time-continuous inverse scale-space formulation, and introduces a relaxation technique using two evolution equations that allow accurate, efficient and straightforward implementation.
Abstract: In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a time-continuous inverse scale-space formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space method arises as a limit for a penalization parameter tending to zero, while the number of iteration steps tends to infinity. For the limiting flow, properties similar to those of the iterated refinement procedure hold. Specifically, when a discrepancy principle is used as the stopping criterion, the error between the reconstruction and the noise-free image decreases until termination, even if only the noisy image is available and a bound on the variance of the noise is known. The inverse flow is computed directly for one-dimensional signals, yielding high quality restorations. In higher spatial dimensions, we introduce a relaxation technique using two evolution equations. These equations allow accurate, efficient and straightforward implementation.

72 citations
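A sketch of the two formulations named in the abstract, in the usual Bregman-iteration notation; the exact time scaling of the limit is an assumption here. J is a convex regularizer (e.g. total variation), f the noisy image, and lambda the penalization parameter.

```latex
% Iterated refinement (Bregman iteration): a sequence of convex problems
u_{k+1} = \arg\min_{u} \Big[ J(u) - \langle p_k, u \rangle
          + \tfrac{\lambda}{2} \, \| u - f \|^2 \Big],
\qquad
p_{k+1} = p_k + \lambda \, (f - u_{k+1}) \in \partial J(u_{k+1}).

% Inverse scale space flow: the limit  \lambda \to 0  with  t \approx k \lambda
% (scaling assumed), starting from p(0) = 0 and evolving toward the noisy f:
\partial_t \, p(t) = f - u(t), \qquad p(t) \in \partial J(u(t)).
```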

Journal Article
TL;DR: In this paper, the authors derive Kreiss' sufficient conditions for stability of dissipative hyperbolic systems with constant coefficients as a corollary to a more general result; in particular, the condition of dissipativity is replaced by a weaker one.
Abstract: Strang has discussed stability of difference equations whose solutions satisfy the special boundary condition u = 0 on and outside of the boundaries. We shall use Toeplitz matrices to generalize this theory to systems of equations in one space variable with arbitrary homogeneous boundary conditions. The discussion will be confined to problems in the quarter plane with constant coefficients. The results can be easily generalized to variable coefficient and two point boundary value problems, using Kreiss' method [1] and/or Strang's in [2] and [3]. We shall derive Kreiss' sufficient conditions for stability of dissipative hyperbolic systems with constant coefficients as a corollary to a more general result. In particular, the condition of dissipativity is replaced by a weaker condition. We treat the explicit case in the main part of this work and add the implicit case as an appendix in part 7. The main results are stated in XIX and XXVIII. Kreiss' Theorem is derived in XXII. We give nondissipative examples in XXIII and XXIX. We hope to extend this technique to include problems in several space variables in the near future.

71 citations
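As a toy numerical companion to the Toeplitz viewpoint (an assumed illustration, not the paper's analysis): with the homogeneous boundary condition u_0 = 0, the quarter-plane operator of a scalar one-sided scheme truncates to a banded Toeplitz matrix, and l2-stability amounts to its powers staying uniformly bounded.

```python
import numpy as np

# Dissipative upwind scheme  u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n)
# on j >= 1 with u_0 = 0: the update matrix is Toeplitz with (1 - c) on
# the diagonal and c on the subdiagonal.
def upwind_toeplitz(m, c):
    return (1.0 - c) * np.eye(m) + c * np.eye(m, k=-1)

m, c = 100, 0.8                  # c = a*dt/dx, the Courant number (0 < c <= 1)
Q = upwind_toeplitz(m, c)
Qn, norms = np.eye(m), []
for _ in range(300):
    Qn = Qn @ Q
    norms.append(np.linalg.norm(Qn, 2))
print(f"max ||Q^n||_2 over 300 steps: {max(norms):.3f}")  # bounded (<= 1): stable
```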

Journal Article
TL;DR: This work has designed a new patch selection method for DDTF seismic data recovery to accelerate the filter bank training process in DDTF, while doing less damage to the recovery quality.
Abstract: Seismic data denoising and interpolation are essential preprocessing steps in any seismic data processing chain. Sparse transforms with a fixed basis are often used in these two steps. Recently, we have developed an adaptive learning method, the data-driven tight frame (DDTF) method, for seismic data denoising and interpolation. With its adaptability to seismic data, the DDTF method achieves high-quality recovery. For 2D seismic data, the DDTF method is much more efficient than traditional dictionary learning methods. But for 3D or 5D seismic data, the DDTF method results in a high computational expense. The motivation behind this work is to accelerate the filter bank training process in DDTF, while doing less damage to the recovery quality. The most frequently used method involves only a randomly selected subset of the training set. However, this random selection method uses no prior information about the data. We have designed a new patch selection method for DDTF seismic data recovery. We suppose...

70 citations
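The abstract is cut off before the authors' actual selection criterion, so the sketch below uses a hypothetical variance ranking purely to illustrate the shared idea: train the filter bank on an informative subset of patches rather than a purely random one. Both the function and the criterion are stand-ins, not the paper's method.

```python
import numpy as np

def select_patches(data, patch, n_train, seed=0):
    """Extract overlapping patches and keep the n_train highest-variance ones."""
    rng = np.random.default_rng(seed)
    H, W = data.shape
    step = patch // 2
    patches = np.stack([data[i:i + patch, j:j + patch].ravel()
                        for i in range(0, H - patch, step)
                        for j in range(0, W - patch, step)])
    # Hypothetical prior: high-variance patches carry more structure;
    # a tiny random jitter breaks ties.  A random subset would ignore this.
    scores = patches.var(axis=1) + 1e-12 * rng.random(len(patches))
    return patches[np.argsort(scores)[-n_train:]]

# Usage on a synthetic dipping event with noise.
rng = np.random.default_rng(0)
H, W = 128, 128
t = np.arange(H)[:, None] - 0.5 * np.arange(W)[None, :]
data = np.exp(-(t - 40.0) ** 2 / 20.0) + 0.1 * rng.standard_normal((H, W))
train = select_patches(data, patch=8, n_train=256)
print(train.shape)   # (256, 64): patch vectors for tight-frame training
```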


Cited by
Proceedings Article
07 Jun 2015
TL;DR: In this paper, the authors propose Inception, a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
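A minimal sketch of a single Inception module as the abstract describes it: parallel 1x1, 3x3, and 5x5 convolutions plus pooling, with 1x1 bottleneck convolutions to hold the computational budget down. Written in PyTorch; the channel counts are illustrative.

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """One Inception module: four parallel branches, concatenated on channels."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        relu = lambda: nn.ReLU(inplace=True)
        self.b1 = nn.Sequential(nn.Conv2d(c_in, c1, 1), relu())
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), relu(),          # bottleneck
                                nn.Conv2d(c3r, c3, 3, padding=1), relu())
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), relu(),          # bottleneck
                                nn.Conv2d(c5r, c5, 5, padding=2), relu())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1), relu())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Usage: 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
block = Inception(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)   # torch.Size([1, 256, 28, 28])
```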

Journal Article


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

29,323 citations
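A single-process toy sketch of the third strategy, spatial decomposition, under assumed simplifications (2D, Lennard-Jones units with sigma = epsilon = 1, periodic box). The box is cut into cells at least one cutoff wide; each cell plays the role of one processor's region, and forces on its atoms need positions only from the eight neighboring cells.

```python
import numpy as np

def lj_forces_spatial(pos, box, rc):
    """Short-range LJ forces via a cell list, cell by cell ("region by region")."""
    ncell = int(box // rc)
    assert ncell >= 3, "sketch assumes at least 3 cells per dimension"
    keys = [tuple(c) for c in (pos // (box / ncell)).astype(int) % ncell]
    owners = {}
    for i, c in enumerate(keys):
        owners.setdefault(c, []).append(i)
    forces = np.zeros_like(pos)
    for c, owned in owners.items():                    # one region per "processor"
        neighbors = [((c[0] + dx) % ncell, (c[1] + dy) % ncell)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        for i in owned:
            for nb in neighbors:                       # read neighbor ("ghost") atoms
                for j in owners.get(nb, []):
                    if j == i:
                        continue
                    r = pos[i] - pos[j]
                    r -= box * np.round(r / box)       # nearest periodic image
                    r2 = r @ r
                    if r2 < rc * rc:
                        inv6 = (1.0 / r2) ** 3
                        forces[i] += 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2 * r
    return forces

# Usage: 400 atoms in a 20 x 20 periodic box, cutoff 2.5 (LJ units).
pos = np.random.default_rng(1).uniform(0.0, 20.0, size=(400, 2))
f = lj_forces_spatial(pos, box=20.0, rc=2.5)
print(np.abs(f.sum(axis=0)))   # ~0: each pair contributes equal and opposite forces
```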

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
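To make the method concrete, here is a minimal ADMM sketch for one application the review surveys, the lasso, in the standard scaled-dual form (illustrative code, not the authors'):

```python
import numpy as np

def lasso_admm(A, b, gamma, rho=1.0, iters=200):
    """min_x 0.5 ||Ax - b||^2 + gamma ||x||_1 via ADMM with splitting x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)     # reused by every x-update
    Atb = A.T @ b
    k = gamma / rho
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))     # x-minimization
        v = x + u
        z = np.maximum(v - k, 0.0) - np.maximum(-v - k, 0.0)   # soft threshold (prox of l1)
        u = u + x - z                                          # scaled dual update
    return z

# Usage on a small synthetic sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = lasso_admm(A, b, gamma=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))   # expect only a few nonzeros
```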