scispace - formally typeset
Author

Stanley Osher

Bio: Stanley Osher is an academic researcher from the University of California, Los Angeles. The author has contributed to research in topics: Level set method & Hyperbolic partial differential equation. The author has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. Previous affiliations of Stanley Osher include the University of Minnesota & the University of Innsbruck.


Papers
Journal ArticleDOI
TL;DR: In this article, a nonconservative modification of the total energy, computed by solving a coupled evolution equation for the pressure, is developed for the thermally perfect Euler equations; it alleviates nonphysical oscillations near some material interfaces.
Abstract: Standard conservative discretizations of the compressible Euler equations have been shown to admit nonphysical oscillations near some material interfaces. For example, the calorically perfect Euler equations admit these oscillations when both temperature and gamma jump across an interface, but not when either temperature or gamma happen to be constant. These nonphysical oscillations can be alleviated to some degree with a nonconservative modification of the total energy computed by solving a coupled evolution equation for the pressure. In this paper, we develop and illustrate this method for the thermally perfect Euler equations.

32 citations

Journal ArticleDOI
17 Apr 2015-ACS Nano
TL;DR: This work determines dipole orientations using efficient new image analysis techniques and finds aligned dipoles to be highly defect tolerant, crossing molecular domain boundaries and substrate step edges.
Abstract: Carboranethiol molecules self-assemble into upright molecular monolayers on Au{111} with aligned dipoles in two dimensions. The positions and offsets of each molecule's geometric apex and local dipole moment are measured and correlated with sub-Angstrom precision. Juxtaposing simultaneously acquired images, we observe monodirectional offsets between the molecular apexes and dipole extrema. We determine dipole orientations using efficient new image analysis techniques and find aligned dipoles to be highly defect tolerant, crossing molecular domain boundaries and substrate step edges. The alignment observed, consistent with Monte Carlo simulations, forms through favorable intermolecular dipole-dipole interactions.

32 citations

Journal ArticleDOI
01 Jan 2018
TL;DR: A method for solving a large class of non-convex Hamilton-Jacobi partial differential equations (HJ PDEs) that yields decoupled subproblems, which can be solved in an embarrassingly parallel fashion.
Abstract: : In this paper, we develop a method for solving a large class of non-convex Hamilton-Jacobi partial differential equations (HJ PDE). The method yields decoupled subproblems, which can be solved in an embarrassingly parallel fashion. The complexity of the resulting algorithm is polynomial in the problem dimension; hence, it overcomes the curse of dimensionality [1, 2]. We extend previous work in[6] and apply the Hopf formula to solve HJ PDE involving non-convex Hamiltonians. We propose an ADMM approach for finding the minimizer associated with the Hopf formula. Some explicit formulae of proximal maps, as well as newly-defined stretch operators, are used in the numerical solutions of ADMM subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere.

32 citations
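The decoupling the abstract describes can be illustrated with a toy one-dimensional Hopf-formula evaluation: u(x,t) = sup_p [ x·p − J*(p) − t·H(p) ], where each grid point (x,t) is an independent optimization, hence embarrassingly parallel. The quadratic initial data and Hamiltonian below are illustrative assumptions, not the paper's test problems, and the brute-force grid maximization stands in for the paper's ADMM minimizer:

```python
import numpy as np

def hopf_solution(x, t, p_grid):
    # Hopf formula u(x,t) = sup_p [ x*p - J*(p) - t*H(p) ], evaluated by
    # brute-force maximization over a 1-D grid of momenta p.
    # Illustrative choices (not the paper's examples):
    #   initial data J(x) = x^2/2, whose convex conjugate is J*(p) = p^2/2
    #   Hamiltonian H(p) = p^2/2
    vals = x * p_grid - 0.5 * p_grid**2 - t * 0.5 * p_grid**2
    return vals.max()

p_grid = np.linspace(-5.0, 5.0, 10001)
# Each (x, t) evaluation is independent of every other -> embarrassingly parallel.
u = np.array([hopf_solution(x, 1.0, p_grid) for x in np.linspace(-2.0, 2.0, 5)])
```

For these choices the exact viscosity solution is u(x,t) = x^2 / (2(1+t)), which the grid maximization reproduces at the sampled points.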

Journal ArticleDOI
TL;DR: In this paper, the authors apply the level set method to compute the three-dimensional multivalued geometrical optics term in a paraxial formulation, which is obtained from the 3-D stationary eikonal equation by using one of the spatial directions as the evolution direction.
Abstract: We apply the level set method to compute the three-dimensional multivalued geometrical optics term in a paraxial formulation. The paraxial formulation is obtained from the 3-D stationary eikonal equation by using one of the spatial directions as the artificial evolution direction. The advection velocity field used to move level sets is obtained by the method of characteristics; therefore the motion of level sets is defined in phase space. The multivalued travel-time and amplitude-related quantity are obtained from solving advection equations with source terms. We derive an amplitude formula in a reduced phase space which is very convenient to use in the level set framework. By using a semi-Lagrangian method in the paraxial formulation, the method has O(N^2) rather than O(N^4) memory storage requirement for up to O(N^2) multiple point sources in the five-dimensional phase space, where N is the number of mesh points along one direction. Although the computational complexity is still O(MN^4), where M is the number of steps in the ODE solver for the semi-Lagrangian scheme, this disadvantage is largely overcome by the fact that up to O(N^2) multiple point sources can be treated simultaneously. Three-dimensional numerical examples demonstrate the efficiency and accuracy of the method.

32 citations

Posted Content
TL;DR: Algorithms to overcome the curse of dimensionality in possibly non-convex state-dependent Hamilton-Jacobi equations (HJ PDEs) arising from optimal control and differential game problems, and elsewhere are developed.
Abstract: In this paper, we develop algorithms to overcome the curse of dimensionality in possibly non-convex state-dependent Hamilton-Jacobi equations (HJ PDEs) arising from optimal control and differential game problems. The subproblems are independent and can be implemented in an embarrassingly parallel fashion. This is an ideal setup for perfect scaling in parallel computing. The algorithm is proposed to overcome the curse of dimensionality [1, 2] when solving HJ PDEs. The major contribution of the paper is to change an optimization problem over a space of curves into an optimization problem over a single vector, which goes beyond [23]. We extend [5, 6, 8], and conjecture a (Lax-type) minimization principle when the Hamiltonian is convex, as well as a (Hopf-type) maximization principle when the Hamiltonian is non-convex. The conjectured Hopf-type maximization principle is a generalization of the well-known Hopf formula [11, 16, 30]. We validated the formula under restricted assumptions, and refer our readers to [57], which validates our conjectures in a more general setting following a previous version of our paper. We conjecture that the weakest assumption is a pseudoconvexity assumption similar to [46]. The optimization problems are of the same dimension as the HJ PDE. We suggest a coordinate descent method for the minimization procedure in the generalized Lax/Hopf formula, with numerical differentiation used to compute the derivatives. This method is preferable since evaluating the function value itself requires some computational effort, especially for high-dimensional optimization problems. The use of multiple initial guesses and a certificate of correctness are suggested to overcome possibly multiple local extrema, since the optimization problem is no longer convex. Our method is expected to have applications in control theory, differential game problems, and elsewhere.

32 citations
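The coordinate descent with numerical differentiation that the abstract suggests can be sketched generically: cycle through coordinates and take a gradient step along each, estimating the partial derivative by central differences so no analytic gradient is needed. The objective, step size, and sweep count below are illustrative choices, not the paper's:

```python
import numpy as np

def central_diff(f, x, i, h=1e-5):
    # numerical partial derivative of f along coordinate i (central difference)
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

def coordinate_descent(f, x0, step=0.1, sweeps=200):
    # cycle through the coordinates, taking one gradient step per coordinate;
    # only function evaluations are needed, matching the numerical-differentiation
    # setting described above
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] -= step * central_diff(f, x, i)
    return x
```

On a smooth convex toy objective such as f(v) = (v0 − 1)^2 + (v1 + 2)^2, the iterates converge to the minimizer (1, −2); for the non-convex objectives in the paper, this is where the multiple initial guesses and certificate of correctness come in.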


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

29,323 citations
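The spatial-decomposition idea behind the third algorithm can be sketched serially with a cell list: bin atoms into cells of side at least the cutoff, so each atom interacts only with atoms in its own and neighboring cells, which is what keeps inter-processor communication local. The box size, atom placement, and reduced Lennard-Jones units below are illustrative, not the paper's benchmark; an O(N^2) reference sum is included to check the binned result:

```python
import numpy as np

def lj_pair(d, box):
    # minimum-image Lennard-Jones energy of one pair (reduced units)
    d = d - box * np.round(d / box)
    r2 = np.dot(d, d)
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 * inv6 - inv6), r2

def lj_energy_bruteforce(pos, box, rc):
    # O(N^2) reference: every pair, truncated at cutoff rc
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            eij, r2 = lj_pair(pos[i] - pos[j], box)
            if r2 < rc * rc:
                e += eij
    return e

def lj_energy_cells(pos, box, rc):
    # spatial decomposition: cells of side >= rc, so only the 27 surrounding
    # cells of each cell need to be searched for interacting pairs
    ncell = max(1, int(box // rc))
    side = box / ncell
    cells = {}
    for idx, p in enumerate(pos):
        key = tuple((p // side).astype(int) % ncell)
        cells.setdefault(key, []).append(idx)
    e, seen = 0.0, set()
    for key, members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nkey = tuple((k + o) % ncell for k, o in zip(key, (dx, dy, dz)))
                    for i in members:
                        for j in cells.get(nkey, ()):
                            if i < j and (i, j) not in seen:
                                seen.add((i, j))
                                eij, r2 = lj_pair(pos[i] - pos[j], box)
                                if r2 < rc * rc:
                                    e += eij
    return e
```

In the parallel version each cell block lives on one processor and only the cell layer at the block surface is exchanged with neighbors, which is why the spatial algorithm scales best for large N.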

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
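A minimal numpy sketch of ADMM applied to the lasso, one of the problems this review surveys: split (1/2)||Ax − b||^2 + λ||x||_1 as f(x) + g(z) with the constraint x = z, alternate a ridge-type x-update, a soft-thresholding z-update (the proximal map of the l1 norm), and a dual ascent step. The penalty ρ, iteration count, and explicit matrix inverse are illustrative simplifications; in practice one would cache a factorization instead:

```python
import numpy as np

def soft_threshold(v, k):
    # proximal operator of k * ||.||_1, applied elementwise
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=500):
    # ADMM for (1/2)||Ax - b||^2 + lam*||x||_1, with splitting x = z
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # illustrative; cache a factorization in practice
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))          # x-update: ridge-regression solve
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of the l1 term
        u = u + x - z                          # scaled dual update
    return z
```

With A = I the lasso solution is the soft-thresholded data, so the iterates can be checked against soft_threshold(b, lam) directly; the same three-step loop covers the other splittings the review discusses by swapping in different proximal maps.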