scispace - formally typeset
Author

Stanley Osher

Bio: Stanley Osher is an academic researcher at the University of California, Los Angeles. He has contributed to research on topics including the level set method and hyperbolic partial differential equations. He has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. His previous affiliations include the University of Minnesota and the University of Innsbruck.


Papers
Proceedings ArticleDOI
07 Jun 1982
TL;DR: In this article, an upwind finite difference procedure that is derived by combining the salient features of the theory of conservation laws and the mathematical theory of characteristics for hyperbolic systems of equations is presented.
Abstract: The Osher algorithm for solving the Euler equations is an upwind finite difference procedure that is derived by combining the salient features of the theory of conservation laws and the mathematical theory of characteristics for hyperbolic systems of equations. A first-order accurate version of the numerical method was derived by Osher circa 1980 for the one-dimensional non-isentropic Euler equations in Cartesian coordinates. In this paper, the extension of the scheme to arbitrary two-dimensional geometries is explained. Results are then presented for several example problems in one and two dimensions. Future work will include extension of the method to second-order accuracy and the development of implicit time differencing for the Osher algorithm.
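The Osher flux itself is built from a Riemann-solver-based splitting over the characteristic fields of the Euler system; as a minimal illustration of the underlying upwind finite-difference idea only (not the Osher scheme), consider first-order upwinding for scalar advection:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for the scalar conservation law
    u_t + a*u_x = 0 with constant speed a and periodic boundaries.
    The difference is taken from the side the characteristics come from."""
    c = a * dt / dx  # CFL number; stability requires |c| <= 1
    if a >= 0:
        return u - c * (u - np.roll(u, 1))   # backward difference
    return u - c * (np.roll(u, -1) - u)      # forward difference

# Advect a step profile to the right at CFL 0.8.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)
for _ in range(100):
    u = upwind_step(u, a=1.0, dx=x[1] - x[0], dt=0.004)
```

The scheme is a convex combination of neighboring values for 0 <= CFL <= 1, so it is monotone: it conserves the total and introduces no new extrema, at the cost of first-order numerical diffusion.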

23 citations

Patent
16 Apr 1993
TL;DR: In this paper, the authors proposed a method and apparatus for enhancing signals such as images, speech, remotely sensed data, and medical, tactile, radar, and audio signals, which proceeds by constructing discrete approximations to certain nonlinear time-dependent partial differential equations.
Abstract: A method and apparatus for enhancing signals such as images, speech, remotely sensed data, and medical, tactile, radar, and audio signals. It proceeds by the construction of certain discrete approximations to certain nonlinear time-dependent partial differential equations. These approximations preserve the variation of the discrete solution as discrete time increases for one space dimension. The approximate solutions satisfy certain maximum principles in one and two dimensions. Thus, the method enhances images and other general signals without being plagued by the phenomenon of ringing near edges and other features, or by smearing of these edges and other features, typical of the prior art. As discrete time increases the signal is enhanced. The process may reach steady state, but for some applications, the dynamical procedure is important. The method is fast, requiring only local operations on a special-purpose computer described herein.
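The abstract does not name the PDEs, but a well-known nonlinear enhancement equation due to Osher and Rudin with exactly these properties (variation-preserving, maximum principle, edge sharpening without ringing) is the shock filter u_t + sign(u_xx)|u_x| = 0. A minimal 1-D sketch, not necessarily the patent's discretization:

```python
import numpy as np

def shock_filter_step(u, dt):
    """One explicit upwind step of the 1-D shock filter
    u_t + sign(u_xx)*|u_x| = 0 on a unit-spaced periodic grid.
    Stability (and the maximum principle) requires dt <= 1 here."""
    fwd = np.roll(u, -1) - u          # forward difference
    bwd = u - np.roll(u, 1)           # backward difference
    c = np.sign(fwd - bwd)            # sign of the second difference
    # Upwind |u_x| according to the sign of the propagation speed c.
    grad_plus = np.sqrt(np.maximum(bwd, 0.0)**2 + np.minimum(fwd, 0.0)**2)
    grad_minus = np.sqrt(np.minimum(bwd, 0.0)**2 + np.maximum(fwd, 0.0)**2)
    return u - dt * np.where(c > 0, c * grad_plus, c * grad_minus)

# Sharpen a blurred step; the maximum principle forbids overshoot (no ringing).
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.tanh(10.0 * (x - 0.5))
for _ in range(50):
    u = shock_filter_step(u, dt=0.5)
```

Each step steepens the profile toward a piecewise-constant signal while the solution stays within its initial bounds, which is the "enhancement without ringing or smearing" claimed above.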

23 citations

Journal ArticleDOI
25 Apr 2016-ACS Nano
TL;DR: It is found that amide-based hydrogen bonds cross molecular domain boundaries and areas of local disorder in buried hydrogen-bonding networks within self-assembled monolayers of 3-mercapto-N-nonylpropionamide.
Abstract: We map buried hydrogen-bonding networks within self-assembled monolayers of 3-mercapto-N-nonylpropionamide on Au{111}. The contributing interactions include the buried S-Au bonds at the substrate surface and the buried plane of linear networks of hydrogen bonds. Both are simultaneously mapped with submolecular resolution, in addition to the exposed interface, to determine the orientations of molecular segments and directional bonding. Two-dimensional mode-decomposition techniques are used to elucidate the directionality of these networks. We find that amide-based hydrogen bonds cross molecular domain boundaries and areas of local disorder.

23 citations

Journal ArticleDOI
TL;DR: In this article, the authors used atomic electron tomography to experimentally determine the three-dimensional atomic positions of monatomic amorphous solids, namely a Ta thin film and two Pd nanoparticles.
Abstract: Liquids and solids are two fundamental states of matter. However, our understanding of their three-dimensional atomic structure is mostly based on physical models. Here we use atomic electron tomography to experimentally determine the three-dimensional atomic positions of monatomic amorphous solids, namely a Ta thin film and two Pd nanoparticles. We observe that pentagonal bipyramids are the most abundant atomic motifs in these amorphous materials. Instead of forming icosahedra, the majority of pentagonal bipyramids arrange into pentagonal bipyramid networks with medium-range order. Molecular dynamics simulations further reveal that pentagonal bipyramid networks are prevalent in monatomic metallic liquids, which rapidly grow in size and form more icosahedra during the quench from the liquid to the glass state. These results expand our understanding of the atomic structures of amorphous solids and will encourage future studies on amorphous–crystalline phase and glass transitions in non-crystalline materials with three-dimensional atomic resolution.

23 citations

Journal ArticleDOI
TL;DR: Numerical experiments show that the proposed method is very competitive and outperforms state-of-the-art denoising methods such as BM3D; convergence of the algorithm is shown mathematically when the proposed model is convex.
Abstract: We propose a denoising method by integrating group sparsity and TV regularization based on self-similarity of the image blocks. By using the block matching technique, we introduce some local SVD operators to get a good sparsity representation for the groups of the image blocks. The sparsity regularization and TV are unified in a variational problem and each of the subproblems can be efficiently optimized by splitting schemes. The proposed algorithm mainly contains the following four steps: block matching, basis vector updating, sparsity regularization and TV smoothing. The self-similarity information of the image is assembled by the block matching step. By concatenating all columns of the similar image blocks together, we get redundancy matrices whose column vectors are highly correlated and should have sparse coefficients after a proper transformation. In contrast with many transformation-based denoising methods such as BM3D with fixed basis vectors, we update local basis vectors derived from the SVD to enforce the sparsity representation. This step is equivalent to a dictionary learning procedure. With the sparsity regularization step, one can remove the noise efficiently and keep the texture well. The TV regularization step helps reduce the artifacts caused by stacking the image blocks. Besides, we mathematically show the convergence of the algorithms when the proposed model is convex (with $p=1$) and the bases are fixed. This implies that the iteration adopted in BM3D converges, which had not been shown mathematically for the BM3D method. Numerical experiments show that the proposed method is very competitive and outperforms state-of-the-art denoising methods such as BM3D.
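The core sparsity-regularization step can be sketched under stated assumptions: similar patches are stacked as columns, a local basis is taken from the group's own SVD, and coefficients are soft-thresholded in that basis. The threshold `tau` and the plain soft-thresholding rule here are illustrative choices, not the paper's exact model:

```python
import numpy as np

def svd_shrink_group(G, tau):
    """Denoise a group matrix G (one similar patch per column) by
    soft-thresholding its coefficients in the group's local SVD basis.
    With tau = 0 the group is reproduced exactly; larger tau enforces
    a sparser representation in the learned basis."""
    U, _, _ = np.linalg.svd(G, full_matrices=False)   # local basis update
    coef = U.T @ G                                    # transform coefficients
    coef = np.sign(coef) * np.maximum(np.abs(coef) - tau, 0.0)  # shrink
    return U @ coef                                   # back to patch space
```

Because the columns of a well-matched group are highly correlated, most energy concentrates in a few coefficients, so shrinkage removes noise while preserving the shared texture; the TV step of the full algorithm would then smooth block-stacking artifacts.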

22 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality, whose sense of the surreal only intensifies with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, the Intel iPSC/860 and Paragon, and the Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
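The key geometric fact behind the spatial-decomposition algorithm can be sketched in a few lines: bin atoms into cells no smaller than the force cutoff, so that any interacting pair lies in the same or an adjacent cell and a processor owning a region only needs data from neighboring regions. The bookkeeping below is illustrative; the paper's algorithm adds message-passing of boundary atoms between processors:

```python
import numpy as np

def bin_atoms(pos, box, rcut):
    """Assign atoms to cubic cells of side >= rcut in a periodic box.
    Any pair within the cutoff then falls in the same or adjacent cells,
    which is what makes a per-region (spatial) decomposition local."""
    ncell = max(1, int(box / rcut))   # cells per dimension
    side = box / ncell                # cell side, guaranteed >= rcut
    idx = np.floor(pos / side).astype(int) % ncell
    cells = {}
    for atom, ijk in enumerate(map(tuple, idx)):
        cells.setdefault(ijk, []).append(atom)
    return cells, side

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(500, 3))   # 500 atoms in a 10x10x10 box
cells, side = bin_atoms(pos, box=10.0, rcut=2.5)
```

Because short-range neighbors change rapidly, this binning is cheap to rebuild each step, whereas the atom- and force-decomposition variants trade that locality for simpler load balancing.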

29,323 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
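As a concrete instance of the method surveyed, here is a minimal scaled-form ADMM for one of the applications the review discusses, the lasso. The fixed penalty `rho` and fixed iteration count are simplifications of this sketch, not prescriptions from the review:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Scaled-form ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x-update: ridge-type solve; z-update: soft threshold;
    u: running (scaled) dual variable."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once: every x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# Sanity check: for A = I the lasso solution is soft-thresholding of b.
b = np.array([3.0, -0.5, 0.0, 2.0])
x_hat = admm_lasso(np.eye(4), b, lam=1.0)
```

The splitting is what makes the method distributable: the quadratic x-update and the separable z-update can each be farmed out across blocks of data or features, with only the consensus variables exchanged between nodes.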

17,433 citations