Author

Stanley Osher

Bio: Stanley Osher is an academic researcher from the University of California, Los Angeles. He has contributed to research topics including the level set method and hyperbolic partial differential equations. He has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. Previous affiliations of Stanley Osher include the University of Minnesota and the University of Innsbruck.


Papers
Posted Content
TL;DR: This paper proposes a hybrid gradient descent for the tomography problem that combines the Fourier slice theorem with calculus of variations, and shows that the resulting state-of-the-art method, RESIRE, produces superior results to previous methods: the reconstructed objects have higher quality and smaller relative errors.
Abstract: Tomography has made a revolutionary impact on diverse fields, ranging from macro-/mesoscopic-scale studies in biology, radiology, and plasma physics to the characterization of 3D atomic structure in materials science. The fundamental task of tomography is to reconstruct a 3D object from a set of 2D projections. Many algorithms have been developed to solve the tomography problem. Among them are methods using transformation techniques, such as computed tomography (CT) based on the Radon transform and Generalized Fourier iterative reconstruction (GENFIRE) based on the Fourier slice theorem (FST), and direct methods, such as the Simultaneous Iterative Reconstruction Technique (SIRT) and the Simultaneous Algebraic Reconstruction Technique (SART), which use gradient descent and algebraic techniques. In this paper, we propose a hybrid gradient descent to solve the tomography problem by combining the Fourier slice theorem and calculus of variations. Using simulated and experimental data, we show that the state-of-the-art RESIRE produces superior results to previous methods: the reconstructed objects have higher quality and smaller relative errors. More importantly, RESIRE can rigorously handle partially blocked projections, where only part of the projection information is available, while other methods fail. We anticipate that RESIRE will not only improve reconstruction quality in all existing tomographic applications but also extend tomography to a broad class of functional thin films. We expect RESIRE to find broad applications across diverse disciplines.
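The core idea of iteratively reducing the projection mismatch can be illustrated with a short, hedged sketch. The code below uses plain gradient descent with skimage's Radon transform (a SIRT-like baseline, not RESIRE's FST-based gradient); the phantom, step size, and iteration count are illustrative choices.

```python
# SIRT-like gradient-descent reconstruction: NOT the RESIRE algorithm
# itself (which combines the Fourier slice theorem with a variational
# gradient step), only an illustration of "minimize the projection
# mismatch by gradient descent".
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=theta)            # simulated 2D projections

recon = np.zeros_like(phantom)
step = 0.01                                        # hand-tuned step size
for _ in range(50):
    residual = radon(recon, theta=theta) - sinogram
    # Unfiltered back-projection plays the role of the adjoint of the
    # projection operator, so this is a gradient step on
    # 0.5 * ||R(recon) - sinogram||^2 up to a constant scaling.
    recon -= step * iradon(residual, theta=theta, filter_name=None)

print("relative error:",
      np.linalg.norm(recon - phantom) / np.linalg.norm(phantom))
```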

3 citations

Proceedings ArticleDOI
01 Mar 2017
TL;DR: Improved classification accuracy is demonstrated over data-mining techniques like k-means, unmixing techniques like Hierarchical Non-Negative Matrix Factorization, and graph-based methods like Non-Local Total Variation.
Abstract: We propose a semi-supervised algorithm for processing and classification of hyperspectral imagery. For initialization, we keep 20% of the data intact and use Principal Component Analysis to discard voxels from noisier bands and pixels. Then we use either an Accelerated Proximal Gradient algorithm (APGL) or a modified APGL algorithm with a penalty term for the distance between inpainted pixels and endmembers (APGL Hyp) on the initialized datacube to inpaint the missing data. APGL and APGL Hyp are distinguished by their performance on datasets with full pixels removed or extreme noise. This inpainting technique results in band-by-band datacube sharpening and removal of noise from individual spectral signatures. We can also classify the inpainted cube by assigning each pixel to its nearest endmember via Euclidean distance. We demonstrate improved accuracy in classification over data-mining techniques like k-means, unmixing techniques like Hierarchical Non-Negative Matrix Factorization, and graph-based methods like Non-Local Total Variation.
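The final classification step described above is simple enough to show directly: assign each pixel to the nearest endmember in Euclidean distance. A minimal sketch, with synthetic spectra standing in for a real inpainted datacube and endmember set:

```python
# Nearest-endmember classification: each (inpainted) pixel is assigned
# the label of its closest endmember in Euclidean distance. The spectra
# here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
bands, n_pixels, n_endmembers = 100, 5000, 5
endmembers = rng.random((n_endmembers, bands))        # assumed known spectra
pixels = endmembers[rng.integers(0, n_endmembers, n_pixels)] \
         + 0.05 * rng.standard_normal((n_pixels, bands))

# Squared Euclidean distances: pixels (N, B) vs endmembers (K, B) -> (N, K)
d2 = ((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1)                            # class index per pixel
print(labels[:10])
```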

2 citations

Journal ArticleDOI
TL;DR: In this article, the authors give a formula for accurately approximating proximal operators using only (possibly noisy) objective function samples, where the objective functions do not admit explicit formulas for their proximal operators.
Abstract: Significance Many objective functions do not admit explicit formulas for their proximal operators. Moreover, these operators often cannot be estimated using exact gradients (e.g., when objectives are accessible via an oracle). In this work, we give a formula for accurately approximating proximal operators using only (possibly noisy) objective function samples.
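A rough sketch of the sampling flavor of such a formula: estimate the proximal point as a softmax-weighted average of Gaussian samples around x, using only (noisy) evaluations of the objective. The exact weighting and scalings here are assumptions for illustration, not a verbatim transcription of the paper's formula.

```python
# Sampling-based proximal approximation sketch: softmax-weighted average
# of Gaussian samples around x, using only objective values f(y). The
# precise scalings (sample variance t*delta, weights exp(-f/delta)) are
# assumptions for illustration.
import numpy as np

def approx_prox(f, x, t=1.0, delta=0.05, n_samples=20000, seed=0):
    """Estimate prox_{t f}(x) = argmin_y f(y) + ||y - x||^2 / (2 t)."""
    rng = np.random.default_rng(seed)
    y = x + np.sqrt(t * delta) * rng.standard_normal((n_samples, x.size))
    logits = -np.array([f(yi) for yi in y]) / delta
    w = np.exp(logits - logits.max())   # numerically stable softmax
    w /= w.sum()
    return w @ y                        # weighted average of the samples

# Sanity check against the exact prox of f(y) = ||y||_1 (soft-thresholding).
x = np.array([1.5, -0.2, 0.7])
estimate = approx_prox(lambda y: np.abs(y).sum(), x)
exact = np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)
print(estimate, exact)
```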

2 citations

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the rectification properties of tapered-channel thermal diodes relying on asymmetric heat flow brought about by thermal conductivity differences between the liquid and solid phases of suitably selected phase-change materials (PCM).
Abstract: Designing thermal diodes has recently attracted considerable interest due to the wide range of applications and potentially high impact in the transportation and energy industries. Advances in nanoscale synthesis and characterization are opening new avenues for design using atomic-level tools to take advantage of materials properties in confined volumes. In this paper, we use advanced modeling and simulation to demonstrate the rectification properties of tapered-channel thermal diodes relying on asymmetric heat flow brought about by thermal conductivity differences between the liquid and solid phases of suitably selected phase-change materials (PCM). Our prototypical design considers Ga as the PCM and anodized alumina as the structural material. First, we use a thresholding scheme to solve a Stefan problem in the device channel to study the interface shape and the hysteresis of the phase transformation when the temperature gradient is switched. We then carry out finite-element simulations to study the effect of several geometric parameters, such as channel length and aspect ratio, on diode efficiency. Our analysis establishes physical limits on rectification efficiencies and points to design improvements using several materials to assess the potential of these devices as viable thermal diodes. Finally, we demonstrate the viability of proof-of-concept device fabrication by using a non-conformal atomic layer deposition process in anodic alumina membranes infiltrated with Ga metal.
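The paper's thresholding scheme is specialized to the tapered-channel geometry, but the phase-change heat conduction it solves can be sketched in one dimension with a standard enthalpy method. The sketch below is a different (textbook) discretization than the paper's, with dimensionless placeholder parameters:

```python
# 1D enthalpy-method sketch of a melting (Stefan) problem with different
# solid/liquid conductivities. This is a textbook discretization, not the
# paper's thresholding scheme, and all parameters are dimensionless
# placeholders.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
k_solid, k_liquid = 1.0, 0.5                # assumed phase conductivities
latent, T_melt = 1.0, 0.0                   # latent heat, melting temperature
dt = 0.2 * dx**2 / max(k_solid, k_liquid)   # explicit stability limit

H = np.full(nx, -0.5)                       # enthalpy: start fully solid, T = -0.5

def temperature(H):
    # Enthalpy-temperature relation with unit heat capacity: solid below
    # H = 0, mushy (at T_melt) up to H = latent, liquid above.
    return np.where(H < 0, H, np.where(H > latent, H - latent, T_melt))

for _ in range(20000):
    T = temperature(H)
    k = np.where(H > 0.5 * latent, k_liquid, k_solid)
    Tg = np.concatenate(([1.0], T, [-0.5]))   # hot left wall, cold right wall
    kg = np.concatenate(([k[0]], k, [k[-1]]))
    k_face = 0.5 * (kg[1:] + kg[:-1])         # face-averaged conductivity
    flux = -k_face * np.diff(Tg) / dx         # Fourier's law at cell faces
    H -= dt * np.diff(flux) / dx              # energy balance per cell

melt_front = np.argmax(H < 0.5 * latent)      # first still-solid cell
print("melt front near x =", melt_front * dx)
```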

2 citations

Book ChapterDOI
01 Jan 2003
TL;DR: In this chapter, the interface evolution equation is written in its Lagrangian formulation: given the velocity $\overrightarrow{V}(\overrightarrow{x})$ of each point on the implicit surface, all the points on the surface are moved with that velocity.
Abstract: Suppose that the velocity of each point on the implicit surface is given as $\overrightarrow{V}(\overrightarrow{x})$; i.e., assume that $\overrightarrow{V}(\overrightarrow{x})$ is known for every point $\overrightarrow{x}$ with $\varphi(\overrightarrow{x}) = 0$. Given this velocity field $\overrightarrow{V} = (u, v, w)$, we wish to move all the points on the surface with this velocity. The simplest way to do this is to solve the ordinary differential equation (ODE) $$ \frac{d\overrightarrow{x}}{dt} = \overrightarrow{V}\left(\overrightarrow{x}\right) $$ for every point $\overrightarrow{x}$ on the front, i.e., for all $\overrightarrow{x}$ with $\varphi(\overrightarrow{x}) = 0$. This is the Lagrangian formulation of the interface evolution equation. Since there are generally an infinite number of points on the front (except, of course, in one spatial dimension), this means discretizing the front into a finite number of pieces. For example, one could use segments in two spatial dimensions or triangles in three spatial dimensions and move the endpoints of these segments or triangles. This is not so hard to accomplish if the connectivity does not change and the surface elements are not distorted too much. Unfortunately, even the most trivial velocity fields can cause large distortion of boundary elements (segments or triangles), and the accuracy of the method can deteriorate quickly if one does not periodically modify the discretization in order to account for these deformations by smoothing and regularizing inaccurate surface elements.
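A minimal sketch of this Lagrangian front tracking: discretize the front into marker points and integrate dx/dt = V(x) with forward Euler. The rigid-rotation velocity field and step sizes are placeholder choices.

```python
# Lagrangian front tracking as described: discretize the front into
# marker points and integrate dx/dt = V(x) with forward Euler.
import numpy as np

def V(x):
    # Velocity at each marker: rigid rotation about the origin (placeholder).
    return np.stack([-x[:, 1], x[:, 0]], axis=1)

# Markers on the initial front, here the circle phi(x) = |x| - 1 = 0.
s = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
markers = np.stack([np.cos(s), np.sin(s)], axis=1)

dt = 0.01
for _ in range(628):                     # roughly one full rotation
    markers = markers + dt * V(markers)  # forward Euler step

# Even this simple field slowly distorts the front (forward Euler spirals
# outward), which is why the text stresses periodic redistribution and
# regularization of the surface elements.
radii = np.linalg.norm(markers, axis=1)
print(radii.min(), radii.max())
```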

2 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, the quality of which is assessed in the context of classification and detection.
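A hedged sketch of one Inception-style block: parallel 1x1, 3x3, and 5x5 convolutions plus pooling, concatenated along the channel axis, with 1x1 convolutions reducing channels ahead of the costly branches. Channel counts below are illustrative, not GoogLeNet's actual configuration.

```python
# One Inception-style block (illustrative channel counts, not GoogLeNet's).
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),          # channel reduction
            nn.Conv2d(16, 32, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, 4, kernel_size=1),           # channel reduction
            nn.Conv2d(4, 8, kernel_size=5, padding=2))
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1))

    def forward(self, x):
        # Multi-scale processing: every branch sees the same input and
        # their outputs are stacked channel-wise.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

x = torch.randn(1, 64, 28, 28)
print(InceptionBlock(64)(x).shape)   # (1, 64, 28, 28): 16 + 32 + 8 + 8 channels
```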

40,257 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
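The first of the three strategies (atom decomposition) is easy to sketch without real message passing: each rank owns a fixed subset of atoms and computes only the forces on those atoms. The serial loop below stands in for the MPI ranks, and the Lennard-Jones system is a toy placeholder.

```python
# Atom-decomposition sketch: each "rank" owns a fixed subset of atoms and
# computes only the forces on those atoms. A serial loop stands in for the
# message-passing processors; the Lennard-Jones system (eps = sigma = 1,
# no cutoff) is a toy placeholder.
import numpy as np

rng = np.random.default_rng(1)
n_atoms, n_ranks, box = 400, 4, 10.0
pos = rng.random((n_atoms, 3)) * box

def lj_forces_on(owned, pos):
    """Lennard-Jones forces on the owned atoms from all other atoms."""
    forces = np.zeros((len(owned), 3))
    for i, a in enumerate(owned):
        d = pos[a] - np.delete(pos, a, axis=0)   # vectors to all other atoms
        d -= box * np.round(d / box)             # minimum-image convention
        r2 = (d * d).sum(axis=1)
        inv6 = r2 ** -3
        coef = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2
        forces[i] = (coef[:, None] * d).sum(axis=0)
    return forces

# Fixed partition of atoms over ranks; each rank's result would be
# gathered by message passing in a real distributed-memory run.
chunks = np.array_split(np.arange(n_atoms), n_ranks)
forces = np.vstack([lj_forces_on(owned, pos) for owned in chunks])
print(forces.shape)                              # (400, 3)
```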

29,323 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
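A minimal sketch of ADMM on one of the listed applications, the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z, alternating a ridge solve, a soft-threshold, and a dual update. Problem data below are random placeholders.

```python
# ADMM for the lasso:
#   minimize 0.5*||Ax - b||^2 + lam*||z||_1   subject to  x = z.
# Standard textbook updates; the problem data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho = 50, 100, 0.1, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
AtA_rhoI = A.T @ A + rho * np.eye(n)   # formed once, reused every iteration
Atb = A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))        # x-update: ridge solve
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # z-update: soft-threshold
    u = u + x - z                                             # scaled dual update
print("nonzeros in z:", np.count_nonzero(z))
```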

17,433 citations