Author

Stanley Osher

Bio: Stanley Osher is an academic researcher at the University of California, Los Angeles. He has contributed to research on topics including the level set method and hyperbolic partial differential equations. He has an h-index of 114 and has co-authored 510 publications receiving 104,028 citations. Previous affiliations of Stanley Osher include the University of Minnesota and the University of Innsbruck.


Papers
Journal ArticleDOI
TL;DR: In this article, a graph-based nonlocal total variation method is proposed for unsupervised classification of hyperspectral images (HSI), where the variational problem is solved by the primal-dual hybrid gradient algorithm.
Abstract: In this paper, a graph-based nonlocal total variation method is proposed for unsupervised classification of hyperspectral images (HSI). The variational problem is solved by the primal-dual hybrid gradient algorithm. By squaring the labeling function and using a stable simplex clustering routine, an unsupervised clustering method with random initialization can be implemented. The effectiveness of this proposed algorithm is illustrated on both synthetic and real-world HSI, and numerical results show that the proposed algorithm outperforms other standard unsupervised clustering methods, such as spherical K-means, nonnegative matrix factorization, and the graph-based Merriman–Bence–Osher scheme.
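The primal-dual hybrid gradient iteration the abstract refers to alternates a projected dual step with a proximal primal step. A minimal sketch on the simplest total variation problem, 1-D denoising with a quadratic fidelity term (an illustrative setup, not the paper's nonlocal graph functional; `lam` and the step sizes are assumed values):

```python
import numpy as np

def tv_denoise_pdhg(f, lam=8.0, n_iter=500):
    """Primal-dual hybrid gradient for 1-D TV denoising:
    min_u ||D u||_1 + (lam/2) ||u - f||^2, with D the forward difference."""
    n = len(f)
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(n - 1)        # dual variable, one entry per difference
    tau = sigma = 0.25         # steps with tau*sigma*||D||^2 < 1 (||D||^2 <= 4)
    for _ in range(n_iter):
        # dual ascent, then projection onto the l-infinity unit ball
        p = np.clip(p + sigma * np.diff(u_bar), -1.0, 1.0)
        u_old = u.copy()
        # primal descent: apply D^T p, then the proximal map of the fidelity term
        div_p = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
        u = (u - tau * div_p + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old  # over-relaxation step
    return u
```

The step sizes satisfy the standard PDHG convergence condition τσ‖D‖² < 1.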

38 citations

Journal ArticleDOI
TL;DR: An algorithm is introduced that successively adds new measurements at specially chosen locations: by comparing the solutions of the inverse problem obtained from different numbers of measurements, it decides where to measure next in order to improve the reconstruction of the sparse initial data.
Abstract: We consider the inverse problem of finding sparse initial data from the sparsely sampled solutions of the heat equation. The initial data are assumed to be a sum of an unknown but finite number of Dirac delta functions at unknown locations. Point-wise values of the heat solution at only a few locations are used in an $l_1$ constrained optimization to find the initial data. A concept of domain of effective sensing is introduced to speed up the already fast Bregman iterative algorithm for $l_1$ optimization. Furthermore, an algorithm which successively adds new measurements at specially chosen locations is introduced. By comparing the solutions of the inverse problem obtained from different numbers of measurements, the algorithm decides where to add new measurements in order to improve the reconstruction of the sparse initial data.
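The Bregman iterative algorithm the abstract mentions belongs to a family whose simplest member, the linearized Bregman iteration, fits in a few lines. A sketch on a generic compressed-sensing instance (the Gaussian matrix, `mu`, and the step size are illustrative assumptions; the paper's heat-kernel forward map and its acceleration via the domain of effective sensing are omitted):

```python
import numpy as np

def linearized_bregman(A, b, mu, n_iter=8000):
    """Linearized Bregman iteration for min ||x||_1 subject to Ax = b:
        v <- v + A^T (b - A x)        (gradient step on the residual)
        x <- delta * shrink(v, mu)    (componentwise soft thresholding)
    """
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # illustrative step-size choice
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (b - A @ x)
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
    return x
```

With enough measurements relative to the number of spikes, the iteration recovers the sparse vector; `mu` trades off how closely the limit matches the pure $l_1$ minimizer against iteration count.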

37 citations


Book ChapterDOI
01 Jan 2003
TL;DR: In this paper, the authors define signed distance functions to be positive on the exterior, negative on the interior, and zero on the boundary, with the extra condition |∇φ(x)| = 1 imposed on a signed distance function.
Abstract: In the last chapter we defined implicit functions with φ(x) ≤ 0 in the interior region Ω−, φ(x) > 0 in the exterior region Ω+, and φ(x) = 0 on the boundary ∂Ω. Little was said about φ otherwise, except that smoothness is a desirable property, especially in sampling the function or using numerical approximations. In this chapter we discuss signed distance functions, which are a subset of the implicit functions defined in the last chapter. We define signed distance functions to be positive on the exterior, negative on the interior, and zero on the boundary. An extra condition of |∇φ(x)| = 1 is imposed on a signed distance function.

37 citations

Journal ArticleDOI
TL;DR: The goal of PTA in tissue engineering is not to fabricate the final transplantable tissue but rather to guide the dynamic organization, maturation, and remodeling leading to the formation of normal and functional tissues.
Abstract: Natural tissues are composed of functionally diverse cell types that are organized in spatially complex arrangements. Organogenesis of complex tissues requires a coordinated sequential transformation process, with individual stages involving time-dependent expression of cell-cell, cell-matrix, and cell-signal interactions in three dimensions. The common theme of temporal-spatial patterning of these cellular interactions is also observed in other physiological processes, such as growth and development, wound healing, and tumor migration. The "precursor tissue analog" (PTA) applies the temporal-spatial patterning theme to tissue engineering. The goal of PTA in tissue engineering is not to fabricate the final transplantable tissue but rather to guide the dynamic organization, maturation, and remodeling leading to the formation of normal and functional tissues. We describe the critical design principles of PTA. First, structural, mechanical, and physiological requirements of the PTA as a temporary scaffold must be met by a fabrication method with flexibility. The fabrication potential incorporating biological materials such as living cells and plasmid DNA has been addressed. Second, the PTA concept is considered suitable for future tissue engineering in light of the use of undifferentiated stem cells, and may possess a capability to guide stem cells toward diverse differentiation characteristics in situ. To this end, the behavior of the engineered cell and tissue must be monitored in detail. The development of a practical phenotype monitoring system such as a DNA microarray may be integral to the fabrication strategies of PTA. Third, the microtopographical and microenvironmental control on the liquid-solid interaction may lead to a critical design for PTA to provide soluble factors, nutrients, and gases to the cells embedded within the scaffold. 
We suggest that the level set numerical simulation method may be utilized to engineer the consistent circulation of bioactive liquid throughout the PTA microenvironment.

37 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a network 22 layers deep, whose quality is assessed in the context of classification and detection.

40,257 citations

Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently—those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90%, and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
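The spatial-decomposition algorithm works because an atom only interacts within a cutoff, so each processor's region needs data only from adjacent regions. The serial core of that idea is cell-list binning with cell width at least the cutoff; a sketch under periodic boundary conditions (the function name and parameters are illustrative, not the paper's message-passing implementation):

```python
import numpy as np
from collections import defaultdict

def cell_list_pairs(pos, box, rcut):
    """All pairs within rcut in a cubic periodic box, found by binning atoms
    into cells of width >= rcut and scanning only the 27 neighboring cells."""
    ncell = max(1, int(box // rcut))
    size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // size).astype(int) % ncell)].append(i)
    pairs = set()
    for (cx, cy, cz), atoms in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for i in atoms:
                        for j in cells.get(nb, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)  # minimum image
                                if np.dot(d, d) < rcut**2:
                                    pairs.add((i, j))
    return pairs
```

For N atoms this costs O(N) with short-range forces, versus O(N^2) for the all-pairs scan, which is why each spatial region can rebuild its neighbor lists cheaply as atoms move.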

29,323 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
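The review's canonical worked example is the lasso, split as min (1/2)‖Ax − b‖² + λ‖z‖₁ subject to x = z. A minimal sketch of the resulting ADMM loop (`rho`, `lam`, and the iteration count are illustrative choices, not prescribed values):

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=500):
    """ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                                      # scaled dual variable
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))    # cache the x-update factor
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: ridge-type linear solve, reusing the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding (shrinkage)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                                    # dual update
    return z
```

The structure is typical of the method: one subproblem is a cheap linear solve whose factorization can be cached across iterations, the other is a closed-form proximal step, and only the dual update couples them, which is what makes the distributed variants discussed in the review possible.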

17,433 citations