
Showing papers on "Real image published in 2000"


Journal ArticleDOI
TL;DR: A new robust estimator, MLESAC, is presented as a generalization of RANSAC: it adopts the same sampling strategy as RANSAC to generate putative solutions, but chooses the solution that maximizes the likelihood rather than just the number of inliers.

2,267 citations
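The MLESAC scoring rule above (evaluate each sampled hypothesis by the likelihood of its residuals under a Gaussian-inlier / uniform-outlier mixture, rather than by an inlier count) can be sketched on a toy line-fitting problem. The data, noise levels, and the fixed mixing weight below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D points on the line y = 2x + 1, plus gross outliers.
n_in, n_out = 80, 40
x_in = rng.uniform(-1, 1, n_in)
inliers = np.c_[x_in, 2 * x_in + 1 + rng.normal(0, 0.02, n_in)]
outliers = rng.uniform(-3, 3, (n_out, 2))
pts = np.vstack([inliers, outliers])

sigma, v = 0.05, 6.0   # assumed inlier noise std, outlier residual range

def neg_log_likelihood(residuals, gamma=0.5):
    # Mixture of a Gaussian (inliers) and a uniform density (outliers);
    # gamma is a fixed mixing weight here (MLESAC estimates it by EM).
    p_in = gamma * np.exp(-residuals**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    p_out = (1 - gamma) / v
    return -np.sum(np.log(p_in + p_out))

best_cost, best_line = np.inf, None
for _ in range(200):
    # Minimal sample: two points define a candidate line, as in RANSAC.
    i, j = rng.choice(len(pts), 2, replace=False)
    (x1, y1), (x2, y2) = pts[i], pts[j]
    if abs(x2 - x1) < 1e-9:
        continue
    a = (y2 - y1) / (x2 - x1)          # slope
    b = y1 - a * x1                    # intercept
    # Perpendicular distances of all points to the candidate line.
    r = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) / np.sqrt(1 + a * a)
    cost = neg_log_likelihood(r)       # score by likelihood, not inlier count
    if cost < best_cost:
        best_cost, best_line = cost, (a, b)

a_hat, b_hat = best_line
```

The only change from plain RANSAC is the scoring line: hypotheses are ranked by the mixture negative log-likelihood of all residuals instead of a thresholded inlier count.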


Journal ArticleDOI
TL;DR: A calibration procedure for precise 3D computer vision applications is described; it introduces bias correction for circular control points and a nonrecursive method for reversing the distortion model, and tests with synthetic images indicate improved calibration results under limited error conditions.
Abstract: Modern CCD cameras are usually capable of a spatial accuracy greater than 1/50 of the pixel size. However, such accuracy is not easily attained due to various error sources that can affect the image formation process. Current calibration methods typically assume that the observations are unbiased, the only error is the zero-mean independent and identically distributed random noise in the observed image coordinates, and the camera model completely explains the mapping between the 3D coordinates and the image coordinates. In general, these conditions are not met, causing the calibration results to be less accurate than expected. In the paper, a calibration procedure for precise 3D computer vision applications is described. It introduces bias correction for circular control points and a nonrecursive method for reversing the distortion model. The accuracy analysis is presented and the error sources that can reduce the theoretical accuracy are discussed. The tests with synthetic images indicate improvements in the calibration results in limited error conditions. In real images, the suppression of external error sources becomes a prerequisite for successful calibration.

933 citations


Journal ArticleDOI
TL;DR: A stereo algorithm is presented for obtaining disparity maps with occlusion explicitly detected; processing results from synthetic and real image pairs are shown, including ones with ground-truth values for quantitative comparison with other methods.
Abstract: Presents a stereo algorithm for obtaining disparity maps with occlusion explicitly detected. To produce smooth and detailed disparity maps, two assumptions that were originally proposed by Marr and Poggio (1976, 1979) are adopted: uniqueness and continuity. That is, the disparity maps have a unique value per pixel and are continuous almost everywhere. These assumptions are enforced within a three-dimensional array of match values in disparity space. Each match value corresponds to a pixel in an image and a disparity relative to another image. An iterative algorithm updates the match values by diffusing support among neighboring values and inhibiting others along similar lines of sight. By applying the uniqueness assumption, occluded regions can be explicitly identified. To demonstrate the effectiveness of the algorithm, we present the processing results from synthetic and real image pairs, including ones with ground-truth values for quantitative comparison with other methods.
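A minimal 1D sketch of the diffusion-and-inhibition idea above: match values are stored per (pixel, disparity), support is diffused among spatial neighbours, and the uniqueness assumption is enforced by inhibiting competing disparities at each pixel. The random-dot data, 3-tap support window, and normalization rule below are simplified assumptions, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D random-dot pair: the right image is the left shifted by a true disparity of 2.
true_d = 2
left = rng.uniform(0, 1, 64)
right = np.roll(left, -true_d)           # right[x] = left[x + true_d] (circular)

D = 5                                     # disparity range 0..4
match = np.zeros((D, left.size))
for d in range(D):
    diff = left - np.roll(right, d)       # compares left[x] with right[x - d]
    match[d] = np.exp(-diff**2 / 0.01)    # initial match values

# Iterate: diffuse support spatially, then inhibit across disparities
# (a soft winner-take-all implementing the uniqueness assumption).
for _ in range(10):
    support = (np.roll(match, 1, axis=1) + match + np.roll(match, -1, axis=1)) / 3
    match = support**2 / (support**2).sum(axis=0, keepdims=True)

disparity = match.argmax(axis=0)
```

In the full method the inhibition also runs along lines of sight in the second image, which is what lets occluded pixels end up with no winning disparity and be flagged explicitly.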

547 citations


Patent
17 Apr 2000
TL;DR: In this paper, a synthetic image viewed from a virtual viewpoint above a car is created from images captured by cameras imaging the surroundings of the car; the area not imaged by any of the cameras is displayed as a blind spot.
Abstract: A synthetic image viewed from a virtual viewpoint above a car is created from images captured by cameras for imaging the surroundings of the car. In the synthetic image, an illustrated or real image of the car is displayed in the area where the car is present. The area which is not imaged by any of the cameras is displayed as a blind spot.

475 citations


Journal ArticleDOI
TL;DR: In this article, a multiscale analysis for extracting vessels of different sizes according to the scale of the image is presented.

388 citations


Journal ArticleDOI
TL;DR: This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image by considering the foreshortening of the segments of the model in the image.

348 citations


Proceedings ArticleDOI
13 Jun 2000
TL;DR: A method is presented to recover 3D scene structure and camera motion from multiple images without the need for correspondence information by means of an algorithm which iteratively refines a probability distribution over the set of all correspondence assignments.
Abstract: A method is presented to recover 3D scene structure and camera motion from multiple images without the need for correspondence information. The problem is framed as finding the maximum likelihood structure and motion given only the 2D measurements, integrating over all possible assignments of 3D features to 2D measurements. This goal is achieved by means of an algorithm which iteratively refines a probability distribution over the set of all correspondence assignments. At each iteration a new structure from motion problem is solved, using as input a set of 'virtual measurements' derived from this probability distribution. The distribution needed can be efficiently obtained by Markov Chain Monte Carlo sampling. The approach is cast within the framework of Expectation-Maximization, which guarantees convergence to a local maximizer of the likelihood. The algorithm works well in practice, as will be demonstrated using results on several real image sequences.

340 citations


Patent
31 Jan 2000
TL;DR: In this paper, a gray-scale liquid crystal display included in the display system provides for adjustment of the size, shape, and/or transparency of the obstruction of real images, enhancing viewing of selected virtual reality images while allowing viewing of real images, or of virtual images combined with real images, in other viewing areas.
Abstract: A virtual reality system ( 200 - 322 ) stereoscopically projects virtual reality images including a three dimensional image ( 245 ) having an interface image ( 250 ′) in a space observable by a user ( 100 ). The display system includes a substantially transparent display means ( 200 ) which also allows real images of real objects ( 850 ) to be combined or superimposed with the virtual reality images. Selective areas or characteristics of the real images are obstructed by a selective real image obstructer ( 860 ) to enhance viewing of selected virtual reality images while providing for viewing of real images, or of virtual images combined with real images, in other viewing areas. The display system includes either a stereoscopic headset display system or a heads-up display system. The selective real image obstructer is a gray scale liquid crystal display included with the display system, providing for adjustment of the size, shape and/or transparency of the obstruction of real images. The obstruction of real images may be adjusted in response to information for generating the virtual image, manual inputs, or processing of real images by video cameras ( 310 ′ and 320 ′). Other selective real image obstructions include filtering a portion of the spectrum of visible light associated with the real images.

304 citations


Proceedings ArticleDOI
13 Jun 2000
TL;DR: A fast, multiscale algorithm for image segmentation that uses modern numeric techniques to find an approximate solution to normalized cut measures in time that is linear in the size of the image with only a few dozen operations per pixel.
Abstract: We introduce a fast, multiscale algorithm for image segmentation. Our algorithm uses modern numeric techniques to find an approximate solution to normalized cut measures in time that is linear in the size of the image with only a few dozen operations per pixel. In just one pass the algorithm provides a complete hierarchical decomposition of the image into segments. The algorithm detects the segments by applying a process of recursive coarsening in which the same minimization problem is represented with fewer and fewer variables producing an irregular pyramid. During this coarsening process we may compute additional internal statistics of the emerging segments and use these statistics to facilitate the segmentation process. Once the pyramid is completed it is scanned from the top down to associate pixels close to the boundaries of segments with the appropriate segment. The algorithm is inspired by algebraic multigrid (AMG) solvers of minimization problems of heat or electric networks. We demonstrate the algorithm by applying it to real images.
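For context, the normalized-cut measure that this multiscale algorithm approximates can be computed exactly at toy scale via the second eigenvector of the normalized graph Laplacian; the tiny 1D "image", similarity weights, and median split below are illustrative, and the paper's contribution is precisely to avoid this global eigensolve by recursive coarsening:

```python
import numpy as np

# Tiny 1D "image": two homogeneous segments joined by weak edges.
vals = np.r_[np.zeros(10), np.ones(10)]
n = vals.size

# Intensity-similarity weights on a local neighbourhood graph.
W = np.exp(-(vals[:, None] - vals[None, :])**2 / 0.1)
W *= np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 2

d = W.sum(axis=1)
D_inv_sqrt = np.diag(1 / np.sqrt(d))
# Normalized Laplacian; its second eigenvector relaxes the normalized cut.
L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(L_sym)

fiedler = D_inv_sqrt @ eigvecs[:, 1]
labels = (fiedler > np.median(fiedler)).astype(int)
```

The dense `eigh` here costs O(n^3); the paper's AMG-style coarsening reaches an approximate minimizer of the same measure in time linear in the number of pixels.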

301 citations


Journal ArticleDOI
TL;DR: A simple method for recovering the distortion parameters without the use of any calibration objects is proposed and real-time high resolution panoramas are created using this technique.
Abstract: Images taken with wide-angle cameras tend to have severe distortions which pull points towards the optical center. This paper proposes a simple method for recovering the distortion parameters without the use of any calibration objects. Since distortions cause straight lines in the scene to appear as curves in the image, our algorithm seeks to find the distortion parameters that map the image curves to straight lines. The user selects a small set of points along the image curves. Recovery of the distortion parameters is formulated as the minimization of an objective function which is designed to explicitly account for noise in the selected image points. Experimental results are presented for synthetic data as well as real images. We also present the idea of a polycamera which is defined as a tightly packed camera cluster. Possible configurations are proposed to capture very large fields of view. Such camera clusters tend to have a nonsingle viewpoint. We therefore provide analysis of what we call the minimum working distance for such clusters. Finally, we present results for a polycamera consisting of four wide-angle sensors having a minimum working distance of about 4 m. On undistorting the acquired images using our proposed technique, we create real-time high resolution panoramas.
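The core idea above (search for the distortion parameters that map the selected image curves back to straight lines) can be sketched with a one-parameter radial model and a grid search; the model, synthetic data, and plain residual objective below are illustrative assumptions, whereas the paper uses a noise-aware objective over user-selected points:

```python
import numpy as np

k_true = 0.3

def undistort(pts, k):
    # One-parameter radial model: p_u = p_d * (1 + k * |p_d|^2),
    # with the optical centre at the origin (an assumed model).
    r2 = (pts**2).sum(axis=1, keepdims=True)
    return pts * (1 + k * r2)

def distort(pts_u, k, iters=50):
    # Numerical inverse of `undistort` by fixed-point iteration
    # (used here only to fabricate test data).
    pts_d = pts_u.copy()
    for _ in range(iters):
        r2 = (pts_d**2).sum(axis=1, keepdims=True)
        pts_d = pts_u / (1 + k * r2)
    return pts_d

def line_residual(pts):
    # RMS distance of points to their total-least-squares line
    # (smallest singular value of the centred point matrix).
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, compute_uv=False)[-1] / np.sqrt(len(pts))

# A straight scene line, observed as a curve in the distorted image.
t = np.linspace(-0.8, 0.8, 20)
line = np.c_[t, 0.5 * t + 0.2]
curve = distort(line, k_true)

# Recover k as the value that best straightens the selected curve points.
ks = np.linspace(0.0, 1.0, 201)
k_hat = min(ks, key=lambda k: line_residual(undistort(curve, k)))
```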

230 citations


Book ChapterDOI
26 Jun 2000
TL;DR: A novel variational method for image segmentation that unifies boundary- and region-based information sources under the Geodesic Active Region framework; a multi-scale approach is considered to reduce the required computational cost and the risk of convergence to local minima.
Abstract: This paper presents a novel variational method for image segmentation that unifies boundary and region-based information sources under the Geodesic Active Region framework. A statistical analysis based on the Minimum Description Length criterion and the Maximum Likelihood Principle for the observed density function (image histogram) using a mixture of Gaussian elements, indicates the number of the different regions and their intensity properties. Then, the boundary information is determined using a probabilistic edge detector, while the region information is estimated using the Gaussian components of the mixture model. The defined objective function is minimized using a gradient-descent method where a level set approach is used to implement the resulting PDE system. According to the motion equations, the set of initial curves is propagated toward the segmentation result under the influence of boundary and region-based segmentation forces, and being constrained by a regularity force. The changes of topology are naturally handled thanks to the level set implementation, while a coupled multi-phase propagation is adopted that increases the robustness and the convergence rate by imposing the idea of mutually exclusive propagating curves. Finally, to reduce the required computational cost and the risk of convergence to local minima, a multi-scale approach is also considered. The performance of our method is demonstrated on a variety of real images.

Journal ArticleDOI
01 Apr 2000
TL;DR: In this paper, a spatial fuzzy clustering algorithm that exploits the spatial contextual information in image data is presented, which is adaptive to the image content in the sense that influence from the neighbouring pixels is suppressed in nonhomogeneous regions in the image.
Abstract: The authors present a spatial fuzzy clustering algorithm that exploits the spatial contextual information in image data. The objective functional of their method utilises a new dissimilarity index that takes into account the influence of the neighbouring pixels on the centre pixel in a 3×1 window. The algorithm is adaptive to the image content in the sense that influence from the neighbouring pixels is suppressed in nonhomogeneous regions in the image. A cluster merging scheme that merges two clusters based on their closeness and their degree of overlap is presented. Through this merging scheme, an 'optimal' number of clusters can be determined automatically as iteration proceeds. Experimental results with synthetic and real images indicate that the proposed algorithm is more tolerant to noise, better at resolving classification ambiguity and coping with different cluster shape and size than the conventional fuzzy c-means algorithm.
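A toy sketch of the idea on a noisy 1D signal: fuzzy c-means where the dissimilarity of a pixel to a cluster centre mixes its own distance with its neighbours' distances. The window, mixing weight, and update rules below are simplified assumptions, not the authors' exact functional (in particular, the adaptive suppression near edges and the cluster-merging scheme are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy 1D "image": two homogeneous regions with means 0.2 and 0.8.
img = np.r_[np.full(50, 0.2), np.full(50, 0.8)] + rng.normal(0, 0.1, 100)

C, m, lam = 2, 2.0, 0.5          # clusters, fuzzifier, spatial weight (assumed)
v = np.array([0.0, 1.0])         # initial cluster centres

for _ in range(30):
    d2 = (img[None, :] - v[:, None])**2                   # (C, N) pixel term
    nb = (np.roll(d2, 1, axis=1) + np.roll(d2, -1, axis=1)) / 2
    dis = (1 - lam) * d2 + lam * nb + 1e-12               # spatial dissimilarity
    u = dis**(-1 / (m - 1))                               # FCM membership update
    u /= u.sum(axis=0, keepdims=True)
    v = (u**m @ img) / (u**m).sum(axis=1)                 # centre update

labels = u.argmax(axis=0)
```

The neighbour term `nb` is what makes isolated noisy pixels inherit the membership of their surroundings instead of forming speckle.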

Journal ArticleDOI
TL;DR: The aim of this work is to make walkthroughs and augmented reality possible in a 3D model reconstructed from a single image, by calibrating a camera and recovering the geometry and the photometry (textures) of objects from that single image.
Abstract: In this paper, we show how to calibrate a camera and recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make walkthroughs and augmented reality possible in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment in the image is known (for determining the translation vector), (3) the principal point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R_o. After the focal length has been computed, the rotation matrix and the translation vector are evaluated in turn, describing the rigid motion between R_o and the camera coordinate system R_c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box so that it best fits the potential objects within the scene as seen through the single image. To each face of the box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.
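The focal-length step can be illustrated with the classical constraint that two vanishing points v1, v2 of orthogonal scene directions satisfy (v1 - p) . (v2 - p) = -f^2 when the principal point p is at the image centre and the aspect ratio is 1, exactly the assumptions listed above. The synthetic camera below is illustrative:

```python
import numpy as np

f_true, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f_true, 0, cx], [0, f_true, cy], [0, 0, 1.0]])

# A rotation chosen so that both vanishing points are finite.
a, b = 0.4, 0.3
Ry = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
Rx = np.array([[1, 0, 0], [0, np.cos(b), -np.sin(b)], [0, np.sin(b), np.cos(b)]])
R = Rx @ Ry

def vanishing_point(direction):
    h = K @ R @ direction           # image of the point at infinity
    return h[:2] / h[2]

v1 = vanishing_point(np.array([1.0, 0, 0]))   # world X axis
v2 = vanishing_point(np.array([0, 0, 1.0]))   # world Z axis (orthogonal)

# Orthogonality of the two back-projected rays gives f^2 = -(v1 - p).(v2 - p).
p = np.array([cx, cy])
f_hat = np.sqrt(-(v1 - p) @ (v2 - p))
```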

Proceedings ArticleDOI
01 Jun 2000
TL;DR: This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image by considering the foreshortening of the segments of the model in the image.
Abstract: This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image. Unlike previous approaches, the proposed reconstruction method does not assume that the imagery was acquired with a calibrated camera. An analysis is presented which demonstrates that there is a family of solutions to this reconstruction problem, parameterized by a single variable. A simple and effective algorithm is proposed for recovering the entire set of solutions by considering the foreshortening of the segments of the model in the image. Results obtained by applying this algorithm to real images are presented.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: Two estimators suitable for the enhancement of text images are proposed: a maximum a posteriori (MAP) estimator based on a Huber prior and an estimator regularized using the total variation norm, which demonstrates the improved noise robustness of these approaches over the Irani and Peleg estimator.
Abstract: The objective of this work is the super-resolution enhancement of image sequences. We consider in particular images of scenes for which the point-to-point image transformation is a plane projective transformation. We first describe the imaging model, and a maximum likelihood (ML) estimator of the super-resolution image. We demonstrate the extreme noise sensitivity of the unconstrained ML estimator. We show that the Irani and Peleg (1991, 1993) super-resolution algorithm does not suffer from this sensitivity, and explain that this stability is due to the error back-projection method which effectively constrains the solution. We then propose two estimators suitable for the enhancement of text images: a maximum a posteriori (MAP) estimator based on a Huber prior and an estimator regularized using the total variation norm. We demonstrate the improved noise robustness of these approaches over the Irani and Peleg estimator. We also show the effects of a poorly estimated point spread function (PSF) on the super-resolution result and explain conditions necessary for this parameter to be included in the optimization. Results are evaluated on both real and synthetic sequences of text images. In the case of the real images, the projective transformations relating the images are estimated automatically from the image data, so that the entire algorithm is automatic.
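A 1D toy of the MAP idea: a least-squares data term under a blur-and-downsample forward model, plus a Huber prior on first differences, minimized by gradient descent. The forward model, parameters, and step size below are illustrative assumptions, not the paper's 2D projective pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D super-resolution: the observation is a blurred, 2x-downsampled,
# noisy version of a piecewise-constant high-resolution signal.
x_true = np.r_[np.zeros(20), np.ones(24), np.zeros(20)]
kernel = np.array([0.25, 0.5, 0.25])           # assumed (symmetric) PSF

def forward(x):
    blurred = np.convolve(x, kernel, mode="same")
    return blurred[::2]                        # downsample by 2

y = forward(x_true) + rng.normal(0, 0.05, x_true.size // 2)

def huber_grad(d, alpha=0.05):
    # Derivative of the Huber penalty: quadratic near 0, linear in the tails,
    # so edges are preserved while small oscillations are smoothed.
    return np.where(np.abs(d) <= alpha, 2 * d, 2 * alpha * np.sign(d))

lam, step = 0.05, 0.5
x = np.zeros_like(x_true)
for _ in range(500):
    r = forward(x) - y
    # Adjoint of the forward model: upsample, then correlate with the kernel.
    up = np.zeros_like(x)
    up[::2] = r
    data_grad = np.convolve(up, kernel[::-1], mode="same")
    d = np.diff(x)
    prior_grad = np.r_[0, huber_grad(d)] - np.r_[huber_grad(d), 0]
    x -= step * (data_grad + lam * prior_grad)
```

With `lam = 0` this reduces to the unconstrained ML estimator whose noise sensitivity the paper demonstrates; the Huber term is what stabilizes the null-space components of the forward model.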

Proceedings ArticleDOI
15 Jun 2000
TL;DR: A factorization-based method is proposed for the estimation of relative pose between planes and cameras, for the general case of n planes seen in m views, and a mechanism for computing missing data, i.e. when one or several of the planes are not visible in one or several of the images, is described.
Abstract: We present several methods for the estimation of relative pose between planes and cameras, based on projections of sets of coplanar features in images. While such methods exist for simple cases, especially one plane seen in one or several views, the aim of this paper is to propose solutions for multi-plane multi-view situations, possibly with little overlap. We propose a factorization-based method for the general case of n planes seen in m views. A mechanism for computing missing data, i.e. when one or several of the planes are not visible in one or several of the images, is described. Experimental results for real images are shown.

Journal ArticleDOI
TL;DR: It is demonstrated that the idea of grouping together features that satisfy a geometric relationship can be used both for (automatic) detection and for estimation of vanishing points and lines.

Proceedings ArticleDOI
13 Jun 2000
TL;DR: By folding spatial and temporal cues into a single alignment framework, situations which are inherently ambiguous for traditional image-to-image alignment methods are often uniquely resolved by sequence-to-sequence alignment.
Abstract: The paper presents an approach for establishing correspondences in time and in space between two different video sequences of the same dynamic scene, recorded by stationary uncalibrated video cameras. The method simultaneously estimates both spatial alignment as well as temporal synchronization (temporal alignment) between the two sequences, using all available spatio-temporal information. Temporal variations between image frames (such as moving objects or changes in scene illumination) are powerful cues for alignment, which cannot be exploited by standard image-to-image alignment techniques. We show that by folding spatial and temporal cues into a single alignment framework, situations which are inherently ambiguous for traditional image-to-image alignment methods are often uniquely resolved by sequence-to-sequence alignment. We also present a "direct" method for sequence-to-sequence alignment. The algorithm simultaneously estimates spatial and temporal alignment parameters directly from measurable sequence quantities, without requiring prior estimation of point correspondences, frame correspondences, or moving object detection. Results are shown on real image sequences taken by multiple video cameras.

Proceedings ArticleDOI
13 Jun 2000
TL;DR: Zebra-crossings are detected by looking for groups of concurrent lines, and edges are then partitioned using intensity variation information; three methods are developed to estimate the pose: a homography-search approach using an a priori model, recovering the surface normal from the vanishing line computed from equally spaced lines, and recovering it from two vanishing points.
Abstract: Zebra-crossings are useful road features for outdoor navigation in mobility aids for the partially sighted. In this paper, zebra-crossings are detected by looking for groups of concurrent lines; edges are then partitioned using intensity variation information. In order to tackle the ambiguity of the detection algorithm in distinguishing zebra-crossings from stair-cases, pose information is sought. Three methods are developed to estimate the pose: a homography search approach using an a priori model; recovering the surface normal from the vanishing line computed from equally-spaced lines; and recovering it from two vanishing points. These algorithms have been applied to real images with promising results, and they are also useful in some other shape-from-texture applications.

Journal ArticleDOI
TL;DR: It is proved that a 2D camera undergoing planar motion reduces to a 1D camera, and a new method for self-calibrating a 2D camera using planar motions is deduced.
Abstract: We introduce the concept of self-calibration of a 1D projective camera from point correspondences, and describe a method for uniquely determining the two internal parameters of a 1D camera, based on the trifocal tensor of three 1D images. The method requires estimating the trifocal tensor, which (unlike the trifocal tensor of 2D images) can be achieved linearly with no approximation, and then solving for the roots of a cubic polynomial in one variable. Interestingly enough, we prove that a 2D camera undergoing planar motion reduces to a 1D camera. From this observation, we deduce a new method for self-calibrating a 2D camera using planar motions. Both the self-calibration method for a 1D camera and its applications for 2D camera calibration are demonstrated on real image sequences.

Proceedings ArticleDOI
13 Jun 2000
TL;DR: This work proposes a unified geometrical representation of the static scene and the moving objects that enables the embedding of the motion constraints into the scene structure, which leads to a factorization-based algorithm for reconstructing a scene containing multiple moving objects.
Abstract: We describe an algorithm for reconstructing a scene containing multiple moving objects. Given a monocular image sequence, we recover the scene structure, the trajectories of the moving objects and the camera motion simultaneously. The number of the moving objects is automatically detected without prior motion segmentation. Assuming that the objects are moving linearly with constant speeds, we propose a unified geometrical representation of the static scene and the moving objects. This representation enables the embedding of the motion constraints into the scene structure, which leads to a factorization-based algorithm. Experimental results on synthetic and real images are presented.
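For context, the classical static-scene affine factorization that such methods build on can be sketched in a few lines: centred image tracks stack into a measurement matrix of rank at most 3, which SVD splits into motion and structure. The synthetic affine cameras below are illustrative, and the paper's extension to linearly moving objects is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stack the centred 2D tracks of n_pts points over n_views views into W.
n_pts, n_views = 30, 6
X = rng.normal(0, 1, (3, n_pts))                 # 3D points
W_rows = []
for _ in range(n_views):
    M, _ = np.linalg.qr(rng.normal(0, 1, (3, 3)))
    P = M[:2]                                    # 2x3 affine camera
    proj = P @ X
    # Centring removes the (unmodelled) translation per view.
    W_rows.append(proj - proj.mean(axis=1, keepdims=True))
W = np.vstack(W_rows)                            # (2m, n), rank <= 3

U, s, Vt = np.linalg.svd(W)
# Rank-3 factorization: W = motion @ structure (up to an affine ambiguity).
motion = U[:, :3] * s[:3]
structure = Vt[:3]
rank3_error = s[3] / s[0]                        # vanishes for exact data
```

With noise, `s[3]/s[0]` measures how well the rank-3 (single rigid motion) model fits; multi-body formulations like the paper's raise the rank and embed the object trajectories into the structure.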

Journal ArticleDOI
TL;DR: A nonlinear PDE-based model which unifies the popular model of Alvarez, Lions and Morel (ALM) for image denoising and the Caselles, Kimmel and Sapiro model of geodesic “snakes” and demonstrates the smoothing and segmentation results on several real images.
Abstract: Image denoising and segmentation are fundamental problems in the field of image processing and computer vision with numerous applications. In this paper, we present a nonlinear PDE-based model for image denoising and segmentation which unifies the popular model of Alvarez, Lions and Morel (ALM) for image denoising and the Caselles, Kimmel and Sapiro model of geodesic “snakes”. Our model includes nonlinear diffusive as well as reactive terms and leads to quality denoising and segmentation results as depicted in the experiments presented here. We present a proof for the existence, uniqueness, and stability of the viscosity solution of this PDE-based model. The proof is in spirit similar to the proof of the ALM model; however, there are several differences which arise due to the presence of the reactive terms that require careful treatment/consideration. A fast implementation of our model is realized by embedding the model in a scale space and then achieving the solution via a dynamic system governed by a coupled system of first-order differential equations. The dynamic system finds the solution at a coarse scale and tracks it continuously to a desired fine scale. We demonstrate the smoothing and segmentation results on several real images.

Book ChapterDOI
26 Jun 2000
TL;DR: This work proposes a hybrid method which embodies the benefits of both gamut mapping and illumination change, and generally performs better than either, and verifies that the new method for choosing the solution offers significant improvement, both in the case of synthetic data and with real images.
Abstract: In this paper we introduce two improvements to the three-dimensional gamut mapping approach to computational colour constancy. This approach consists of two separate parts. First, the possible solutions are constrained. This part depends on the diagonal model of illumination change, which in turn is a function of the camera sensors. In this work we propose a robust method for relaxing this reliance on the diagonal model. The second part of the gamut mapping paradigm is to choose a solution from the feasible set. Currently there are two general approaches for doing so. We propose a hybrid method which embodies the benefits of both, and generally performs better than either. We provide results using both generated data and a carefully calibrated set of 321 images. In the case of the modification for diagonal model failure, we provide synthetic results using two cameras with a distinctly different degree of support for the diagonal model. Here we verify that the new method does indeed reduce error due to the diagonal model. We also verify that the new method for choosing the solution offers significant improvement, both in the case of synthetic data and with real images.

Patent
Nobutatsu Nakamura1
21 Aug 2000
TL;DR: In this paper, a projection display unit provides an image inputting means to which an original image is input, a screen-surface obtaining means for obtaining the three-dimensional shape of a screen surface by calculating the azimuth angle, tilt angle, and distance of the screen surface relative to the projection display unit using the normal vector of the screen surface, and an image outputting means for outputting the corrected image as a projected image.
Abstract: A projection display unit is provided in which distortion of an image is corrected even when the image is projected from an arbitrary direction, and which can further correct distortion caused by projection onto an irregular or free-form screen surface. The projection display unit provides an image inputting means to which an original image is input, a screen surface obtaining means for obtaining the three dimensional shape of the screen surface by calculating the azimuth angle, the tilt angle, and the distance of the screen surface relative to the projection display unit using the normal vector of the screen surface, an image correcting means for executing an inclination correction and a zooming in/out correction on the original image according to the three dimensional shape of the screen surface, and an image outputting means for outputting the corrected image as the projected image.

Book ChapterDOI
26 Jun 2000
TL;DR: It is shown that it is possible to calibrate a camera using just a flat, textureless Lambertian surface and constant illumination, using the effects of off-axis illumination and vignetting, which result in reduction of light into the camera at off- axis angles.
Abstract: In this paper, we show that it is possible to calibrate a camera using just a flat, textureless Lambertian surface and constant illumination. This is done using the effects of off-axis illumination and vignetting, which result in reduction of light into the camera at off-axis angles. We use these imperfections to our advantage. The intrinsic parameters that we consider are the focal length, principal point, aspect ratio, and skew. We also consider the effect of the tilt of the camera. Preliminary results from simulated and real experiments show that the focal length can be recovered relatively robustly under certain conditions.
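Why radial falloff alone can constrain the focal length: under the classical cos^4 off-axis illumination law, a flat uniform surface images to I(r) = I0 * cos^4(arctan(r/f)), so the shape of the falloff curve depends on f. The sketch below fits f to an idealized noise-free profile; the cos^4-only model (no lens vignetting term, no tilt) and the data are illustrative assumptions:

```python
import numpy as np

f_true, I0 = 500.0, 1.0

# Radial intensity profile of a flat, uniform Lambertian surface
# under the cos^4 law; r is the distance (in pixels) from the centre.
r = np.linspace(0, 400, 81)
intensity = I0 * np.cos(np.arctan(r / f_true))**4

# Recover the focal length as the value best explaining the falloff,
# fitting the unknown overall brightness in closed form for each f.
fs = np.linspace(100, 1000, 1801)

def model_error(f):
    m = np.cos(np.arctan(r / f))**4
    scale = (m @ intensity) / (m @ m)     # least-squares brightness
    return np.sum((intensity - scale * m)**2)

f_hat = min(fs, key=model_error)
```

The overall brightness is a nuisance parameter, which is why only the curvature of the falloff, not its absolute level, carries information about f.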

Journal ArticleDOI
TL;DR: This work suggests efficient implementations for three nonnegatively constrained restoration schemes: constrained least squares, maximum likelihood, and maximum entropy, and shows that with a certain parameterization, and using a quasi-Newton scheme, these methods are very similar.

Journal ArticleDOI
TL;DR: A new dichotomization technique for multilevel thresholding is proposed, based on selecting as the threshold value the consistent peak location of the correlation function over the histogram region of interest; it gives results that are consistent in the sense of human perception and finds uniform regions in the image plane satisfactorily.

Patent
30 May 2000
TL;DR: In this paper, a virtual reality system (860-862) stereoscopically projects virtual reality images including a three dimensional image (960, 962) having an interface image (962) in a space observable by a user.
Abstract: A virtual reality system (860-862) stereoscopically projects virtual reality images including a three dimensional image (960, 962) having an interface image (962) in a space observable by a user (100). The display system includes a substantially transparent display (862) which also allows real images of real objects (950) to be combined or superimposed with the virtual reality images. Selective areas or characteristics of the real images are obstructed (893, 893′) by a selective real image obstructer (860) to enhance viewing of selected virtual reality images while providing for viewing of real images, or of virtual images combined with real images, in other viewing areas. Icons (FIG. 21) are displayed opposite the real image obstructions in order that a second person can ascertain the nature of the information viewed by the viewer.

Proceedings ArticleDOI
11 Jun 2000
TL;DR: The authors present a Gaussian window scheme in which the local statistics (here the sum of local correlation coefficients) are weighted with Gaussian kernels, and show that the criterion can easily be differentiated to obtain forces to guide the registration.
Abstract: Non-rigid registration of medical images is usually presented as a physical model driven by forces deriving from a measure of similarity of the images. These forces can be computed using a gradient-descent scheme for simple intensity-based similarity measures. However, for more complex similarity measures, using for instance local statistics, the forces are usually found using a block matching scheme. Here, the authors introduce a Gaussian window scheme, where the local statistics (here the sum of local correlation coefficients) are weighted with Gaussian kernels. The authors show that the criterion can be differentiated easily to obtain forces to guide the registration. Moreover, these forces can be computed very efficiently by global convolutions of the Gaussian window with the real image, in a time independent of the size of the Gaussian window. The authors also present two minimization strategies by gradient descent to optimize the similarity measure: a linear search and a Gauss-Newton-like scheme. Experiments on synthetic and real 3D data show that the sum of local correlation coefficients optimized using a Gauss-Newton scheme is a fast and accurate method to register images corrupted by a non-uniform bias.
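The key computational point, that Gaussian-weighted local correlation statistics can be obtained entirely from global convolutions, can be sketched in 1D. The toy filter, data, and the locally affine intensity relation below are illustrative; the authors' full force computation for registration is not reproduced:

```python
import numpy as np

def gauss_filter(x, sigma=2.0):
    # Toy 1D Gaussian smoothing by direct convolution (zero-padded).
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    return np.convolve(x, g, mode="same")

def local_correlation(I, J, sigma=2.0, eps=1e-9):
    # Gaussian-weighted local correlation coefficient at every position,
    # computed only with global convolutions: cost independent of window size
    # per pixel once the filtered moment images are available.
    mI, mJ = gauss_filter(I, sigma), gauss_filter(J, sigma)
    cov = gauss_filter(I * J, sigma) - mI * mJ
    vI = gauss_filter(I * I, sigma) - mI**2
    vJ = gauss_filter(J * J, sigma) - mJ**2
    return cov / np.sqrt(vI * vJ + eps)

rng = np.random.default_rng(5)
I = rng.normal(0, 1, 256)
J = 3.0 * I + 2.0          # locally affine intensity relation, e.g. a bias field
lc = local_correlation(I, J)
```

Because `J` differs from `I` only by a (locally) affine map, the local correlation is close to 1 everywhere away from the boundaries, which is exactly the invariance that makes this measure robust to a non-uniform bias.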

Journal ArticleDOI
TL;DR: It is shown that weight adaptation plays the roles of noise removal and feature preservation, that the scheme is insensitive to termination time, and that the resulting dynamic weights over a wide range of iterations lead to the same segmentation results.
Abstract: We propose a method for image segmentation based on a neural oscillator network. Unlike previous methods, weight adaptation is adopted during segmentation to remove noise and preserve significant discontinuities in an image. Moreover, a logarithmic grouping rule is proposed to facilitate grouping of oscillators representing pixels with coherent properties. We show that weight adaptation plays the roles of noise removal and feature preservation. In particular, our weight adaptation scheme is insensitive to termination time and the resulting dynamic weights in a wide range of iterations lead to the same segmentation results. A computer algorithm derived from oscillatory dynamics is applied to synthetic and real images, and simulation results show that the algorithm yields favorable segmentation results in comparison with other recent algorithms. In addition, the weight adaptation scheme can be directly transformed to a novel feature-preserving smoothing procedure. We also demonstrate that our nonlinear smoothing algorithm achieves good results for various kinds of images.