
Topic

Real image

About: Real image is a research topic. Over its lifetime, 11,765 publications have been published within this topic, receiving 255,887 citations.


Papers
Journal ArticleDOI
Abstract: A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach to object segmentation makes it possible to connect classical “snakes” based on energy minimization with geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, allowing stable boundary detection when their gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images, including objects with holes and medical data imagery, demonstrate its power. The results may be extended to 3D object segmentation as well.

4,822 citations
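The model sketched in this abstract admits a compact statement: the object boundary is a curve of minimal image-weighted length, and in the usual level-set implementation the embedding function evolves by the PDE below, as commonly written in the geodesic active contour literature. Here g is an edge-stopping function of the image gradient and ν an optional constant (balloon) speed; the particular choice of g shown is one common option, not a prescription from the paper.

```latex
% Geodesic active contour: the boundary minimises a length weighted by an
% edge indicator g(|\nabla I|), e.g. g(r) = 1/(1 + r^2).
E(C) = \int_{0}^{L(C)} g\bigl(\lvert \nabla I\bigl(C(s)\bigr) \rvert\bigr)\, ds

% Level-set evolution of the embedding function u (contour = zero level set);
% \nu is an optional constant (balloon) speed.
\frac{\partial u}{\partial t}
  = g(I)\,\lvert \nabla u \rvert
    \left(\operatorname{div}\!\left(\frac{\nabla u}{\lvert \nabla u \rvert}\right) + \nu\right)
  + \nabla g \cdot \nabla u
```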

Journal ArticleDOI
TL;DR: A new information-theoretic approach is presented for finding the pose of an object in an image that works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation.
Abstract: A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and may foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied here the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.

3,432 citations
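As an illustration of the intensity-based criterion this abstract describes, the sketch below estimates mutual information from a joint histogram and searches integer translations for the pose that maximizes it. The function names, the histogram estimator, and the exhaustive search are simplifications for illustration; the paper itself uses a stochastic-approximation scheme rather than brute-force search.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based estimate of the mutual information between two
    equally sized intensity images (a simplified stand-in for the
    estimator used in the paper)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_translation(model, image, max_shift=10):
    """Exhaustive search over integer translations; assumes `image` is the
    model size padded by max_shift on every side.  Returns the shift that
    maximises mutual information, i.e. the estimated pose."""
    h, w = model.shape
    best, best_dxdy = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            patch = image[max_shift + dy: max_shift + dy + h,
                          max_shift + dx: max_shift + dx + w]
            mi = mutual_information(model, patch)
            if mi > best:
                best, best_dxdy = mi, (dx, dy)
    return best_dxdy, best
```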

Journal ArticleDOI
TL;DR: This paper shows how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods.
Abstract: Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.

2,298 citations
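A minimal sketch of the recognition pipeline the abstract implies, assuming each subject's k single-source images are available as arrays: the images are stacked as basis vectors, orthonormalized, and a probe is assigned to the subject whose illumination subspace leaves the smallest reconstruction residual. All names here are illustrative, not taken from the paper.

```python
import numpy as np

def subspace_from_images(images):
    """Stack k images taken under k single point sources as the columns of
    a basis matrix; an orthonormal basis of their span is obtained via QR."""
    B = np.stack([im.ravel().astype(float) for im in images], axis=1)
    Q, _ = np.linalg.qr(B)
    return Q

def residual(probe, Q):
    """Distance from a probe image to the span of the basis Q."""
    x = probe.ravel().astype(float)
    proj = Q @ (Q.T @ x)
    return np.linalg.norm(x - proj)

def recognize(probe, gallery):
    """gallery: {identity: Q}; return the identity whose illumination
    subspace best explains the probe image."""
    return min(gallery, key=lambda name: residual(probe, gallery[name]))
```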

Journal ArticleDOI
TL;DR: A new robust estimator, MLESAC, is presented as a generalization of the RANSAC estimator: it adopts the same sampling strategy as RANSAC to generate putative solutions, but chooses the solution that maximizes the likelihood rather than just the number of inliers.
Abstract: A new method is presented for robustly estimating multiple view relations from point correspondences. The method comprises two parts. The first is a new robust estimator MLESAC which is a generalization of the RANSAC estimator. It adopts the same sampling strategy as RANSAC to generate putative solutions, but chooses the solution that maximizes the likelihood rather than just the number of inliers. The second part of the algorithm is a general purpose method for automatically parameterizing these relations, using the output of MLESAC. A difficulty with multiview image relations is that there are often nonlinear constraints between the parameters, making optimization a difficult task. The parameterization method overcomes the difficulty of nonlinear constraints and conducts a constrained optimization. The method is general and its use is illustrated for the estimation of fundamental matrices, image–image homographies, and quadratic transformations. Results are given for both synthetic and real images. It is demonstrated that the method gives results equal or superior to those of previous approaches.

2,021 citations
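The scoring rule that distinguishes MLESAC from RANSAC can be shown on a toy problem. The sketch below fits a 2-D line (a stand-in for the fundamental matrices and homographies treated in the paper): minimal samples are drawn exactly as in RANSAC, but each candidate is ranked by the negative log-likelihood of a Gaussian-inlier / uniform-outlier mixture. The fixed mixing proportion is a simplification; the paper estimates it rather than fixing it.

```python
import numpy as np

def mlesac_line(points, iters=500, sigma=1.0, nu=100.0, seed=0):
    """MLESAC-style robust line fit on an (N, 2) point array.  Candidates
    come from minimal two-point samples as in RANSAC, but are scored by
    the negative log-likelihood of a mixture of a Gaussian inlier term
    (scale sigma) and a uniform outlier term (width nu)."""
    rng = np.random.default_rng(seed)
    gamma = 0.5                      # fixed inlier mixing proportion (simplification)
    best_nll, best_model = np.inf, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        # line a*x + b*y + c = 0 through the two sampled points, normalised
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        a, b, c = a / norm, b / norm, c / norm
        r = a * points[:, 0] + b * points[:, 1] + c          # point-to-line residuals
        inlier = gamma * np.exp(-r**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        outlier = (1 - gamma) / nu
        nll = -np.sum(np.log(inlier + outlier))
        if nll < best_nll:                                   # keep most likely model
            best_nll, best_model = nll, (a, b, c)
    return best_model
```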

Journal ArticleDOI
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.

1,917 citations
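As a concrete instance of one of the classes surveyed (single linkage region growing), the sketch below joins 4-connected pixels whose intensity difference falls below a threshold and returns an integer label map. The function name and threshold value are illustrative choices, not taken from the paper.

```python
import numpy as np
from collections import deque

def single_linkage_segmentation(image, thresh=10):
    """Single-linkage region growing on a 2-D grayscale array: 4-connected
    pixels are merged into one region whenever their intensity difference
    is at most `thresh`.  Returns an integer label map."""
    h, w = image.shape
    labels = np.full((h, w), -1, dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # breadth-first flood of one region starting at (sy, sx)
            queue = deque([(sy, sx)])
            labels[sy, sx] = current
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and abs(int(image[ny, nx]) - int(image[y, x])) <= thresh):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
            current += 1
    return labels
```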


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (94% related)
Feature (computer vision): 128.2K papers, 1.7M citations (93% related)
Convolutional neural network: 74.7K papers, 2M citations (92% related)
Feature extraction: 111.8K papers, 2.1M citations (92% related)
Image processing: 229.9K papers, 3.5M citations (92% related)
Performance Metrics

No. of papers in the topic in previous years:

Year   Papers
2022   7
2021   383
2020   545
2019   562
2018   444
2017   413