
Showing papers on "Real image published in 2009"


Journal ArticleDOI
TL;DR: Results on real images demonstrate that the proposed adaptation of the nonlocal (NL)-means filter for speckle reduction in ultrasound (US) images accurately preserves edges and structural details of the image.
Abstract: In image processing, restoration is expected to improve both the qualitative inspection of the image and the performance of quantitative image analysis techniques. In this paper, an adaptation of the nonlocal (NL)-means filter is proposed for speckle reduction in ultrasound (US) images. Since the NL-means filter was originally developed for additive white Gaussian noise, we propose a Bayesian framework to derive an NL-means filter adapted to a relevant ultrasound noise model. Quantitative results on synthetic data show the performance of the proposed method compared with well-established and state-of-the-art methods. Results on real images demonstrate that the proposed method accurately preserves edges and structural details of the image.
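For context, the classical NL-means weighting that the paper adapts can be sketched as follows. This is a minimal, pure-NumPy version of the baseline Gaussian-noise formulation, not the Bayesian speckle-adapted filter proposed in the paper; the patch size, search window, and filtering parameter are illustrative.

```python
import numpy as np

def nl_means_gaussian(img, patch=3, search=7, h=0.1):
    """Classical NL-means for additive Gaussian noise (the baseline the paper adapts).

    img    : 2D float array in [0, 1]
    patch  : half-width of the comparison patch
    search : half-width of the search window
    h      : filtering parameter controlling the decay of the weights
    """
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # Euclidean patch distance
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

Roughly speaking, the paper's Bayesian adaptation replaces this Euclidean patch distance with one derived from the ultrasound speckle model.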

547 citations


Proceedings ArticleDOI
20 Jun 2009
TL;DR: Without requiring any prior information about the blur kernel as input, the proposed approach is able to recover high-quality images from given blurred images, and the new sparsity constraints under tight frame systems enable the application of a fast algorithm, linearized Bregman iteration, to efficiently solve the proposed minimization problem.
Abstract: Restoring a clear image from a single motion-blurred image caused by camera shake has long been a challenging problem in digital imaging. Existing blind deblurring techniques either remove only simple motion blurring or need user interaction to handle more complex cases. In this paper, we present an approach to remove motion blurring from a single image by formulating blind deblurring as a new joint optimization problem, which simultaneously maximizes the sparsity of the blur kernel and the sparsity of the clear image under suitable redundant tight frame systems (a curvelet system for kernels and a framelet system for images). Without requiring any prior information about the blur kernel as input, our proposed approach is able to recover high-quality images from given blurred images. Furthermore, the new sparsity constraints under tight frame systems enable the application of a fast algorithm, linearized Bregman iteration, to efficiently solve the proposed minimization problem. Experiments on both simulated and real images show that our algorithm can effectively remove complex motion blurring from natural images.

285 citations


Proceedings ArticleDOI
16 Apr 2009
TL;DR: This work formulates the problem in a variational Bayesian framework and reconstructs both the surface of the scene and the (superresolved) light field.
Abstract: Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.

279 citations


Journal ArticleDOI
TL;DR: This paper investigates the problem of matching line segments automatically from their neighborhood appearance alone, without resorting to any other constraints or a priori knowledge, and proposes a novel line descriptor called the mean-standard deviation line descriptor (MSLD).

275 citations


Journal ArticleDOI
TL;DR: A modified version of the linearized Bregman iteration is proposed and analyzed; numerical examples show that the method is very simple to implement, robust to noise, and effective for image deblurring.
Abstract: Real images usually have sparse approximations under some tight frame systems derived from framelets, an oversampled discrete (window) cosine transform, or a Fourier transform. In this paper, we propose a method for image deblurring in tight frame domains. The problem is reduced to finding a sparse solution of a system of linear equations whose coefficient matrix is rectangular. Then, a modified version of the linearized Bregman iteration proposed and analyzed in [J.-F. Cai, S. Osher, and Z. Shen, Math. Comp., to appear, UCLA CAM Report (08-52), 2008; J.-F. Cai, S. Osher, and Z. Shen, Math. Comp., to appear, UCLA CAM Report (08-06), 2008; S. Osher et al., UCLA CAM Report (08-37), 2008; W. Yin et al., SIAM J. Imaging Sci., 1 (2008), pp. 143-168] can be applied. Numerical examples show that the method is very simple to implement, robust to noise, and effective for image deblurring.
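As context for the iteration referred to above, a minimal dense-matrix sketch of the linearized Bregman iteration for finding a sparse solution of a rectangular system A u = b might look like the following; the step size, threshold, and toy problem are placeholders, and a practical deblurring code would apply A and its adjoint through fast frame transforms rather than storing explicit matrices.

```python
import numpy as np

def soft_shrink(x, mu):
    """Component-wise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, delta=None, n_iter=2000):
    """Basic linearized Bregman iteration for a sparse solution of A u = b."""
    if delta is None:
        # a step size below 1 / ||A||^2 keeps the iteration stable
        delta = 1.0 / (np.linalg.norm(A, 2) ** 2)
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ u)          # gradient step on the residual
        u = delta * soft_shrink(v, mu)  # shrinkage promotes sparsity
    return u

# Toy usage: recover a sparse vector from an underdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
x_hat = linearized_bregman(A, A @ x_true, mu=2.0)
```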

196 citations


Patent
13 Feb 2009
TL;DR: In this article, the authors present a method for accessing geometrically located data that represent one or more virtual items with respect to a three-dimensional coordinate system, and generating a 3D map based at least in part on real image data of a three-dimensional space as acquired by a camera.
Abstract: An exemplary method includes accessing geometrically located data that represent one or more virtual items with respect to a three-dimensional coordinate system; generating a three-dimensional map based at least in part on real image data of a three-dimensional space as acquired by a camera; rendering to a physical display a mixed reality scene that includes the one or more virtual items at respective three-dimensional positions in a real image of the three-dimensional space acquired by the camera; and re-rendering to the physical display the mixed reality scene upon a change in the field of view of the camera. Other methods, devices, systems, etc., are also disclosed.

196 citations


Proceedings ArticleDOI
17 Apr 2009
TL;DR: It is observed from the experiments that the developed system successfully detects and recognizes the vehicle number plate in real images.
Abstract: Automatic Number Plate Recognition (ANPR) is an image processing technology which uses the number (license) plate to identify a vehicle. The objective is to design an efficient automatic authorized-vehicle identification system using the vehicle number plate. The system is implemented at the entrance of a highly restricted area, such as a military zone or the area around top government offices, e.g. Parliament or the Supreme Court, for security control. The developed system first detects the vehicle and then captures the vehicle image. The vehicle number plate region is extracted using image segmentation, and optical character recognition is used to recognize the characters. The resulting data is then compared with the records in a database to retrieve specific information such as the vehicle's owner, place of registration, address, etc. The system is implemented and simulated in Matlab, and its performance is tested on real images. It is observed from the experiments that the developed system successfully detects and recognizes the vehicle number plate in real images.
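The abstract gives no implementation details beyond noting that the system is simulated in Matlab; the following is a hypothetical sketch of a comparable detect-then-OCR pipeline using OpenCV and Tesseract (both assumed available), with a crude aspect-ratio heuristic standing in for the paper's plate-localization step.

```python
import cv2                # OpenCV 4.x assumed
import pytesseract        # Tesseract OCR bindings; the tesseract binary must be installed

def read_plate(image_path):
    """Rough ANPR sketch: localize a plate-like rectangle, then OCR its characters."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bilateralFilter(gray, 11, 17, 17), 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        x, y, w, h = cv2.boundingRect(c)
        if 2.0 < w / float(h) < 6.0:      # plate-like aspect ratio (heuristic)
            plate = gray[y:y + h, x:x + w]
            _, plate = cv2.threshold(plate, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return pytesseract.image_to_string(plate, config="--psm 7").strip()
    return None
```

The recognized string would then be matched against the owner database described in the abstract.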

192 citations


Journal ArticleDOI
TL;DR: Quantitative comparisons of the proposed method with existing algorithms on a diverse set of 261 real-world photos demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.
Abstract: We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.

173 citations


Journal ArticleDOI
TL;DR: This article investigates and compiles the techniques most commonly used for smoothing or suppressing speckle noise in ultrasound images, and proposes using the tendencies observed in the study when filtering real images.
Abstract: This article investigates and compiles some of the techniques most commonly used for smoothing or suppressing speckle noise in ultrasound images. With this information, all the methods studied are compared in an experiment that uses quality metrics to test their performance and show the benefits each one can contribute. To test the methods, a synthetic, noise-free image of a kidney is created and then corrupted using simulations performed with the Field II program. In this way, the smoothing techniques can be compared using numeric metrics, taking the noise-free image as a reference. Since real ultrasound images are already corrupted by noise and real noise-free images do not exist, conventional metrics cannot be used to indicate the quality obtained by filtering. Nevertheless, we propose using the tendencies observed in our study when filtering real images.
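The evaluation protocol described (filter a noise-free reference corrupted with simulated speckle, then score the results with quality metrics against the reference) can be sketched as follows; the Field II simulation is replaced here by a simple multiplicative noise model and the two filters are generic stand-ins, so this only illustrates the protocol, not the paper's experiments.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import median, gaussian
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in "noise-free phantom": any reference image works for the protocol.
clean = img_as_float(data.camera())

# Simple multiplicative speckle model (the paper uses Field II simulations instead).
rng = np.random.default_rng(0)
noisy = np.clip(clean * (1.0 + 0.3 * rng.standard_normal(clean.shape)), 0.0, 1.0)

candidates = {
    "median":   median(noisy),
    "gaussian": gaussian(noisy, sigma=1.5),
}

for name, filtered in candidates.items():
    psnr = peak_signal_noise_ratio(clean, filtered, data_range=1.0)
    ssim = structural_similarity(clean, filtered, data_range=1.0)
    print(f"{name:8s}  PSNR = {psnr:5.2f} dB  SSIM = {ssim:.3f}")
```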

138 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This chapter describes methods for image restoration and identification; the improvement in signal-to-noise ratio is a measure that expresses the reduction of disagreement with the ideal image when comparing the distorted and restored images.
Abstract: Publisher Summary This chapter describes methods for image restoration and identification. Many types of motion blur can be distinguished, all of which are due to relative motion between the recording device and the scene. This motion can take the form of a translation, a rotation, a sudden change of scale, or some combination of these. The improvement in signal-to-noise ratio is basically a measure that expresses the reduction of disagreement with the ideal image when comparing the distorted and restored images. When applying restoration filters to real images for which the ideal image is not available, often only visual judgment of the restored image can be relied upon. For this reason it is desirable for a restoration filter to be somewhat tunable to the liking of the user. In many practical cases the actual restoration process has to be preceded by identification of the point spread function (PSF). A more common situation is that the blur is estimated from the observed image itself. The blur identification procedure starts by choosing a parametric model for the PSF; the parametric blur model then describes the PSF as a (small) set of coefficients within a given finite support. Within this support, the values of the PSF coefficients need to be estimated.
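A minimal sketch of the workflow described (choose a parametric PSF model, then restore) is given below; the one-parameter horizontal motion-blur PSF and the Wiener restoration are illustrative choices, not the chapter's specific identification procedure.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import wiener

def motion_psf(length=9):
    """Parametric PSF: horizontal motion blur described by a single length parameter."""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    return psf / psf.sum()

img = img_as_float(data.camera())
psf = motion_psf(9)
blurred = convolve2d(img, psf, mode="same", boundary="symm")
blurred += 0.005 * np.random.default_rng(0).standard_normal(blurred.shape)

# Wiener restoration with the (here, known) PSF; `balance` trades noise against sharpness.
restored = wiener(blurred, psf, balance=0.01)
```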

136 citations


Journal ArticleDOI
TL;DR: This work has created a toolbox that can generate 3D digital phantoms of specific cellular components along with their corresponding images degraded by specific optics and electronics, and evaluated the plausibility of the synthetic images, measured by their similarity to real image data.
Abstract: Image cytometry still faces the problem of the quality of cell image analysis results. Degradations caused by cell preparation, optics and electronics considerably affect most 2D and 3D cell image data acquired using optical microscopy. That is why image processing algorithms applied to these data typically offer imprecise and unreliable results. We have created a toolbox that can generate 3D digital phantoms of specific cellular components along with their corresponding images degraded by specific optics and electronics. The user can then apply image analysis methods to such simulated image data. The analysis results can be compared with ground truth derived from input object digital phantoms. In this way, image analysis methods can be compared to each other and their quality can be computed. We have also evaluated the plausibility of the synthetic images, measured by their similarity to real image data.

Journal ArticleDOI
TL;DR: New methods are proposed for extracting features in low-resolution images in order to develop efficient registration techniques; the sampling theory of signals with finite rate of innovation is considered, and it is shown that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration.
Abstract: The accurate registration of multiview images is of central importance in many advanced image processing applications. Image super-resolution, for example, is a typical application where the quality of the super-resolved image degrades as registration errors increase. Popular registration methods are often based on features extracted from the acquired images. The accuracy of the registration is in this case directly related to the number of extracted features and to the precision at which the features are located: images are best registered when many features are found with good precision. However, in low-resolution images, only a few features can be extracted, often with poor precision. By taking a sampling perspective, we propose in this paper new methods for extracting features in low-resolution images in order to develop efficient registration techniques. We consider, in particular, the sampling theory of signals with finite rate of innovation and show that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration. We also demonstrate through simulations that the sampling model which enables the use of finite rate of innovation principles is well suited for modeling the acquisition of images by a camera. Simulations of image registration and image super-resolution of artificially sampled images are first presented, analyzed and compared to traditional techniques. We finally present favorable experimental results of super-resolution of real images acquired by a digital camera available on the market.

Proceedings ArticleDOI
01 Sep 2009
TL;DR: An efficient algorithm is introduced that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images by identifying a set of new constraints and using them to eliminate the ambiguity of 3D pose reconstruction.
Abstract: This paper introduces an efficient algorithm that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images. This problem is challenging because 2D image constraints (e.g. 2D point correspondences) are often not sufficient to determine 3D poses of an articulated object. The key idea of this paper is to identify a set of new constraints and use them to eliminate the ambiguity of 3D pose reconstruction. We also develop an optimization process to simultaneously reconstruct both human poses and camera parameters from various forms of reconstruction constraints. We demonstrate the power and effectiveness of our system by evaluating the performance of the algorithm on both real and synthetic data. We show the algorithm can accurately reconstruct 3D poses and camera parameters from a wide variety of real images, including internet photos and key frames extracted from monocular video sequences.

Journal ArticleDOI
TL;DR: A novel modified FCM algorithm (FCM-AWA) is proposed for image segmentation in dental plaque quantification; results indicate that the FCM-AWA provides a quantitative, objective and efficient analysis of dental plaque and shows great promise.

Book ChapterDOI
18 Aug 2009
TL;DR: This work proposes a general variational framework for the problem of non-local image inpainting, from which several previous inpainting schemes can be derived, in addition to novel ones.
Abstract: Non-local methods for image denoising and inpainting have gained considerable attention in recent years. This is in part due to their superior performance on textured images, a known weakness of purely local methods. Local methods, on the other hand, have been shown to be very appropriate for recovering geometric structure such as image edges. The synthesis of both types of methods is a trend in current research. Variational analysis in particular is an appropriate tool for a unified treatment of local and non-local methods. In this work we propose a general variational framework for the problem of non-local image inpainting, from which several previous inpainting schemes can be derived, in addition to novel ones. We explicitly study some of these, relating them to previous work and showing results on synthetic and real images.

Patent
08 Aug 2009
TL;DR: A real image of a known object target is projected onto the image plane of an optical sensor, and the vehicle hitching distance and the offset from the optical axis (normal to the image plane origin) are related to the real image size and offset.
Abstract: A target unit mounted on the vehicle hitch ball contains a known object target, and an optical unit mounted on the trailer hitch socket contains an optical sensor. A real image of the known object target is projected onto the image plane of the optical sensor. The vehicle hitching distance and the offset from the optical axis, normal to the image plane origin, are related to the real image size and its offset from the image plane origin, respectively. Distance and offset are displayed to the driver in the form of remaining distance and relative steering commands.

Journal ArticleDOI
TL;DR: A new method is presented for segmenting closed contours and surfaces using a variant of the minimal path approach; the approach can also be used for finding an open curve given extra information as a stopping criterion, and it is applied to 3D data with promising results.
Abstract: In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically using a front propagation approach. We assume that the desired object has a closed boundary. This a priori knowledge of the topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain visited by the front yields a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal paths allows us to extract a closed contour from a 2D image. The approach can also be used for finding an open curve, given extra information as a stopping criterion. Detection of a variety of objects in real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image; we call this Geodesic Meshing. The proposed method is applied to 3D data with promising results.
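As a small illustration of the minimal-path building block only (not the paper's keypoint detection or stopping criterion), the following sketch extracts one minimal path between two hypothetical points on an edge-based cost image using scikit-image.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import sobel
from skimage.graph import route_through_array

img = img_as_float(data.coins())

# Cost map: travelling along strong edges is cheap, crossing flat regions is expensive.
cost = 1.0 / (sobel(img) + 1e-3)

# Minimal path between two hypothetical points chosen on the desired contour.
start, end = (40, 50), (200, 300)
path, total_cost = route_through_array(cost, start, end, fully_connected=True)
path = np.asarray(path)   # (row, col) samples along the geodesic
```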

Proceedings ArticleDOI
29 Jul 2009
TL;DR: This paper considers natural scene statistics and adopts multi-resolution decomposition methods to extract reliable features for QA in no-reference image and video blur assessment, and shows that the algorithm has high correlation with human judgment in assessing blur distortion of images.
Abstract: The increasing number of demanding consumer video applications, as exemplified by cell phone and other low-cost digital cameras, has boosted interest in no-reference objective image and video quality assessment (QA). In this paper, we focus on no-reference image and video blur assessment. There already exist a number of no-reference blur metrics, but most are based on evaluating the widths of intensity edges, which may not reflect real image quality in many circumstances. Instead, we consider natural scenes statistics and adopt multi-resolution decomposition methods to extract reliable features for QA. First, a probabilistic support vector machine (SVM) is applied as a rough image quality evaluator; then the detail image is used to refine and form the final blur metric. The algorithm is tested on the LIVE Image Quality Database; the results show the algorithm has high correlation with human judgment in assessing blur distortion of images.
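A hypothetical sketch of the general recipe (multi-resolution features fed to an SVM) is shown below; the wavelet-energy features and the training data are placeholders and do not reproduce the paper's natural-scene-statistics features or its use of the LIVE database.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def blur_features(img, wavelet="db2", levels=3):
    """Multi-resolution features: log-energy of each wavelet detail subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:            # (cH, cV, cD) tuple per level
        for band in detail:
            feats.append(np.log1p(np.mean(band ** 2)))
    return np.array(feats)

# Hypothetical training: X stacks feature vectors, y labels images sharp (0) / blurred (1).
# X = np.stack([blur_features(im) for im in training_images])
# clf = SVC(kernel="rbf", probability=True).fit(X, y)
# p_blur = clf.predict_proba(blur_features(test_img)[None, :])[0, 1]
```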

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work takes noisy images captured from different viewpoints as input, groups similar patches across the input images using depth estimation, and uses principal component analysis and tensor analysis to remove intensity-dependent noise in low-light conditions.
Abstract: We present a novel multi-view denoising algorithm. Our algorithm takes noisy images taken from different viewpoints as input and groups similar patches in the input images using depth estimation. We model intensity-dependent noise in low-light conditions and use principal component analysis and tensor analysis to remove such noise. The dimensionalities for both PCA and tensor analysis are automatically computed in a way that is adaptive to the complexity of image structures in the patches. Our method is based on a probabilistic formulation that marginalizes depth maps as hidden variables and therefore does not require perfect depth estimation. We validate our algorithm on both synthetic and real images with different content. Our algorithm compares favorably against several state-of-the-art denoising algorithms.
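A single-image simplification of the patch-grouping-plus-PCA step is sketched below; it omits the multi-view depth-based grouping, the tensor analysis, and the automatic dimensionality selection described in the abstract, and the patch size and number of components are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def pca_patch_denoise(noisy, patch_size=(7, 7), n_components=8):
    """Denoise by projecting every patch onto its leading principal components."""
    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=0)
    pca = PCA(n_components=n_components).fit(flat - mean)
    flat_hat = pca.inverse_transform(pca.transform(flat - mean)) + mean
    # Overlapping patch estimates are averaged back into an image.
    return reconstruct_from_patches_2d(flat_hat.reshape(patches.shape), noisy.shape)
```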

Journal ArticleDOI
TL;DR: Zernike moments, the magnitudes of a set of orthogonal complex moments of the image, are introduced into the NL-means filter; they yield many more pixels or patches with a high similarity measure and make the patch similarity translation-invariant and rotation-invariant.

Proceedings ArticleDOI
03 Jun 2009
TL;DR: This paper introduces a novel method for vehicle pose estimation and motion tracking using visual features that is capable of robustly tracking the vehicle pose in geographical coordinates over time, using image data as the only input.
Abstract: This paper introduces a novel method for vehicle pose estimation and motion tracking using visual features. The method combines ideas from research on visual odometry with a feature map that is automatically generated from aerial images into a Visual Navigation System. Given an initial pose estimate, e.g. from a GPS receiver, the system is capable of robustly tracking the vehicle pose in geographical coordinates over time, using image data as the only input. Experiments on real image data have shown that the precision of the position estimate with respect to the feature map typically lies within only several centimeters. This makes the algorithm interesting for a wide range of applications like navigation, path planning or lane keeping.

Proceedings ArticleDOI
20 Jun 2009
TL;DR: It is shown that the model parameters can be recovered by fitting the deformable model to real images of vehicles, and both the shape and appearance of parts that deform in the 2-d manifold of the vehicle surface are recovered.
Abstract: In traffic surveillance applications a good prior model of vehicle shape and appearance is becoming increasingly more important for tracking, shape recovery, and recognition from video. The usefulness of 2-d vehicle models is limited to a fixed viewing direction; 3-d models are nearly always more suitable. Existing 3-d vehicle models are either generic but far too simple to utilize high resolution imagery, or far too complex and limited to specific vehicle instances. This paper presents a deformable vehicle model that spans these two extremes. The model is constructed with a multi-resolution approach to fit various image resolutions. At each resolution, a small number of parameters controls the deformation to accurately represent a wide variety of passenger vehicles. The parameters control both 3-d shape and appearance of parts that deform in the 2-d manifold of the vehicle surface. These parts are regions representing windows, headlights, taillights, etc. The combination of part boundaries and surface occluding contours account for the most consistent edges observed in images of vehicles. It is shown that the model parameters can be recovered by fitting the deformable model to real images of vehicles.

Journal ArticleDOI
TL;DR: This paper introduces an adaptive polar transform (APT) technique and an innovative matching mechanism that serve as image processing tools for recovering the scale and rotation change of the registered image.
Abstract: Image registration is an essential step in many image processing applications that need visual information from multiple images for comparison, integration, or analysis. Recently, researchers have introduced image registration techniques using the log-polar transform (LPT) for its rotation and scale invariant properties. However, it suffers from nonuniform sampling which makes it not suitable for applications in which the registered images are altered or occluded. Inspired by LPT, this paper presents a new registration algorithm that addresses the problems of the conventional LPT while maintaining the robustness to scale and rotation. We introduce a novel adaptive polar transform (APT) technique that evenly and effectively samples the image in the Cartesian coordinates. Combining APT with an innovative projection transform along with a matching mechanism, the proposed method yields less computational load and more accurate registration than that of the conventional LPT. Translation between the registered images is recovered with the new search scheme using Gabor feature extraction to accelerate the localization procedure. Moreover an image comparison scheme is proposed for locating the area where the image pairs differ. Experiments on real images demonstrate the effectiveness and robustness of the proposed approach for registering images that are subjected to occlusion and alteration in addition to scale, rotation, and translation.
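For reference, the conventional log-polar registration that the paper improves upon can be sketched as follows: rotation and scale become translations in log-polar coordinates and are recovered by phase correlation. This illustrates the LPT baseline, not the proposed APT, and the sign of the recovered shift depends on which image is taken as the reference.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.registration import phase_cross_correlation
from skimage.transform import rescale, rotate, warp_polar

img = img_as_float(data.camera())
moved = rescale(rotate(img, 17), 1.2)       # rotated and scaled copy of the image

radius = min(img.shape) // 2
lp_ref = warp_polar(img, radius=radius, scaling="log")
lp_mov = warp_polar(moved, radius=radius, scaling="log", output_shape=lp_ref.shape)

# A shift along the rows encodes rotation; a shift along the columns encodes log-scale.
shift, error, _ = phase_cross_correlation(lp_ref, lp_mov)
rotation_deg = shift[0] * 360.0 / lp_ref.shape[0]
scale = np.exp(shift[1] * np.log(radius) / lp_ref.shape[1])   # up to sign convention
```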

Patent
26 Mar 2009
TL;DR: A mark which at least reflects or radiates invisible light of a predetermined wavelength other than visible light is added in a space of the real world, and a camera apparatus comprises image capturing means for capturing a real image in which an invisible-light image may be discriminated.
Abstract: A mark which at least reflects or radiates invisible light of a predetermined wavelength other than visible light is added in a space of the real world. A camera apparatus comprises image capturing means for capturing a real image in which an invisible-light image may be discriminated. An image processing apparatus comprises: mark discriminating means for discriminating at least one condition among the position of the image of the mark in the captured real image, the orientation of the mark, and the distance from the mark to the image capturing means; and overlaying condition determining means for determining, in correspondence with the discriminated condition, an overlaying condition which is at least one of an overlaying position, which is the position in the real image of the image of the virtual object overlaid on the captured real image, an orientation of the virtual object which the image of the virtual object indicates, and a distance of the image of the virtual object from the viewpoint of the viewer.

Patent
Hitoshi Hongo
03 Feb 2009
TL;DR: In this article, an image processing device sets the positions of four reference points on an offset correction image to be generated from the camera image and performs a coordinate conversion based on a homography matrix.
Abstract: While extracting four feature points from a camera image obtained from a camera installed in a vehicle, an image processing device sets the positions of four reference points on an offset correction image to be generated from the camera image and performs a coordinate conversion based on a homography matrix so that the coordinate values of the four feature points are converted to the coordinate values of the four reference points. The image processing device sets each of the coordinate values so that an image center line and a vehicle center line are matched with each other on the offset correction image. The image processing device determines whether or not an image lacking area in which image data based on the image data of the camera image is not present is included within the entire area of the generated offset correction image. If the image lacking area is included, the two reference points or the four reference points are symmetrically moved in the left and right directions and the homography matrix is recalculated according to the positions of the reference points after the movement.
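The core coordinate-conversion step, mapping four detected feature points onto four reference points with a homography, might look like the following OpenCV sketch; the point coordinates and the warped frame are placeholders, and the patent's image-lacking-area test and reference-point adjustment are not shown.

```python
import numpy as np
import cv2

# Four feature points detected in the camera image (placeholder coordinates).
feature_pts = np.float32([[120, 380], [520, 385], [560, 470], [80, 468]])
# Their target positions on the offset-corrected image.
reference_pts = np.float32([[150, 300], [490, 300], [490, 440], [150, 440]])

# Homography that maps the four feature points exactly onto the four reference points.
H = cv2.getPerspectiveTransform(feature_pts, reference_pts)

# The camera frame would then be warped into the offset-corrected view, e.g.:
# corrected = cv2.warpPerspective(camera_frame, H, (width, height))
```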

01 Jan 2009
TL;DR: This paper explores image segmentation using an active contour model to detect oil spills via a partial differential equation based level set method, which represents the spill surface as an implicit propagation interface and allows the front to propagate naturally with topological changes, significant protrusions and narrow regions.
Abstract: In this paper we explore image segmentation using an active contour model to detect oil spills. A partial differential equation based level set method, which represents the spill surface as an implicit propagation interface, is used. Starting from an initial estimate based on prior information, the level set method creates a set of speed functions to detect the position of the propagation interface. Specifically, the image intensity gradient and the curvature are used together to determine the speed and direction of the propagation. This allows the front interface to propagate naturally with topological changes, significant protrusions and narrow regions, giving rise to stable and smooth boundaries that discriminate oil spills from the surrounding water. The proposed method has been illustrated by experiments to detect oil spills in real images. Its advantages over traditional image segmentation approaches have also been demonstrated.
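A rough sketch of a gradient- and curvature-driven level-set segmentation in this spirit is given below, using scikit-image's morphological geodesic active contour (a recent scikit-image version assumed) as a stand-in for the paper's PDE-based speed functions; the test image, seed location, and parameters are placeholders.

```python
from skimage import data, img_as_float
from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

img = img_as_float(data.coins())            # stand-in for an oil-spill image

# Edge-stopping function: small values near strong intensity gradients.
gimage = inverse_gaussian_gradient(img)

# Initial front placed using prior information about the region of interest.
init = disk_level_set(img.shape, center=(150, 200), radius=60)

# Front propagation driven by the gradient map and curvature smoothing;
# balloon=1 pushes the front outward until it reaches strong edges.
seg = morphological_geodesic_active_contour(gimage, num_iter=200,
                                            init_level_set=init,
                                            smoothing=2, balloon=1,
                                            threshold=0.7)
```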

Journal ArticleDOI
TL;DR: A novel method is presented that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object by taking into account gray levels on the boundary of the digitization of the same object, assuming a model where pixel intensity is proportional to the coverage of a pixel.
Abstract: We present a novel method that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object by taking into account gray levels on the boundary of the digitization of the same object. Assuming a model where pixel intensity is proportional to the coverage of a pixel, we show that the presented method provides error-free measurements of the length of straight boundary segments in the case of nonquantized pixel values. For a more realistic situation, where pixel values are quantized, we derive optimal estimates that minimize the maximal estimation error. We show that the estimate converges toward a correct value as the number of gray levels tends toward infinity. The method is easy to implement; we provide the complete pseudocode. Since the method utilizes only a small neighborhood, it is very easy to parallelize. We evaluate the estimator on a set of concave and convex shapes with known perimeters, digitized at increasing resolution. In addition, we provide an example of applicability of the method on real images, by suggesting appropriate preprocessing steps and presenting results of a comparison of the suggested method with other local approaches.

Journal ArticleDOI
TL;DR: It is shown that the perspective shape of a circular landmark in the omni-directional image may be approximated by an ellipse using analytic formulas, with a good shape-fitting effect and fast computation speed for navigation guidance.

Patent
19 Feb 2009
TL;DR: In this article, a vehicle-peripheral image displaying system (a side view monitor system A1) comprises a side camera 1, a monitor 3 and an image processing controlling unit 2, wherein the image processing control unit 2 includes an image processor 43 configured to perform a viewpoint conversion of the actually shot camera image input from the side camera into a virtual camera image which is to be converted as if it is viewed from the driver's eye position.
Abstract: A vehicle-peripheral image displaying system (a side view monitor system A1) comprises a side camera 1, a monitor 3 and an image processing controlling unit 2, wherein the image processing controlling unit 2 includes an image processor 43 configured to perform a viewpoint conversion of the actually shot camera image input from the side camera 1 into a virtual camera image which is to be converted as if it is viewed from the driver's eye position, an image memory 44 configured to store a vehicle interior image which is previously shot from the driver's eye position as a vehicle interior image, and a superimposing circuit 46 configured to make the vehicle interior image translucent to form a translucent vehicle interior image, to perform an image composition such that the translucent vehicle interior image is superimposed on the virtual camera image, and to produce a composite image which represents the virtual camera image transparently through the translucent vehicle interior image.

Book ChapterDOI
01 Jan 2009
TL;DR: An edge detector based on the zero crossings of the continuous Laplacian produces closed edge contours if the image meets certain smoothness constraints, but edge strength is not considered, so even the slightest, most gradual intensity transition produces a zero crossing.
Abstract: Publisher Summary To use the gradient or the Laplacian approaches as the basis for practical image edge detectors, one must extend the process to two dimensions, adapt to the discrete case, and somehow deal with the difficulties presented by real images. Relative to 1D edges, edges in 2D images have the additional quality of direction. One usually wishes to find edges regardless of direction, but a directionally sensitive edge detector can be useful at times. Also, the discrete nature of digital images requires the use of an approximation to the derivative. Finally, there are a number of problems that can confound the edge detection process in real images. These include noise, crosstalk or interference between nearby edges, and inaccuracies resulting from the use of a discrete grid. False edges, missing edges, and errors in edge location and orientation are often the result. An edge detector based solely on the zero crossings of the continuous Laplacian produces closed edge contours if the image meets certain smoothness constraints. The contours are closed because edge strength is not considered, so even the slightest, most gradual intensity transition produces a zero crossing. In effect, the zero crossing contours define the boundaries that separate regions of nearly constant intensity in the original image.
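A minimal sketch of the zero-crossing detector discussed, with a gradient-strength test added to suppress the weak transitions mentioned above, might look like this; the smoothing scale and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float

img = img_as_float(data.camera())

# Laplacian of Gaussian: smooth first, then take the Laplacian.
log = ndimage.gaussian_laplace(img, sigma=2.0)

# Zero crossings: the sign of the LoG response changes between neighboring pixels.
zc = np.zeros_like(log, dtype=bool)
zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])

# Keep only crossings with enough local contrast, discarding the faint,
# gradual transitions that would otherwise close every contour.
strength = ndimage.gaussian_gradient_magnitude(img, sigma=2.0)
edges = zc & (strength > 0.02)
```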