
Showing papers on "Real image published in 2010"


Journal ArticleDOI
TL;DR: A novel region-based active contour model (ACM) with SBGFRLS has the property of selective local or global segmentation, and its level set function is more efficient to construct than the widely used signed distance function (SDF).

710 citations


Journal ArticleDOI
TL;DR: Comparisons with the well-known Chan-Vese (CV) model and the recent popular local binary fitting (LBF) model show that the proposed LCV model can segment images in fewer iterations and is less sensitive to the location of the initial contour and the selection of governing parameters.

558 citations


Journal ArticleDOI
TL;DR: A nonparametric regression method for denoising 3-D image sequences acquired via fluorescence microscopy and an original statistical patch-based framework for noise reduction and preservation of space-time discontinuities are presented.
Abstract: We present a nonparametric regression method for denoising 3-D image sequences acquired via fluorescence microscopy. The proposed method exploits the redundancy of the 3-D+time information to improve the signal-to-noise ratio of images corrupted by Poisson-Gaussian noise. A variance stabilization transform is first applied to the image-data to remove the dependence between the mean and variance of intensity values. This preprocessing requires the knowledge of parameters related to the acquisition system, also estimated in our approach. In a second step, we propose an original statistical patch-based framework for noise reduction and preservation of space-time discontinuities. In our study, discontinuities are related to small moving spots with high velocity observed in fluorescence video-microscopy. The idea is to minimize an objective nonlocal energy functional involving spatio-temporal image patches. The minimizer has a simple form and is defined as the weighted average of input data taken in spatially-varying neighborhoods. The size of each neighborhood is optimized to improve the performance of the pointwise estimator. The performance of the algorithm (which requires no motion estimation) is then evaluated on both synthetic and real image sequences using qualitative and quantitative criteria.
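
As a rough illustration of the two main steps, the Python sketch below applies a generalized Anscombe transform to stabilize Poisson-Gaussian noise and then computes a patch-similarity-weighted average over a small space-time neighborhood. This is not the authors' implementation: the gain/offset parameters, patch sizes, and bandwidth h are illustrative assumptions, and the adaptive neighborhood-size optimization and inverse transform are omitted.

```python
import numpy as np

def gen_anscombe(x, gain=1.0, sigma=0.0, offset=0.0):
    """Generalized Anscombe transform: approximately stabilizes the variance
    of Poisson-Gaussian data to 1 (acquisition parameters assumed known)."""
    y = gain * x + 0.375 * gain ** 2 + sigma ** 2 - gain * offset
    return (2.0 / gain) * np.sqrt(np.maximum(y, 0.0))

def patch_denoise(frames, t, i, j, half=3, p=2, h=0.5):
    """NL-means style estimate of pixel (t, i, j): a weighted average over a
    space-time neighborhood, with weights driven by patch similarity.
    Assumes (i, j) lies at least p pixels from the image border."""
    T, H, W = frames.shape
    ref = frames[t, i - p:i + p + 1, j - p:j + p + 1]     # reference patch
    num = den = 0.0
    for tt in range(max(t - 1, 0), min(t + 2, T)):        # +/- 1 frame in time
        for ii in range(max(i - half, p), min(i + half + 1, H - p)):
            for jj in range(max(j - half, p), min(j + half + 1, W - p)):
                cand = frames[tt, ii - p:ii + p + 1, jj - p:jj + p + 1]
                w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                num += w * frames[tt, ii, jj]
                den += w
    return num / den

# Example on a synthetic Poisson-noisy sequence (inverse transform omitted).
frames = gen_anscombe(np.random.poisson(20.0, size=(5, 32, 32)).astype(float))
print(patch_denoise(frames, t=2, i=16, j=16))
```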

299 citations


Journal ArticleDOI
TL;DR: A low-rank approximation to the interaction tensor uses a sum of factors, each of which is a three-way outer product; this allows efficient learning of transformations between larger image patches, demonstrated by learning optimal filter pairs from various synthetic and real image sequences.
Abstract: To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
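
The factored interaction is easy to state in code: each factor applies one filter to each image, the two responses are multiplied, and the products are projected onto the hidden units, replacing the cubic tensor with three weight matrices. The NumPy sketch below shows only this inference step, with randomly initialized weights; learning (e.g., contrastive divergence) is omitted and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_fac, n_hid = 64, 32, 20          # patch size, factors, hidden units

# Factor loadings: each factor pairs a filter on image x with one on image y.
Wx = rng.normal(scale=0.1, size=(n_pix, n_fac))
Wy = rng.normal(scale=0.1, size=(n_pix, n_fac))
Wh = rng.normal(scale=0.1, size=(n_hid, n_fac))
b = np.zeros(n_hid)

def hidden_probs(x, y):
    """P(h=1 | x, y) under the factored three-way model: the cubic
    interaction tensor is replaced by a sum of rank-1 factors."""
    fx = x @ Wx                 # responses of the x-filters
    fy = y @ Wy                 # responses of the y-filters
    act = (fx * fy) @ Wh.T + b  # factor responses multiply, then map to hiddens
    return 1.0 / (1.0 + np.exp(-act))

x = rng.normal(size=n_pix)      # first image patch (flattened)
y = rng.normal(size=n_pix)      # second image patch
print(hidden_probs(x, y))
```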

263 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper presents a new approach for multi-view object class detection that uses a part model to discriminatively learn the object appearance with spatial pyramids from a database of real images, and encodes the 3D geometry of the object class with a generative representation built from a database of synthetic models.
Abstract: This paper presents a new approach for multi-view object class detection. Appearance and geometry are treated as separate learning tasks with different training data. Our approach uses a part model which discriminatively learns the object appearance with spatial pyramids from a database of real images, and encodes the 3D geometry of the object class with a generative representation built from a database of synthetic models. The geometric information is linked to the 2D training data and allows an approximate 3D pose estimation to be performed for generic object classes. The pose estimation provides an efficient method to evaluate the likelihood of groups of 2D part detections with respect to a full 3D geometry model in order to disambiguate and prune 2D detections and to handle occlusions. In contrast to other methods, neither tedious manual part annotation of training images nor explicit appearance matching between synthetic and real training data is required, which results in high geometric fidelity and in increased flexibility. On the 3D Object Category datasets CAR and BICYCLE [15], the current state-of-the-art benchmark for 3D object detection, our approach outperforms previously published results for viewpoint estimation.

247 citations


Journal ArticleDOI
TL;DR: A novel method ICF (Identifying point correspondences by Correspondence Function) is proposed for rejecting mismatches from given putative point correspondences, and it is applicable to images of rigid objects or images of non-rigid objects with unknown deformation.
Abstract: A novel method ICF (Identifying point correspondences by Correspondence Function) is proposed for rejecting mismatches from given putative point correspondences. By analyzing the connotation of homography, we introduce a novel concept of correspondence function for two images of a general 3D scene, which captures the relationships between corresponding points by mapping a point in one image to its corresponding point in another. Since the correspondence functions are unknown in real applications, we also study how to estimate them from given putative correspondences, and propose an algorithm IECF (Iteratively Estimate Correspondence Function) based on a diagnostic technique and SVM. Then, the proposed ICF method is able to reject the mismatches by checking whether they are consistent with the estimated correspondence functions. Extensive experiments on real images demonstrate the excellent performance of our proposed method. In addition, the ICF is a general method for rejecting mismatches, and it is applicable to images of rigid objects or images of non-rigid objects with unknown deformation.
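
To convey the flavor of the approach, the sketch below fits smooth correspondence functions from putative matches and rejects points with large residuals. It substitutes scikit-learn's SVR and a simple residual-trimming loop for the paper's IECF algorithm; the kernel, C, threshold, and the synthetic data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def reject_mismatches(src, dst, n_iter=3, thresh=5.0):
    """Fit x -> x' and x -> y' regressors from putative matches, iteratively
    discarding points with large residuals (a crude stand-in for the paper's
    diagnostic IECF loop), then flag the surviving matches as inliers."""
    keep = np.ones(len(src), dtype=bool)
    for _ in range(n_iter):
        fx = SVR(kernel="rbf", C=100.0).fit(src[keep], dst[keep, 0])
        fy = SVR(kernel="rbf", C=100.0).fit(src[keep], dst[keep, 1])
        pred = np.stack([fx.predict(src), fy.predict(src)], axis=1)
        resid = np.linalg.norm(pred - dst, axis=1)
        keep = resid < thresh            # tighten the consensus set
    return keep

# Putative matches: 200 inliers on a smooth warp plus 50 random outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(250, 2))
dst = src + 10 * np.sin(src / 80.0)            # smooth non-rigid deformation
dst[200:] = rng.uniform(0, 500, size=(50, 2))  # mismatches
print(reject_mismatches(src, dst).sum(), "matches kept")
```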

221 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure is proposed, which makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case.
Abstract: Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure. We apply this model to two different algorithms for camera shake removal: the first one uses a single blurry image (blind deblurring), while the second one uses both a blurry image and a sharp but noisy image of the same scene. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on real images.
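
The geometric model is compact: a pure camera rotation R(t) moves the whole image by the homography H(t) = K R(t) K^-1, so the blurred image is a time-average of homography-warped copies of the sharp image. The OpenCV sketch below synthesizes such non-uniform blur under an assumed intrinsic matrix and an assumed one-parameter rotation trajectory; it illustrates the forward model only, not the paper's deblurring algorithms.

```python
import numpy as np
import cv2

def rotational_blur(img, K, angles, axis=(0.0, 0.0, 1.0)):
    """Forward model only: synthesize camera-shake blur as the time-average
    of the image warped by H = K R(t) K^-1, the homography induced by pure
    camera rotation (intrinsics and trajectory are assumed here)."""
    h, w = img.shape[:2]
    axis = np.asarray(axis, np.float64)
    acc = np.zeros((h, w), np.float64)
    for a in angles:
        rvec = (a * axis / np.linalg.norm(axis)).reshape(3, 1)
        R, _ = cv2.Rodrigues(rvec)       # rotation matrix from axis-angle
        H = K @ R @ np.linalg.inv(K)     # induced image homography
        acc += cv2.warpPerspective(img.astype(np.float64), H, (w, h))
    return (acc / len(angles)).astype(img.dtype)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
img = np.zeros((480, 640), np.uint8)
cv2.putText(img, "sharp", (180, 250), cv2.FONT_HERSHEY_SIMPLEX, 3, 255, 5)
blurred = rotational_blur(img, K, np.linspace(-0.01, 0.01, 15))  # z-axis shake
```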

189 citations


Journal ArticleDOI
TL;DR: A robust FFT-based approach to scale-invariant image registration that introduces the normalized gradient correlation; using image gradients to perform correlation maps the errors induced by outliers to a uniform distribution, for which the correlation features robust performance.
Abstract: We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation features robust performance. Exhaustive experimentation with real images showed that, unlike any other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6.
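
For orientation, here is the classical Fourier-Mellin skeleton the method builds on: rotation and scaling become translations of the log-polar resampled Fourier magnitude and are recovered by correlation, after which a second correlation recovers the residual translation. The sketch uses plain phase correlation in place of the paper's gray-level edge maps and normalized gradient correlation; the log-polar scaling constant and sign conventions are assumptions.

```python
import numpy as np
import cv2

def rotation_scale(a, b):
    """Fourier-Mellin step: |FFT| is translation-invariant, and rotation and
    scaling become shifts of its log-polar resampling, recovered here by
    plain phase correlation. Inputs: equal-size float64 grayscale images;
    sign conventions may need flipping depending on the OpenCV version."""
    h, w = a.shape
    win = cv2.createHanningWindow((w, h), cv2.CV_64F)
    Fa = np.fft.fftshift(np.abs(np.fft.fft2(a * win)))
    Fb = np.fft.fftshift(np.abs(np.fft.fft2(b * win)))
    m = w / np.log(w / 2.0)                      # assumed log-polar scale
    La = cv2.logPolar(Fa, (w / 2.0, h / 2.0), m, cv2.INTER_LINEAR)
    Lb = cv2.logPolar(Fb, (w / 2.0, h / 2.0), m, cv2.INTER_LINEAR)
    (dx, dy), _ = cv2.phaseCorrelate(La, Lb)
    return 360.0 * dy / h, np.exp(dx / m)        # rotation (deg), scale

# After undoing the estimated rotation/scale on b, a second
# cv2.phaseCorrelate(a, b_aligned) recovers the residual translation.
```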

149 citations


Patent
30 Apr 2010
TL;DR: An entertainment device for combining virtual images with real images captured by a video camera so as to generate augmented reality images is described.
Abstract: An entertainment device for combining virtual images with real images captured by a video camera so as to generate augmented reality images. The device comprises receiving means operable to receive a sequence of video images from the video camera via a communications link. The device further comprises detecting means operable to detect an augmented reality marker within the received video images, and processing means operable to generate a virtual image plane in dependence upon the detection of the augmented reality marker by the detecting means. The virtual image plane is arranged to be substantially coplanar with a real surface upon which the augmented reality marker is placed so that virtual images may be generated with respect to the real surface. The processing means is operable to generate the virtual image plane within the captured video images such that the virtual image plane is defined with respect to the detected augmented reality marker, and the virtual image plane has an area which is greater than an area which corresponds to the augmented reality marker.
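
A hedged sketch of the central geometric step: given the detected image corners of a square marker of known size, recover the marker pose and define a larger virtual plane coplanar with the surface the marker rests on. Marker detection itself is assumed done (e.g., by a fiducial-marker library), and the marker size, plane size, and intrinsics are placeholders.

```python
import numpy as np
import cv2

def virtual_plane(corners_px, K, marker_size=0.05, plane_size=0.5):
    """From the four detected image corners of a square marker of known side
    length, recover the marker pose with PnP and return the camera-frame
    corners of a larger virtual plane coplanar with the marker's surface."""
    s, S = marker_size / 2.0, plane_size / 2.0
    obj = np.array([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]], np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)
    plane = np.array([[-S, -S, 0], [S, -S, 0], [S, S, 0], [-S, S, 0]], np.float64)
    return (R @ plane.T).T + tvec.reshape(1, 3)  # plane corners, camera frame
```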

136 citations


Patent
Chi-Wei Chiu
21 Apr 2010
TL;DR: An imaging sub-system, a liquid crystal (LC) element, and a digital focus processor are provided, where the LC element is placed in the light path of the imaging sub-system and includes a periodically patterned electrode which is patterned according to a periodical modulation function.
Abstract: An imaging sub-system, a liquid crystal (LC) element, and a digital focus processor are provided. The LC element is placed in the light path of the imaging sub-system, functioning as the aperture of the imaging sub-system, and includes a periodically patterned electrode which is patterned according to a periodical modulation function and configured to blur an intermediate image captured by the imaging sub-system by applying a controllable voltage thereto. The digital focus processor is configured to deconvolute the periodical modulation function to remove the blur from the intermediate image and determine an all-in-focus real image.
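
The digital-focus step amounts to deconvolving a known, voltage-controlled modulation. As a minimal stand-in, the sketch below performs Wiener deconvolution with an assumed full-size, origin-centered PSF; the actual periodic modulation function and processor design come from the patent, not from this code.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Undo a known blur by Wiener deconvolution in the Fourier domain:
    J = IFFT( conj(H) * B / (|H|^2 + NSR) ). The PSF is assumed given as a
    full-size, center-aligned array derived from the (known) periodic
    modulation function; nsr is an assumed noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * B / (np.abs(H) ** 2 + nsr)))
```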

99 citations


Patent
29 Jun 2010
TL;DR: In this approach, a set of virtual cameras is defined for each sphere on a line joining the center of the sphere and the center of projection of the camera, where each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane.
Abstract: A single camera acquires an input image of a scene as observed in an array of spheres, wherein pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras is defined for each sphere on a line joining a center of the sphere and a center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce a virtual camera image comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image.

Journal ArticleDOI
TL;DR: This paper solves image inpainting problems by using two separate tight frame systems which can sparsely represent cartoons and textures, respectively, derives iterative algorithms to find their solutions, and proves their convergence.
Abstract: Real images usually have two layers, namely, cartoons (the piecewise smooth part of the image) and textures (the oscillating pattern part of the image). Both these two layers have sparse approximations under some tight frame systems such as framelet, translation invariant wavelet, curvelet, and local DCTs. In this paper, we solve image inpainting problems by using two separate tight frame systems which can sparsely represent cartoons and textures respectively. Different from existing schemes in the literature which are either analysis-based or synthesis-based sparsity priors, our minimization formulation balances these two priors. We also derive iterative algorithms to find their solutions and prove their convergence. Numerical simulation examples are given to demonstrate the applicability and usefulness of our proposed algorithms in image inpainting.
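
A toy version of the two-system idea, not the paper's algorithm or its convergence proofs: alternate a data-consistency step on the known pixels with sparsity shrinkage of a cartoon layer under a wavelet transform and a texture layer under a DCT. Here PyWavelets' Haar transform and a global DCT stand in for the framelet and local DCT systems, and the thresholds and iteration count are assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def shrink_wavelet(x, t):
    """Soft-threshold detail coefficients of a Haar wavelet transform
    (a crude stand-in for the framelet system of the cartoon layer)."""
    coeffs = pywt.wavedec2(x, "haar", level=3)
    out = [coeffs[0]] + [tuple(np.sign(c) * np.maximum(np.abs(c) - t, 0)
                               for c in d) for d in coeffs[1:]]
    return pywt.waverec2(out, "haar")

def shrink_dct(x, t):
    """Soft-threshold a global DCT (stand-in for the local DCTs that
    sparsify the texture layer)."""
    c = dctn(x, norm="ortho")
    return idctn(np.sign(c) * np.maximum(np.abs(c) - t, 0), norm="ortho")

def inpaint(f, mask, n_iter=100, t=0.02):
    """Alternate data consistency on the known pixels (mask == 1) with
    shrinkage of the cartoon and texture layers; image sides should be
    multiples of 8 so the Haar reconstruction keeps the same shape."""
    u, v = f * mask, np.zeros_like(f)     # cartoon and texture layers
    for _ in range(n_iter):
        r = mask * (f - (u + v))          # residual on known pixels only
        u = shrink_wavelet(u + r, t)
        v = shrink_dct(v + r, t)
    return u + v
```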

Journal ArticleDOI
TL;DR: Wang et al. present a novel method of automated concrete column detection from visual data that combines columns' boundary information with their color and texture cues, and can be used to facilitate many construction and maintenance applications.
Abstract: The automated detection of structural elements (e.g., columns and beams) from visual data can be used to facilitate many construction and maintenance applications. Research in this area is still at an early stage. The existing methods rely solely on color and texture information, which makes them unable to identify individual structural elements when these elements are connected to one another and made of the same material. The paper presents a novel method of automated concrete column detection from visual data. The method overcomes this limitation by combining columns' boundary information with their color and texture cues. It starts by recognizing long vertical lines in an image/video frame through edge detection and the Hough transform. The bounding rectangle for each pair of lines is then constructed. When the rectangle resembles the shape of a column and the color and texture contained between the pair of lines match one of the concrete samples in the knowledge base, a concrete column surface is assumed to be located. In this way, concrete columns in images/videos are detected. The method was tested using real images/videos, and the results were compared with manual detection to indicate the method's validity.
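
The boundary-finding portion of the pipeline translates directly into OpenCV calls, sketched below: edge detection, probabilistic Hough lines, a near-vertical filter, and pairing of lines into candidate rectangles. The color/texture match against a knowledge base of concrete samples is omitted, and all thresholds are illustrative.

```python
import numpy as np
import cv2

def candidate_columns(img, max_tilt_deg=10, min_len=150):
    """Find long near-vertical segments (Canny + probabilistic Hough) and
    pair them into candidate column rectangles (x, y, w, h). The color and
    texture check against concrete samples is left out of this sketch."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    verticals = []
    for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
        ang = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if abs(ang - 90) < max_tilt_deg:           # keep near-vertical lines
            verticals.append((min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)))
    rects = []
    for i in range(len(verticals)):
        for j in range(i + 1, len(verticals)):
            left, right = sorted((verticals[i], verticals[j]), key=lambda l: l[0])
            w = right[0] - left[0]
            h = max(left[3], right[3]) - min(left[1], right[1])
            if 0 < w < h:                          # column-like aspect ratio
                rects.append((left[0], min(left[1], right[1]), w, h))
    return rects
```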

Proceedings ArticleDOI
13 Jun 2010
TL;DR: The basic idea is to use tentative point correspondences, which can be easily obtained by keypoint matching methods, to significantly improve line matching performance, even when the point correspondences are severely contaminated by outliers.
Abstract: A novel method for line matching is proposed. The basic idea is to use tentative point correspondences, which can be easily obtained by keypoint matching methods, to significantly improve line matching performance, even when the point correspondences are severely contaminated by outliers. When matching a pair of image lines, a group of corresponding points that may be coplanar with these lines in 3D space is first obtained from all corresponding image points in the local neighborhoods of these lines. Then, given such a group of corresponding points, the similarity between this pair of lines is calculated based on an affine invariant from one line and two points. The similarity is defined on the basis of a median statistic in order to handle the problem of inevitable incorrect correspondences in the group of point correspondences. Furthermore, the relationship of rotation between the reference and query images is estimated from all corresponding points to filter out those pairs of lines that obviously cannot be matches, hence speeding up the matching process as well as further improving its robustness. Extensive experiments on real images demonstrate the good performance of the proposed method as well as its superiority to the state-of-the-art methods.
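
One concretely described sub-step is the rotation filter: estimate the global rotation between the two images from all corresponding points and discard line pairs inconsistent with it. The sketch below estimates that rotation as the median angle change of vectors joining random pairs of matched points; the pair-sampling scheme and tolerance are assumptions, and the affine-invariant line similarity itself is not reproduced here.

```python
import numpy as np

def estimate_rotation(p, q, n_samples=2000, seed=0):
    """Median rotation between two images from matched points p -> q: for
    random point pairs, compare the direction of the vector joining them in
    each image. The median makes the estimate tolerant to outliers."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(p), n_samples)
    j = rng.integers(0, len(p), n_samples)
    ok = i != j
    a1 = np.arctan2(*(p[j[ok]] - p[i[ok]]).T[::-1])  # angles in image 1
    a2 = np.arctan2(*(q[j[ok]] - q[i[ok]]).T[::-1])  # angles in image 2
    d = np.angle(np.exp(1j * (a2 - a1)))             # wrap to (-pi, pi]
    return np.median(d)

def plausible_line_pair(theta1, theta2, rot, tol=np.radians(15)):
    """Keep a line pair only if its direction change matches the global
    rotation within a tolerance (the paper's pre-filtering idea)."""
    d = np.angle(np.exp(1j * (theta2 - theta1 - rot)))
    return abs(d) < tol
```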

Patent
14 Sep 2010
TL;DR: An object is first located using a low-resolution camera; a second camera is then directed at the object's location using a steerable mirror assembly to capture a high-resolution image where the object is thought to be, based on the image acquired by the wide-angle camera.
Abstract: Objects of interest are detected and identified using multiple cameras having varying resolution and imaging parameters. An object is first located using a low-resolution camera. A second camera (or lens) is then directed at the object's location using a steerable mirror assembly to capture a high-resolution image at a location where the object is thought to be, based on the image acquired by the wide-angle camera. Various image processing algorithms may be applied to confirm the presence of the object in the telephoto image. If an object is detected and the image is of sufficiently high quality, detailed facial, alpha-numeric, or other pattern recognition techniques may be applied to the image.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel method to recover a sharp image from a pair of motion blurred and flash images, consecutively captured using a hand-held camera, is proposed, leading to an accurate blur kernel and a reconstructed image with fine image details.
Abstract: Motion blur due to camera shake is an annoying yet common problem in low-light photography. In this paper, we propose a novel method to recover a sharp image from a pair of motion blurred and flash images, consecutively captured using a hand-held camera. We first introduce a robust flash gradient constraint by exploiting the correlation between a sharp image and its corresponding flash image. Then we formulate our flash deblurring as solving a maximum-a-posteriori problem under the flash gradient constraint. We solve the problem by performing kernel estimation and non-blind deconvolution iteratively, leading to an accurate blur kernel and a reconstructed image with fine image details. Experiments on both synthetic and real images show the superiority of our method compared with existing methods.
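
The non-blind step under the flash gradient constraint has a closed form if one simplifies the MAP objective to a quadratic, min_J ||k * J - B||^2 + lam * ||grad J - grad F||^2, where F is the flash image. The sketch below solves this in the Fourier domain; it assumes the kernel is already estimated (the paper's iterative kernel estimation is omitted) and lam is an assumed weight.

```python
import numpy as np

def flash_constrained_deconv(blurred, flash, kernel, lam=0.05):
    """Closed-form Fourier solution of
        min_J ||k * J - B||^2 + lam * ||grad J - grad F||^2,
    a simplified quadratic stand-in for the paper's MAP step: the flash
    image F supplies the gradient prior, the kernel k is assumed known."""
    h, w = blurred.shape
    kpad = np.zeros((h, w)); kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center at origin
    K = np.fft.fft2(kpad)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(h, w))          # forward diffs
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(h, w))
    G = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    B, F = np.fft.fft2(blurred), np.fft.fft2(flash)
    return np.real(np.fft.ifft2((np.conj(K) * B + lam * G * F) /
                                (np.abs(K) ** 2 + lam * G)))
```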

Patent
05 Mar 2010
TL;DR: A signal control unit (120) receives an input image signal and converts it to a signal for displaying each of a right eye image and a left eye image at least two times continuously.
Abstract: An image display apparatus (100) of the present invention includes a signal control unit (120) for receiving an input of an image signal and converting it to a signal for displaying each of a right eye image and a left eye image at least two times continuously; and a display panel (132), receiving the signal converted by the signal control unit (120), for alternately displaying the right eye image continuing two or more times and the left eye image continuing two or more times.

Proceedings ArticleDOI
Yan Wang, Bo Wu
06 Dec 2010
TL;DR: An improved single-image dehazing algorithm based on atmospheric-scattering physics models is introduced; it applies the local dark channel prior on a selected region to estimate the atmospheric light, obtaining a more accurate result.
Abstract: Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Haze removal from a single image of a weather-degraded scene remains a challenging task, because the haze is dependent on the unknown depth information. In this paper, we introduce an improved single-image dehazing algorithm based on the atmospheric-scattering physics model. We apply the local dark channel prior on a selected region to estimate the atmospheric light and obtain a more accurate result. Experiments on real images validate our approach.
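
For context, the baseline dark-channel pipeline that the paper refines looks as follows in Python/OpenCV: compute the dark channel with a min filter, estimate the atmospheric light from the brightest dark-channel pixels, derive the transmission map, and invert the scattering model. The paper's region selection for the atmospheric-light estimate is not reproduced; omega, t0, and the patch size are the customary defaults, assumed here.

```python
import numpy as np
import cv2

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Single-image dehazing with the dark channel prior: estimate the
    atmospheric light A from the brightest dark-channel pixels, derive the
    transmission t = 1 - omega * dark(I / A), then invert the scattering
    model J = (I - A) / max(t, t0) + A. Expects a BGR uint8 image."""
    I = img.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)          # dark channel
    # Atmospheric light: mean color of the 0.1% brightest dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```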

Journal ArticleDOI
TL;DR: The method uses an active curve objective functional with two terms: an original term that evaluates the deviation of the mapped image data within each segmentation region from the piecewise constant model, and a classic length regularization term for smooth region boundaries; its effectiveness is verified experimentally.
Abstract: This study investigates level set multiphase image segmentation by kernel mapping and piecewise constant modeling of the image data thereof. A kernel function maps implicitly the original data into data of a higher dimension so that the piecewise constant model becomes applicable. This leads to a flexible and effective alternative to complex modeling of the image data. The method uses an active curve objective functional with two terms: an original term which evaluates the deviation of the mapped image data within each segmentation region from the piecewise constant model and a classic length regularization term for smooth region boundaries. Functional minimization is carried out by iterations of two consecutive steps: 1) minimization with respect to the segmentation by curve evolution via Euler-Lagrange descent equations and 2) minimization with respect to the region parameters via fixed point iterations. Using a common kernel function, this step amounts to a mean shift parameter update. We verified the effectiveness of the method by a quantitative and comparative performance evaluation over a large number of experiments on synthetic images, as well as on a variety of real images, such as medical, satellite, and natural images, and on motion maps.

Book ChapterDOI
29 Nov 2010
TL;DR: The newly proposed method, based on the region-scalable model, can draw upon intensity information in local regions at a controllable scale, so that it can segment images with intensity inhomogeneity.
Abstract: In this paper, we incorporate the global convex segmentation method and the split Bregman technique into the region-scalable fitting energy model. The newly proposed method, based on the region-scalable model, can draw upon intensity information in local regions at a controllable scale, so that it can segment images with intensity inhomogeneity. Furthermore, with the application of the global convex segmentation method and the split Bregman technique, the method is very robust and efficient. By applying a non-negative edge detector function in the proposed method, the algorithm can detect the boundaries more easily and achieve results that are very similar to those obtained through the classical geodesic active contour model. Experimental results for synthetic and real images have shown the robustness and efficiency of our method and also demonstrated the desirable advantages of the proposed method.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: An approach that combines optical flow and image denoising algorithms for HDR imaging, which enables capturing sharp HDR images using handheld cameras for complex scenes with large depth variation.
Abstract: New cameras such as the Canon EOS 7D and Pointgrey Grasshopper have 14-bit sensors. We present a theoretical analysis and a practical approach that exploit these new cameras with high-resolution quantization for reliable HDR imaging from a moving camera. Specifically, we propose a unified probabilistic formulation that allows us to analytically compare two HDR imaging alternatives: (1) deblurring a single blurry but clean image and (2) denoising a sequence of sharp but noisy images. By analyzing the uncertainty in the estimation of the HDR image, we conclude that multi-image denoising offers a more reliable solution. Our theoretical analysis assumes translational motion and spatially-invariant blur. For practice, we propose an approach that combines optical flow and image denoising algorithms for HDR imaging, which enables capturing sharp HDR images using handheld cameras for complex scenes with large depth variation. Quantitative evaluation on both synthetic and real images is presented.
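
The practical recipe reduces to flow-based alignment followed by averaging. The sketch below uses Farneback optical flow and cv2.remap on an 8-bit grayscale burst; exposure normalization, the probabilistic weighting, and HDR radiance estimation from the paper are all omitted, so this shows only the align-and-average core.

```python
import numpy as np
import cv2

def align_and_average(frames, ref_idx=0):
    """Denoise a burst by dense-flow alignment and averaging: warp every
    frame onto the reference with Farneback optical flow, then average.
    Expects a list of equal-size 8-bit grayscale frames."""
    ref = frames[ref_idx]
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    acc = ref.astype(np.float64)
    for k, f in enumerate(frames):
        if k == ref_idx:
            continue
        flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Sampling f at (x + flow) pulls it back into the reference frame.
        warped = cv2.remap(f, gx + flow[..., 0], gy + flow[..., 1],
                           cv2.INTER_LINEAR)
        acc += warped
    return (acc / len(frames)).astype(ref.dtype)
```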

Patent
27 Aug 2010
TL;DR: A spectacles-type image display device comprises an image output unit for outputting image light of images to be displayed and a reflection unit disposed in a field of view of at least one eyeball of a viewer.
Abstract: A spectacles-type image display device comprises an image output unit for outputting image light of images to be displayed and a reflection unit disposed in a field of view of at least one eyeball of a viewer. The reflection unit is adapted to reflect the image light output from the image output unit toward the eyeball of the viewer so that the viewer can see virtual images of the images. The minimum value of a width of a projection cross-section of the reflection unit in an output direction of the image light to the eyeball is smaller than the dark-adapted pupil diameter of a human and larger than the light-adapted pupil diameter of a human.

Book ChapterDOI
05 Sep 2010
TL;DR: This work proposes to generalize the classical photoconsistency-weighted minimal surface approach by means of an anisotropic metric that allows a specified surface orientation to be integrated into the optimization process.
Abstract: In this work the weighted minimal surface model traditionally used in multiview stereo is revisited. We propose to generalize the classical photoconsistency-weighted minimal surface approach by means of an anisotropic metric which allows a specified surface orientation to be integrated into the optimization process. In contrast to the conventional isotropic case, where all spatial directions are treated equally, the anisotropic metric adaptively weights the regularization along different directions so as to favor certain surface orientations over others. We show that the proposed generalization preserves all properties and globality guarantees of continuous convex relaxation methods. We make use of a recently introduced efficient primal-dual algorithm to solve the arising saddle point problem. In multiple experiments on real image sequences we demonstrate that the proposed anisotropic generalization makes it possible to overcome oversmoothing of small-scale surface details, giving rise to more precise reconstructions.

Journal ArticleDOI
Wei Zhang, Xiaochun Cao, Yanling Qu, Yuexian Hou, Handong Zhao, Chenyang Zhang
TL;DR: An automatic fake region detection method based on the planar homography constraint, and an automatic extraction method using graph cut with online feature/parameter selection are proposed.
Abstract: With the advancement of photo and video editing tools, it has become fairly easy to tamper with photos and videos. One common way is to insert visually plausible composites into target images and videos. In this paper, we propose an automatic fake region detection method based on the planar homography constraint, and an automatic extraction method using graph cut with online feature/parameter selection. Two steps are taken in our method: 1) the targeting step, and 2) the segmentation step. First, the fake region is located roughly by enforcing the planar homography constraint. Second, the fake object is segmented via graph cut with the initialization given by the targeting step. To achieve an automatic segmentation, the optimal features and parameters for graph cut are dynamically selected via the proposed online feature/parameter selection. Performance of this method is evaluated on both semisimulated and real images. Our method works efficiently on images as long as there are regions satisfying the planar homography constraint, including image pairs captured by the approximately cocentered cameras, image pairs photographing planar or distant scenes, and a single image with duplications.
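
The targeting step can be sketched with standard OpenCV machinery: match keypoints between the two views, fit a planar homography with RANSAC, and treat correspondences that violate it as candidate fake-region evidence. ORB features and the reprojection threshold are stand-in choices, and the graph-cut extraction with online feature/parameter selection is not shown.

```python
import numpy as np
import cv2

def homography_outliers(img1, img2, reproj_thresh=3.0):
    """Targeting step in miniature: match keypoints between two views, fit a
    planar homography with RANSAC, and return the matched points in img1
    that violate it -- candidate tampered regions cluster among these."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    H, inlier_mask = cv2.findHomography(p1, p2, cv2.RANSAC, reproj_thresh)
    return p1[inlier_mask.ravel() == 0]   # points inconsistent with H
```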

Journal ArticleDOI
TL;DR: Test results demonstrate that the ring artifacts can be more effectively suppressed using the proposed iterative center weighted median filter as compared to other ring removal techniques reported in the literature.

Book ChapterDOI
29 Nov 2010
TL;DR: This paper presents a novel method to quickly, accurately and simultaneously estimate three orthogonal vanishing points (TOVPs) and focal length from single images; it decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, which makes the method much faster and more robust than previous methods without losing accuracy.
Abstract: For images taken in man-made scenes, vanishing points and the focal length of the camera play important roles in scene understanding. In this paper, we present a novel method to quickly, accurately and simultaneously estimate three orthogonal vanishing points (TOVPs) and focal length from single images. Our method is based on the following important observations: If we establish a polar coordinate system on the image plane whose origin is at the image center, angle coordinates of vanishing points can be robustly estimated by seeking peaks in a histogram. From the detected angle coordinates, altitudes of a triangle formed by TOVPs are determined. Novel constraints on both vanishing points and focal length could be obtained from the three altitudes. By using the constraints, radial coordinates of TOVPs and focal length can be estimated simultaneously. Our method decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, which makes our method much faster and more robust than previous methods without losing accuracy. Extensive experiments on real images have been done to test the feasibility and correctness of our method.

Journal ArticleDOI
TL;DR: The distortion centre of an equidistant fish-eye camera can be estimated by extracting the vanishing points using an equation that describes the projection of a straight line; it is also demonstrated how the shape of a projected straight line can be accurately described by arcs of circles on the distorted image plane.

Journal ArticleDOI
TL;DR: A new method that combines the Gamma distribution with the ISODATA technique is explained; it has two phases, splitting using the Gamma distribution and then merging, both based on predefined parameters.
Abstract: Image segmentation is a fundamental step in many applications of image processing. Many image segmentation techniques exist, based on different methods such as classification-based methods, edge-based methods, region-based methods, and hybrid methods. The principal approach to segmentation is based on thresholding (classification), which is related to the threshold estimation problem. The ISODATA (Iterative Self-Organizing Data Analysis Technique) method is one of the classification-based methods in image segmentation. We assume that the data in images is modeled by a Gamma distribution. The objective of this paper is to explain a new method that combines the Gamma distribution with the ISODATA technique. The algorithm has two phases: splitting using the Gamma distribution, then merging, both done based on some predefined parameters. Experimental results showed good segmentation for artificial and real images.
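
To make the combination concrete, here is a toy two-class reduction: iterate an ISODATA-style threshold update, but model each class with a Gamma distribution fitted by the method of moments and place the threshold where the two weighted densities cross. The paper's multi-phase split-and-merge with predefined parameters is richer than this sketch, and the iteration and grid settings below are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def gamma_isodata_threshold(pixels, n_iter=50):
    """ISODATA-style threshold iteration with a Gamma model: split at T,
    fit a Gamma distribution to each class by the method of moments, and
    move T to the nearest crossing of the two class-weighted densities."""
    T = pixels.mean()
    xs = np.linspace(pixels.min() + 1e-6, pixels.max(), 512)
    for _ in range(n_iter):
        lo, hi = pixels[pixels <= T], pixels[pixels > T]
        if len(lo) < 2 or len(hi) < 2:
            break
        pdfs = []
        for cls in (lo, hi):
            k = cls.mean() ** 2 / cls.var()         # moment estimates
            theta = cls.var() / cls.mean()
            pdfs.append(len(cls) * gamma.pdf(xs, a=k, scale=theta))
        diff = pdfs[0] - pdfs[1]
        cross = np.where(np.diff(np.sign(diff)))[0]  # density crossings
        if len(cross) == 0:
            break
        T_new = xs[cross[np.argmin(np.abs(xs[cross] - T))]]
        if abs(T_new - T) < 1e-3:                    # converged
            break
        T = T_new
    return T
```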

Journal ArticleDOI
TL;DR: Experiments reported here suggest that, although some junctions in real images are locally defined and can be detected with simple mechanisms, a substantial fraction necessitate the use of more complex and global processes, raising the possibility that junctions in such cases may not be detected prior to scene interpretation.
Abstract: Junctions, formed at the intersection of image contours, are thought to play an important and early role in vision. The interest in junctions can be attributed in part to the notion that they are local image features that are easy to detect but that nonetheless provide valuable information about important events in the world, such as occlusion and transparency. Here I test the notion that there are locally defined junctions in real images that might be detected with simple, early visual mechanisms. Human observers were used as a tool to measure the visual information available in local regions of real images. One set of observers was made to label all the points in a set of real images where one edge occluded another. A second set of observers was presented with variable-size circular subregions of these images, and was asked to judge whether the regions were centered on an occlusion point. This task is easy if junctions are visible, but I found performance to be poor for small regions, not approaching ceiling levels until observers were given fairly large (approximately 50 pixels in diameter) regions over which to make the judgment. Control experiments ruled out the possibility that the effects are just due to junctions at multiple scales. Experiments reported here suggest that, although some junctions in real images are locally defined and can be detected with simple mechanisms, a substantial fraction necessitate the use of more complex and global processes. This raises the possibility that junctions in such cases may not be detected prior to scene interpretation.

Patent
Jin-Wook Kwon, Mun-Kue Park, Jung-Kee Lee, Seong-taek Hwang, Mu-Sik Kwon, Joung-Min Seo
28 Oct 2010
TL;DR: In this article, a mobile terminal for providing a blackboard function to an image projected through a projector is described, which includes a projector for projecting an image on a screen, a camera for photographing the projected image and transferring the photographed image to a processor.
Abstract: Disclosed is a mobile terminal for providing a blackboard function to an image projected through a projector, which includes a projector for projecting an image on a screen and a camera for photographing the image projected through the projector and transferring the photographed image to an image processor. The image processor processes the image transferred from the camera and transfers the processed image to a controller. The controller controls the operation of the mobile terminal, including recognizing an effective region of the projected image where infrared rays are detected from the photographed image, so as to recognize an indication position of an indication device. The indication device generates infrared rays in the projected image, and when the infrared rays are detected, a preset mark is displayed at the position of the detected infrared rays by using the projector.