Showing papers on "Distance transform published in 2012"


Journal ArticleDOI
TL;DR: In this paper, linear-time algorithms are described for solving a class of problems that involve transforming a cost function on a grid using spatial information; these problems generalize classical distance transforms of binary images, with the binary image replaced by an arbitrary function on the grid.
Abstract: We describe linear-time algorithms for solving a class of problems that involve transforming a cost function on a grid using spatial information. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary function on a grid. Alternatively they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A consequence of our techniques is a simple and fast method for computing the Euclidean distance transform of a binary image. Our algorithms are also applicable to Viterbi decoding, belief propagation, and optimal control.
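A minimal sketch of the one-dimensional building block behind this kind of linear-time transform, assuming the standard lower-envelope-of-parabolas formulation; the 2D squared Euclidean distance transform then follows by running it along every row and then every column. Function and variable names here are illustrative, not the authors' reference code.

```python
import numpy as np

def dt1d(f):
    """1D distance transform of a sampled function:
    d[p] = min_q ((p - q)**2 + f[q]), computed in O(n) by maintaining
    the lower envelope of the parabolas rooted at each sample q."""
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)   # parabola roots on the lower envelope
    z = np.empty(n + 1)          # breakpoints between consecutive parabolas
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        # intersection of the parabola at q with the rightmost envelope parabola
        s = ((f[q] + q * q) - (f[v[k]] + v[k] ** 2)) / (2.0 * (q - v[k]))
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] ** 2)) / (2.0 * (q - v[k]))
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for q in range(n):           # read the lower envelope back out
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d
```

For the binary-image special case, initialize f to 0 on feature pixels and to a large finite constant (rather than infinity, to avoid NaNs in the envelope intersections) elsewhere.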

925 citations


Journal ArticleDOI
TL;DR: This work presents a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper.
Abstract: We present a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a three-dimensional distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sub...
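A rough, hedged sketch of the orientation-augmented distance transform that underlies this kind of chamfer matching. It is a brute-force stand-in only: the orientation quantization, the penalty weight `lam`, the helper name `directional_dt`, and the per-bin combination rule are assumptions, not the paper's algorithm (which uses line-segment approximations and a much faster construction). Here `edge_map` is a boolean edge image and `edge_ori` holds edge orientations in [0, pi).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directional_dt(edge_map, edge_ori, n_bins=8, lam=5.0):
    """Brute-force 3D (x, y, orientation) distance transform: for every
    orientation bin, the cost at a pixel is the spatial distance to the
    nearest edge pixel plus lam times the circular bin difference,
    minimised over all source bins."""
    H, W = edge_map.shape
    d_spatial = np.empty((n_bins, H, W))
    for b in range(n_bins):
        lo, hi = b * np.pi / n_bins, (b + 1) * np.pi / n_bins
        in_bin = edge_map & (edge_ori >= lo) & (edge_ori < hi)
        # distance to the nearest edge pixel of this orientation bin
        d_spatial[b] = (distance_transform_edt(~in_bin)
                        if in_bin.any() else np.hypot(H, W))
    dt3 = np.empty_like(d_spatial)
    bins = np.arange(n_bins)
    for b in range(n_bins):
        circ = np.minimum(np.abs(bins - b), n_bins - np.abs(bins - b))
        dt3[b] = (d_spatial + lam * circ[:, None, None]).min(axis=0)
    return dt3
```

Matching a template then amounts to summing `dt3[bin, y, x]` over its edge points at their hypothesized positions and orientation bins.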

190 citations


Proceedings ArticleDOI
29 Oct 2012
TL;DR: Quantitative results on synthetic depth sequences show the proposed scheme can track the fingertips quite accurately, and its capabilities are further demonstrated through a real-life human-computer interaction application.
Abstract: We present a vision-based approach for robust 3D fingertip and palm tracking on depth images using a single Kinect sensor. First the hand is segmented in the depth images by applying depth and morphological constraints. The palm is located by performing distance transform to the hand contour and tracked with a Kalman filter. The fingertips are detected by combining three depth-based features and tracked with a particle filter over successive frames. Quantitative results on synthetic depth sequences show the proposed scheme can track the fingertips quite accurately. Besides, its capabilities are further demonstrated through a real-life human-computer interaction application.
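The palm-localization step described above can be illustrated in a few lines of OpenCV. This is a sketch of that single step only (the Kalman filtering of the palm and the particle filtering of the fingertips are not shown), and the helper name and mask conventions are assumptions.

```python
import cv2
import numpy as np

def locate_palm(hand_mask):
    """Palm centre as the interior point farthest from the hand contour:
    the maximum of the distance transform of the segmented hand mask gives
    the centre, and the maximum value approximates the palm radius."""
    dist = cv2.distanceTransform(hand_mask.astype(np.uint8), cv2.DIST_L2, 5)
    _, radius, _, centre = cv2.minMaxLoc(dist)   # max value and its location
    return centre, radius                        # centre is (x, y) in pixels
```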

85 citations


Patent
James D. Lynch
16 Oct 2012
TL;DR: In this article, the authors describe a three-dimensional routing system based on a viewer perspective from a memory, where the image data is correlated with a depth map generated from an optical distancing system and correlated with route data calculated from an origin point to a destination point using a geographical database and routing algorithm.
Abstract: One or more systems, devices, and/or methods for three dimensional routing are disclosed. For example, one embodiment includes receiving image data selected based on a viewer perspective from a memory. The image data is correlated with a depthmap generated from an optical distancing system and correlated with route data calculated from an origin point to a destination point using a geographical database and a routing algorithm. The controller compares a first distance, from the viewer perspective to a point correlated with the route data, to a second distance, derived from the depth map at the point. If the comparison indicates that the first distance is closer to the viewer perspective than the second distance, the controller inserts at least one pixel of a navigation illustration into the image data. The image data including the navigation illustration is transmitted to or stored in a memory.

63 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used a distance map to automatically calculate the biovolume of a planktonic organism from its two-dimensional boundary, and then adjusted the resulting volume by a multiplicative factor assuming locally circular cross-sections in the third dimension.
Abstract: We describe and evaluate an algorithm that uses a distance map to automatically calculate the biovolume of a planktonic organism from its two-dimensional boundary. Compared with existing approaches, this algorithm dramatically increases the speed and accuracy of biomass estimates from plankton images, and is thus especially suited for use with automated cell imaging technologies that produce large quantities of data. The algorithm operates on a two-dimensional image processed to identify organism boundaries. First, the distance of each interior pixel to the nearest boundary is calculated; next these same distances are assumed to apply for projection in the third dimension; and finally the resulting volume is adjusted by a multiplicative factor assuming locally circular cross-sections in the third dimension. Other cross-sectional shape factors can be applied as needed. In this way, the simple, computationally efficient, volume calculation can be refined to include taxon-specific shape information if available. We show that compared to traditional manual microscopic analysis, the distance map algorithm is unbiased and accurate (mean difference = −0.25%, standard deviation = 17%) for a range of cell morphologies, including those with concave boundaries that deviate from simple geometric shapes and whose volumes are not well represented by a solid of revolution around a single axis. Automated calculation of cell volumes can now be implemented with a combination of this new distance map algorithm for complex shapes and the solid of revolution approach for simple shapes, with an automated decision criterion to choose the appropriate approach for each image.
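A compact sketch of the volume estimate described above, assuming a boolean organism mask; the correct multiplicative factor for locally circular cross-sections (and any taxon-specific variant) is given in the paper and is left here as a parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def biovolume(mask, shape_factor):
    """Each interior pixel's distance to the nearest boundary is taken as the
    local half-thickness in the third dimension; summing these and applying a
    multiplicative shape factor (which absorbs the two-sided projection and
    the locally-circular cross-section correction) gives the volume estimate."""
    half_thickness = distance_transform_edt(mask)   # 0 outside, >0 inside
    return shape_factor * half_thickness.sum()
```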

62 citations


Journal ArticleDOI
TL;DR: A robust and accurate algorithm for interactive image segmentation that improves the performance of both the probabilistic classifier and the level set method over multiple passes and makes the final object segmentation less sensitive to user interactions.
Abstract: In this paper, we present a robust and accurate algorithm for interactive image segmentation. The level set method is clearly advantageous for image objects with a complex topology and fragmented appearance. Our method integrates discriminative classification models and distance transforms with the level set method to avoid local minima and better snap to true object boundaries. The level set function approximates a transformed version of pixelwise posterior probabilities of being part of a target object. The evolution of its zero level set is driven by three force terms, region force, edge field force, and curvature force. These forces are based on a probabilistic classifier and an unsigned distance transform of salient edges. We further propose a technique that improves the performance of both the probabilistic classifier and the level set method over multiple passes. It makes the final object segmentation less sensitive to user interactions. Experiments and comparisons demonstrate the effectiveness of our method.

58 citations


Journal ArticleDOI
TL;DR: A new approach to shape representation called a composite adaptively sampled distance field (composite ADF) is described, along with its application to NC milling simulation and an implementation of 3- and 5-axis milling simulation.
Abstract: We describe a new approach to shape representation called a composite adaptively sampled distance field (composite ADF) and describe its application to NC milling simulation. In a composite ADF each shape is represented by an analytic or procedural signed Euclidean distance field and the milled workpiece is given as the Boolean difference between distance fields representing the original workpiece volume and distance fields representing the volumes of the milling tool swept along the prescribed milling path. The computation of distance field of the swept volume of a milling tool is handled by an inverted trajectory approach where the problem is solved in tool coordinate frame instead of a world coordinate frame. An octree bounding volume hierarchy is used to sample the distance functions and provides spatial localization of geometric operations thereby dramatically increasing the speed of the system. The new method enables very fast simulation, especially of free-form surfaces, with accuracy better than 1 micron, and low memory requirements. We describe an implementation of 3 and 5-axis milling simulation.
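The Boolean difference at the heart of the composite ADF can be written in a few lines once per-shape signed distance fields are available (negative inside, positive outside). This is only a sketch of the set operation, assuming sampled distance arrays rather than the paper's procedural fields and octree sampling; note that the min/max combination is a valid implicit representation of the milled part but is not an exact distance away from the surface.

```python
import numpy as np

def milled_part(d_workpiece, d_tool_sweeps):
    """Composite signed field of the milled workpiece: the original volume
    minus the union of the swept-tool volumes.  With 'negative inside',
    union is a pointwise min and the difference A minus B is max(d_A, -d_B)."""
    d_removed = np.minimum.reduce(d_tool_sweeps)   # union of all tool sweeps
    return np.maximum(d_workpiece, -d_removed)
```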

56 citations


Patent
24 Apr 2012
TL;DR: In this article, a feature point in a plurality of objects present around a vehicle is detected by feature point detection unit (35), and a first distance from the feature point to the image capturing unit (2) is calculated from the change of the feature points over time.
Abstract: On the basis of an image captured by an image capturing unit (2), a feature point in a plurality of objects present around a vehicle is detected by a feature point detection unit (35), and a first distance from the feature point to the image capturing unit (2) is calculated from the change of the feature point over time. Further, using some pixels of a specific object included in the plurality of objects present in the image, a second distance from the specific object to the image capturing unit is calculated. On the basis of the ratio between the first distance and the second distance, the first distances of a plurality of feature points other than a specific feature point are corrected, the plurality of feature points being detected simultaneously by the feature point detection unit (35).

48 citations


Proceedings ArticleDOI
29 Oct 2012
TL;DR: This paper effectively utilizes distance transform (DT) features to bridge the gap between query sketches and natural images, and achieves retrieval performance competitive with the MindFinder approach while requiring much less memory storage.
Abstract: The advent of touch panels in mobile devices has provided a good platform for mobile sketch search. However, most previous sketch image retrieval systems adopt an inverted index structure over a large-scale image database, which is difficult to operate within the limited memory of mobile devices. In this paper, we propose a novel approach to address these challenges. First, we effectively utilize distance transform (DT) features to bridge the gap between query sketches and natural images. These high-dimensional DT features are then projected to more compact binary hash bits. The experimental results show that our method achieves retrieval performance very competitive with the MindFinder approach [3] while requiring much less memory storage (e.g., our method requires only 3% of the total memory storage of MindFinder on 2.1 million images). Due to its low memory consumption, the whole system can operate independently on mobile devices.
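A hedged sketch of the two stages named above: a distance-transform feature computed from an edge or sketch image, followed by compression into binary codes. The projection step here is generic random-hyperplane hashing used purely as an illustrative stand-in; the paper's actual projection scheme and feature layout are not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_hash(edge_map, projections):
    """Distance-transform feature + sign-thresholded linear projections.
    edge_map: boolean image, True on strokes/edges.
    projections: (n_bits, H*W) matrix, e.g. np.random.randn(64, H*W)."""
    dt = distance_transform_edt(~edge_map)            # distance to nearest stroke
    feat = dt.ravel() / (dt.max() + 1e-9)             # normalise to [0, 1]
    feat -= feat.mean()                               # centre before hashing
    return (projections @ feat > 0).astype(np.uint8)  # one bit per projection
```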

40 citations


Journal ArticleDOI
TL;DR: A 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method; it opens the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering.
Abstract: The recent booming of multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The combination of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of orientation and radius of the fibrils over the 3D image. They also provide a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It opens the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering, because biomimetic 3D organizations and density are required for better integration of implants.

34 citations


Journal ArticleDOI
TL;DR: An overview of the most important methods that decompose an arbitrary binary object into a union of rectangles is presented and it is shown that the choice is always a compromise between the complexity and time/memory consumption.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work exploits the linearity of the Schrödinger equation to design fast discrete convolution methods using the FFT to compute the distance transform, and to derive the histogram of oriented gradients (HOG) via the squared magnitude of the Fourier transform of the wave function.
Abstract: Despite the ubiquitous use of distance transforms in the shape analysis literature and the popularity of fast marching and fast sweeping methods (essentially Hamilton-Jacobi solvers), there is very little recent work leveraging the Hamilton-Jacobi to Schrödinger connection for representational and computational purposes. In this work, we exploit the linearity of the Schrödinger equation to (i) design fast discrete convolution methods using the FFT to compute the distance transform, (ii) derive the histogram of oriented gradients (HOG) via the squared magnitude of the Fourier transform of the wave function, (iii) extend the Schrödinger formalism to cover the case of curves parametrized as line segments as opposed to point-sets, (iv) demonstrate that the Schrödinger formalism permits the addition of wave functions, an operation that is not allowed for distance transforms, and finally (v) construct a fundamentally new Schrödinger equation and show that it can represent both the distance transform and its gradient density, which is not possible in earlier efforts.
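Point (i), an FFT-based approximate distance transform, can be illustrated with a log-sum-exp ("soft minimum") computed by a single convolution. This is a sketch of the idea only; the exact kernel, normalization, and small parameter used by the authors are not reproduced, and `hbar` here is simply the soft-min temperature.

```python
import numpy as np
from scipy.signal import fftconvolve

def softmin_dt(points_mask, hbar=1.0):
    """Approximate Euclidean distance transform via one FFT convolution:
    convolving the point-set indicator with the kernel exp(-|x|/hbar) gives
    sum_y exp(-|x - y|/hbar), and -hbar * log of that sum is a soft minimum
    that tends to the true nearest-point distance as hbar -> 0."""
    H, W = points_mask.shape
    yy, xx = np.mgrid[-H + 1:H, -W + 1:W]
    kernel = np.exp(-np.hypot(yy, xx) / hbar)
    phi = fftconvolve(points_mask.astype(float), kernel, mode="same")
    return -hbar * np.log(np.maximum(phi, 1e-300))   # clamp against underflow
```

As hbar shrinks the result approaches the exact distance to the point set, at the cost of numerical underflow for far-away pixels, hence the clamp before the logarithm.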

Patent
27 Mar 2012
TL;DR: In this article, a system and method for varying the apparent distance of a virtual image projected by a head-mounted display is presented, where the distance between the user and an object being viewed by the user is determined.
Abstract: A system and method is provided for varying the apparent distance of a virtual image projected by a head-mounted display. The distance between the user and an object being viewed by the user is determined. Then, the focal distance of an eyepiece optical system in the head-mounted display is adjusted to provide a virtual image at an apparent distance that corresponds to the determined distance. The user thereby experiences an approximately seamless overlay in sharp focus of the virtual image from the display and the three-dimensional binocular image being viewed. For example, when viewing a movie screen, or other object at infinity, the focal length is adjusted to provide a virtual image at or near optical infinity. Likewise, when reading a book, or inspecting another object in close proximity, the focal length is adjusted to provide the virtual image at a nearby apparent distance.

Journal ArticleDOI
TL;DR: A novel background subtraction method that can work in complex environments with dynamic backgrounds and illumination variations, especially sudden illumination changes, and has no bootstrapping limitations.

Journal ArticleDOI
TL;DR: A linear-time algorithm for computing an approximate Hausdorff distance with low approximation error is proposed; it reduces processing time while minimizing the error rate in content-based image processing and analysis.
Abstract: The Hausdorff distance is a very important metric for various image applications in computer vision, including image matching, moving-object detection, tracking and recognition, shape retrieval, and content-based image analysis. However, no efficient algorithm has been reported that computes the exact Hausdorff distance between two images in linear time, and the few approximate methods that have been proposed suffer from high approximation error. In this paper, we propose a linear-time algorithm for computing an approximate Hausdorff distance with lower approximation error. The proposed method reduces processing time while minimizing the error rate in content-based image processing and analysis.
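The distance-transform formulation underlying Hausdorff computation can be sketched directly. Note that this uses an exact Euclidean distance transform from SciPy rather than the paper's own linear-time approximation, so it illustrates the definition, not the proposed algorithm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hausdorff(mask_a, mask_b):
    """Hausdorff distance between two boolean images A and B: the larger of
    the two directed distances, each obtained by evaluating the distance
    transform of one set at the pixels of the other and taking the maximum."""
    d_to_b = distance_transform_edt(~mask_b)   # distance to the nearest pixel of B
    d_to_a = distance_transform_edt(~mask_a)
    return max(d_to_b[mask_a].max(), d_to_a[mask_b].max())
```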

Journal ArticleDOI
TL;DR: The results of the study show that NCD can be used to address some of the selected image comparison problems, but care must be taken in choosing the compressor and image format.
Abstract: Similarity metrics are widely used in computer graphics. In this paper, we concentrate on a new, algorithmic-complexity-based metric called the Normalized Compression Distance (NCD). It is a universal distance used to compare strings, and it has also been used in computer graphics for image registration and viewpoint selection. However, there is no previous study on how the measure should be used: which compressor and image format are the most suitable. This paper presents a practical study of the Normalized Compression Distance applied to color images. The questions we try to answer are: Is NCD a suitable metric for image comparison? How robust is it to rotation, translation, and scaling? Which are the most adequate image formats and compression algorithms? The results of our study show that NCD can be used to address some of the selected image comparison problems, but care must be taken in choosing the compressor and image format.
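The metric itself is simple to compute; here is a minimal sketch using zlib as the compressor (the study's point is precisely that the choice of compressor and image encoding matters, so zlib is just one possible choice).

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the length of the compressed input."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# For images, x and y would be the encoded image files; different encodings
# and compressors give different NCD values, which is what the study examines.
```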

Journal ArticleDOI
TL;DR: In this article, a hybrid chromatic distance inspired by the human visual system is used to shift from the chromatic to the greyscale distance depending on the pixel's luminance value.
Abstract: This study presents an image segmentation algorithm working on the spherical coordinates of the RGB colour space. The algorithm uses a hybrid chromatic distance inspired by the human visual system, which shifts from the chromatic to the greyscale distance depending on the pixel's luminance value. In dark areas of the image the chromatic distance is too sensitive to image noise, so the greyscale distance is used instead. Colour constancy properties of this segmentation approach follow from the dichromatic reflection model. The approach is strongly robust to highlights and dark spots and does not need illuminant source colour normalisation. The authors give results on public benchmark image databases and robot camera images. A public implementation is made available for independent testing of the algorithm's image segmentation results.
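A heavily hedged sketch of the kind of hybrid distance described above, for RGB values scaled to [0, 1]: an angular (chromatic) distance between colour vectors for sufficiently bright pixels, falling back to a greyscale intensity distance in dark areas. The hard luminance switch, the threshold, and the function name are assumptions for illustration; the paper's actual weighting in spherical RGB coordinates is not reproduced.

```python
import numpy as np

def hybrid_distance(rgb1, rgb2, lum_threshold=0.2):
    """Chromatic (angular) distance for bright pixels, greyscale (intensity)
    distance when either pixel is too dark for its chromaticity to be
    reliable.  rgb1 and rgb2 are length-3 arrays with values in [0, 1]."""
    l1, l2 = np.linalg.norm(rgb1), np.linalg.norm(rgb2)      # radial component
    grey = abs(l1 - l2) / np.sqrt(3.0)                       # intensity distance
    if min(l1, l2) < lum_threshold * np.sqrt(3.0):
        return grey                                          # dark: fall back
    cosang = np.dot(rgb1, rgb2) / (l1 * l2)
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))      # chromatic distance
```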

Book ChapterDOI
Xiaowu Chen, Qing Li, Yafei Song, Xin Jin, Qinping Zhao
07 Oct 2012
TL;DR: Experiments show that the novel semantic label transfer method using supervised geodesic propagation outperforms traditional learning-based methods and the previous label transfer method on the semantic segmentation task.
Abstract: In this paper we propose a novel semantic label transfer method using supervised geodesic propagation (SGP). We use supervised learning to guide the seed selection and the label propagation. Given an input image, we first retrieve its similar image set from annotated databases. A Joint Boost model is learned on the similar image set of the input image. Then the recognition proposal map of the input image is inferred by this learned model. The initial distance map is defined by the proposal map: the higher probability, the smaller distance. In each iteration step of the geodesic propagation, the seed is selected as the one with the smallest distance from the undetermined superpixels. We learn a classifier as an indicator to indicate whether to propagate labels between two neighboring superpixels. The training samples of the indicator are annotated neighboring pairs from the similar image set. The geodesic distances of its neighbors are updated according to the combination of the texture and boundary features and the indication value. Experiments on three datasets show that our method outperforms the traditional learning based methods and the previous label transfer method for the semantic segmentation work.

Patent
20 Sep 2012
TL;DR: In this paper, a wavelet transformation is used to produce a transformed reference image and a transformed source image, and then the transformed reference images and the transformed source images are used to estimate affine transform parameters.
Abstract: An image registration method includes: providing a reference image and a source image; using a wavelet transformation to produce a transformed reference image and a transformed source image; using the transformed reference image and the transformed source image to estimate affine transform parameters; using the reference image, the source image, and the affine transform estimates to maximize normalized mutual information between the reference image and the source image; and using the normalized mutual information to perform sub-pixel geo-spatial registration of the reference image and the source image to produce an output image. An apparatus that performs the method is also provided.

Patent
06 Dec 2012
TL;DR: In this paper, a series of quick distance calculations can be performed between an unknown input image and a reference image, including facial detection, normalization, discrete cosine transform calculations, and threshold comparisons to determine whether an image is recognized.
Abstract: Image comparison techniques allow a quick method of recognizing and identifying faces or other objects appearing in images. A series of quick distance calculations can be performed between an unknown input image and a reference image. These calculations may include facial detection, normalization, discrete cosine transform calculations, and threshold comparisons to determine whether an image is recognized. In the case of identification uncertainty, slower but more precise motion aligned distance calculations are initiated. Motion aligned distance calculations involve generating a set of downscaled images, determining motion field and motion field-based distances between an unknown input image and reference image, best scale factors for aligning an unknown input image with reference images, and calculating affine transformation matrices to modify and align an unknown input image with reference images.

Patent
Meir Tzur
09 Jan 2012
TL;DR: In this paper, the scene is divided into regions, and the depth map represents region depths corresponding to a particular focus step, and entries having a specific focus step value are placed into a histogram, and depths having the most entries are selected as the principal depths.
Abstract: A system, method, and computer program product for capturing images for later refocusing. Embodiments estimate a distance map for a scene, determine a number of principal depths, capture a set of images, with each image focused at one of the principal depths, and process captured images to produce an output image. The scene is divided into regions, and the depth map represents region depths corresponding to a particular focus step. Entries having a specific focus step value are placed into a histogram, and depths having the most entries are selected as the principal depths. Embodiments may also identify scene areas having important objects and include different important object depths in the principal depths. Captured images may be selected according to user input, aligned, and then combined using blending functions that favor only scene regions that are focused in particular captured images.

01 Jan 2012
TL;DR: In this paper, two new approaches to shape feature generation based on plant silhouettes were proposed for improving plant seedling recognition, one of them approximating the distance distribution with a high-degree Legendre polynomial.
Abstract: The aim of this research is an improvement of plant seedling recognition by two new approaches to shape feature generation based on plant silhouettes. Experiments show that the proposed feature sets possess value in plant recognition when compared with other feature sets. Both methods approximate a distance distribution of an object, either by resampling or by approximating the distribution with a high-degree Legendre polynomial; in the latter case, the polynomial coefficients constitute the feature set. The methods have been tested through a discrimination process in which two similar plant species are to be distinguished into their respective classes. Performance assessment is based on the classification accuracy of four different classifiers (k-Nearest Neighbor, Naive Bayes, linear Support Vector Machine, and nonlinear Support Vector Machine). Another set of 21 well-known shape features described in the literature is used for comparison. The data consisted of 139 samples of cornflower (Centaurea cyanus L.) and 63 samples of nightshade (Solanum nigrum L.). The highest discrimination accuracy was achieved with the Legendre polynomial feature set and amounted to 97.5%; this feature set consisted of 10 numerical values. The comparison set of 21 common features achieved an accuracy of 92.5%. The results suggest that the Legendre polynomial feature set can compete with or outperform the commonly used feature sets.
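A short sketch of the Legendre-polynomial feature set, assuming the input is the object's distance distribution sampled as a one-dimensional sequence (the exact distance definition and ordering used by the authors are not reproduced); a degree-9 fit yields the 10 coefficients mentioned above.

```python
import numpy as np

def legendre_features(distances, degree=9):
    """Shape features from a silhouette's distance distribution: the sorted
    distance values are treated as a 1D signal on [-1, 1] and approximated by
    a Legendre polynomial; the fitted coefficients form the feature vector."""
    y = np.sort(np.asarray(distances, dtype=float))
    x = np.linspace(-1.0, 1.0, len(y))
    return np.polynomial.legendre.legfit(x, y, degree)   # degree+1 coefficients
```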

Patent
06 Jun 2012
TL;DR: A hardness tester includes an image pickup control section, an indentation region extraction section, an indentation apex extraction section, and a hardness calculation section, as discussed by the authors; the extraction section determines whether an indentation region candidate has been extracted and, when it has not, falls back to processing a curvature image.
Abstract: A hardness tester includes an image pickup control section, an indentation region extraction section, an indentation apex extraction section and a hardness calculation section. The image pickup control section obtains picked-up image data of a sample's surface. The region extraction section binarizes the image data, determines based on the binarized image data whether an indentation region candidate is extracted, and when determining that the candidate is not extracted, obtains curvature image data, binarizes the curvature image data, erodes and dilates the binarized curvature image data, performs distance transform on the eroded-and-dilated curvature image data, and extracts a closed region corresponding to the indenter's shape using the distance-transformed curvature image data. The apex extraction section extracts an indentation-measurement-use apex based on the closed region. The hardness calculation section calculates the sample's hardness based on the apex.
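The fallback image-processing chain described above (binarize, erode and dilate, distance transform, extract a closed region) maps naturally onto OpenCV calls. This is a rough analogue only: the curvature-image computation, the indenter-shape matching, and the apex extraction are not reproduced, and the thresholds and function name are placeholders.

```python
import cv2
import numpy as np

def indentation_region(gray, kernel_size=5, dist_thresh=0.5):
    """Binarize an 8-bit image, clean it with erosion and dilation, apply a
    distance transform, and keep the region around the strongest distance
    peak as the indentation candidate."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel), kernel)
    dist = cv2.distanceTransform(cleaned, cv2.DIST_L2, 5)
    _, region = cv2.threshold(dist, dist_thresh * dist.max(), 255, cv2.THRESH_BINARY)
    return region.astype(np.uint8)
```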

Proceedings ArticleDOI
02 May 2012
TL;DR: This work compares different approaches to implementing an efficient representation of a polygonal mesh by a signed continuous distance field and chooses the most efficient one.
Abstract: An efficient representation of a polygonal mesh by a signed continuous distance field is the main focus of this work. We compare different approaches to implementing such a representation and choose the most efficient one. Several optimizations of the existing methods are presented, including a new traversal technique for BVH-based signed distance evaluation and packet queries. We also discuss details of the GPU implementation of efficient signed distance evaluation. Several application examples are presented, such as blending set operations, linear metamorphosis, space-time blending, and microstructure generation for polygonal meshes.

Patent
31 Aug 2012
TL;DR: A charge/discharge assist device is described whose electronic control unit generates departure/arrival time prediction maps and a departure-time-zone-specific predicted travel distance map for predicting the longest travel distance in each departure time zone, using travel distance information and predicted departure time information.
Abstract: An electronic control unit 11 of a charge/discharge assist device 10 has a data input section 51, a departure/arrival time prediction map generation section 52, a travel distance prediction map generation section 53, a rule curve creation section 54, and a data output section 55. The input section 51 inputs departure time information, arrival time information, and travel distance information. The prediction map generation section 52 creates a map for providing a predicted departure time in future and a map for providing a predicted arrival time in future. The prediction map generation section 53 creates a departure-time-zone-specific predicted travel distance map for predicting the longest travel distance in each departure time zone, by using the travel distance information and predicted departure time information. On the basis of the predicted travel distance map, the creation section 54 determines an electric energy required in each time zone and creates a rule curve. The output section 55 creates a charge plan or a discharge plan by using the rule curve and outputs the plan.

Journal ArticleDOI
TL;DR: A simple but effective image transform, called the epipolar distance transform, that converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in low-texture regions become distinguishable.
Abstract: In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, so the transformed images can be used directly for stereo matching. Any existing stereo algorithm can be applied to the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, and for keypoint detection and description in low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of the transform for highly textured scenes. The proposed transform has the additional advantage of low computational complexity: on a MacBook Air laptop with a 1.8 GHz Core i7 processor, it runs at about 9 frames per second on VGA-sized images.
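An illustrative per-scanline version of the transform for rectified images, where epipolar lines coincide with pixel rows: each pixel inside a run of near-constant intensity is replaced by its relative position between the run's endpoints, which is preserved under affine distortion along the line. The run detection via simple gradient thresholding, the threshold value, and the function name are assumptions, not the paper's segment extraction.

```python
import numpy as np

def epipolar_distance_transform(gray, edge_thresh=10):
    """Replace each pixel by its relative position within its near-constant
    intensity run along the scanline (an approximation of the 'planar
    segment' along the epipolar line for rectified images)."""
    out = np.zeros_like(gray, dtype=float)
    for r in range(gray.shape[0]):
        row = gray[r].astype(float)
        breaks = np.flatnonzero(np.abs(np.diff(row)) > edge_thresh)
        bounds = np.concatenate(([0], breaks + 1, [len(row)]))
        for a, b in zip(bounds[:-1], bounds[1:]):
            if b - a > 1:
                out[r, a:b] = np.linspace(0.0, 1.0, b - a)  # relative position
    return out
```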

Book ChapterDOI
12 Nov 2012
TL;DR: An algorithm that computes the Local Patch Dissimilarity between two images is presented, and experiments show that the extension of rank distance to images has very good results in image classification, more precisely in handwritten digit recognition.
Abstract: This paper introduces a new distance measure for images, called Local Patch Dissimilarity. This new distance measure is inspired by rank distance, which is a distance measure for strings, and it is based on patches. There are many other patch-based techniques used in image processing; patches contain contextual information and have advantages in terms of generalization. An algorithm that computes the Local Patch Dissimilarity between two images is presented in this work. Experiments show that this extension of rank distance to images gives very good results in image classification, more precisely in handwritten digit recognition.

Journal ArticleDOI
TL;DR: A patchwise scaling method is presented that resizes an image to emphasize the important areas while preserving the global visual effect (smoothness, coherence, and integrity), based on optimizing the image distance presented in this paper.

Journal ArticleDOI
TL;DR: The results confirm that the proposed algorithm, which combines morphological edge detectors with watershed segmentation using the distance transform, yields satisfactory and efficient segmentation of digital images for edge detection.
Abstract: An edge detection algorithm for digital images is proposed in this paper. Edge detection is one of the most important and most difficult tasks in image processing and analysis. Edges in digital images are areas with strong intensity contrasts, and a jump in intensity from one pixel to the next can create major variation in picture quality. This paper proposes an effective edge detection algorithm based on morphological edge detectors and a watershed segmentation algorithm using the distance transform. The results confirm that the proposed algorithm yields satisfactory and efficient segmentation of digital images for edge detection. The experimental results presented in this paper were obtained using MATLAB.
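A common way to realize distance-transform-based watershed segmentation in Python; the paper's experiments were run in MATLAB, so this scikit-image sketch is a comparable pipeline rather than the authors' code. Peaks of the distance transform inside the foreground act as markers, and the watershed lines delineate object boundaries.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def dt_watershed(binary):
    """Marker-based watershed on the negated distance transform of a boolean
    foreground mask: distance-map peaks become markers, and the watershed of
    -distance grows regions whose boundaries trace the object edges."""
    dist = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(dist, labels=binary.astype(int),
                           footprint=np.ones((3, 3)))
    markers = np.zeros_like(dist, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=binary)
```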

Patent
26 Jul 2012
TL;DR: In this article, a projector includes an image taking part that takes an image of an area that includes a target onto which an image is projected, a distance measuring part that calculates, from taken image data obtained by the image-taking part, distance data concerning a distance between the target and the image, and a plane estimation part that estimates, from the distance data, a plane corresponding to the target; a focusing adjustment part that adjusts focusing of the image to be projected, based on information concerning the plane.
Abstract: A projector includes an image taking part that takes an image of an area that includes a target onto which an image is projected; a distance measuring part that calculates, from taken image data obtained by the image taking part, distance data concerning a distance between the target and the image taking part; a plane estimation part that estimates, from the distance data, a plane corresponding to the target; and a focusing adjustment part that adjusts focusing of the image to be projected, based on information concerning the plane.