
Showing papers on "Image plane published in 2010"


Journal Article
TL;DR: An improved version of CS-based high-resolution imaging is presented that overcomes strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation.
Abstract: The theory of compressed sampling (CS) indicates that exact recovery of an unknown sparse signal can be achieved from very limited samples. For inverse synthetic aperture radar (ISAR), the image of a target is usually constructed from strong scattering centers whose number is much smaller than the number of pixels in the image plane. This sparsity of the ISAR signal intrinsically paves the way to apply CS to the reconstruction of high-resolution ISAR imagery. CS-based high-resolution ISAR imaging with limited pulses is developed, and it performs well in the case of high signal-to-noise ratios. However, strong noise and clutter are usually inevitable in radar imaging, which challenges current high-resolution imaging approaches based on parametric modeling, including the CS-based approach. In this paper, we present an improved version of CS-based high-resolution imaging that overcomes strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation. Real data are used to test the robustness of the improved CS imaging compared with other current techniques. Experimental results show that the approach is capable of precise estimation of scattering centers and effective suppression of noise.
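
To make the sparsity argument concrete, here is a minimal sparse-recovery sketch in the spirit of CS imaging: a handful of scattering centers reconstructed from far fewer random-frequency samples than grid cells, using iterative soft thresholding (ISTA). It is a generic toy with invented sizes and noise levels, not the authors' weighted coherent-projector method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse scene: a few strong scattering centers on a 256-cell grid.
n, k, m = 256, 5, 64
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 3.0, size=k)

# Measurement: m randomly chosen rows of a unitary DFT (limited pulses),
# plus additive complex noise. A generic CS setup, not the radar model.
rows = rng.choice(n, m, replace=False)
A = (np.fft.fft(np.eye(n)) / np.sqrt(n))[rows, :]
y = A @ x_true + 0.02 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# ISTA: iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
lam, x = 0.02, np.zeros(n, dtype=complex)
for _ in range(500):
    z = x - A.conj().T @ (A @ x - y)   # gradient step (step size 1 is safe
    mag = np.abs(z)                     # since A is a row subset of a unitary)
    x = z * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)  # shrinkage

print("true support:     ", np.sort(np.flatnonzero(x_true)))
print("recovered support:", np.sort(np.flatnonzero(np.abs(x) > 0.1)))
```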

268 citations


Proceedings Article
13 Jun 2010
TL;DR: Two contributions are made: a local blur cue that measures the likelihood of a small neighborhood being blurred by a candidate blur kernel; and an algorithm that, given an image, simultaneously selects a motion blur kernel and segments the region that it affects.
Abstract: Blur is caused by a pixel receiving light from multiple scene points, and in many cases, such as object motion, the induced blur varies spatially across the image plane. However, the seemingly straightforward task of estimating spatially-varying blur from a single image has proved hard to accomplish reliably. This work considers such blur and makes two contributions: a local blur cue that measures the likelihood of a small neighborhood being blurred by a candidate blur kernel; and an algorithm that, given an image, simultaneously selects a motion blur kernel and segments the region that it affects. The methods are shown to perform well on a diversity of images.
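
A hypothetical stand-in for such a local blur cue (not the paper's actual formulation): score a neighborhood under each candidate kernel by assuming a generic 1/f^2 power spectrum for the latent sharp patch and evaluating a Gaussian log-likelihood of the observed spectrum. The prior, noise variance, and demo values below are all assumptions.

```python
import numpy as np

def blur_cue(patch, kernel, noise_var=1e-3):
    """Log-likelihood-style score of one candidate blur kernel for a patch.

    Assumes the latent sharp patch has a generic 1/f^2 power spectrum; the
    expected blurred spectrum is that prior times the kernel's power
    spectrum, scored against the observed spectrum (constants dropped).
    """
    h, w = patch.shape
    K = np.fft.fft2(kernel, s=(h, w))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    prior = 1.0 / (fx**2 + fy**2 + 1e-4)
    expected = np.abs(K)**2 * prior + noise_var
    observed = np.abs(np.fft.fft2(patch - patch.mean()))**2
    return float(np.sum(-observed / expected - np.log(expected)))

# Synthetic patch with a 1/f-type spectrum, blurred by a 7-pixel box kernel.
rng = np.random.default_rng(0)
fy = np.fft.fftfreq(32)[:, None]
fx = np.fft.fftfreq(32)[None, :]
spec = np.exp(2j * np.pi * rng.random((32, 32))) / np.sqrt(fx**2 + fy**2 + 1e-4)
sharp = np.real(np.fft.ifft2(spec))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(np.ones((1, 7)) / 7, s=(32, 32))))

# The score should typically peak near the true kernel length (7).
for length in (1, 3, 7, 11):
    print(length, round(blur_cue(blurred, np.ones((1, length)) / length), 1))
```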

203 citations


Journal Article
TL;DR: Simulations that were obtained with a 6-degree-of-freedom (DOF) free-flying camera highlight the potential advantages of the proposed approach with respect to the image prediction and the constraint handling.
Abstract: This paper deals with image-based visual servoing (IBVS) subject to constraints. Robot workspace limitations, visibility constraints, and actuator limitations are addressed. These constraints are formulated as state, output, and input constraints, respectively. Based on the predictive-control strategy, the IBVS task is written as a nonlinear optimization problem in the image plane, where the constraints can be easily and explicitly taken into account. The contribution of the image prediction and the influence of the prediction horizon are then pointed out. The image prediction is obtained from a model, which can be a local model based on the interaction matrix or a nonlinear global model based on 3-D data. Its choice is discussed with respect to the constraints to be handled. Finally, simulations obtained with a 6-degree-of-freedom (DOF) free-flying camera highlight the potential advantages of the proposed approach with respect to the image prediction and the constraint handling.
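
The local prediction model based on the interaction matrix can be sketched for a single point feature: the classic 2x6 interaction matrix maps a camera twist to feature motion, and an Euler rollout gives the image prediction over the horizon. Depths, velocities, and step sizes below are invented for illustration; this is not the paper's controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic 2x6 interaction matrix for a normalized image point (x, y)
    at depth Z: s_dot = L @ v, with camera twist v = (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1/Z, 0.0, x/Z, x*y, -(1 + x**2), y],
        [0.0, -1/Z, y/Z, 1 + y**2, -x*y, -x],
    ])

def predict_features(s, Z, v, dt, horizon):
    """Local-model image prediction: integrate s_dot = L(s, Z) v over the
    prediction horizon with simple Euler steps."""
    traj = [s.copy()]
    for _ in range(horizon):
        s = s + dt * interaction_matrix(s[0], s[1], Z) @ v
        traj.append(s.copy())
    return np.array(traj)

# Example: one point feature drifting under a constant candidate camera twist.
s0 = np.array([0.1, -0.2])                       # normalized image coordinates
v = np.array([0.02, 0.0, 0.05, 0.0, 0.0, 0.1])   # candidate camera velocity
print(predict_features(s0, Z=1.5, v=v, dt=0.04, horizon=10)[-1])
```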

180 citations


Patent
30 Nov 2010
TL;DR: In this paper, a super-resolved demosaicing technique for rendering focused plenoptic camera data is presented that performs simultaneous super-resolution and demosaicing, rendering a high-resolution output image at a specified depth of focus.
Abstract: A super-resolved demosaicing technique for rendering focused plenoptic camera data performs simultaneous super-resolution and demosaicing. The technique renders a high-resolution output image from a plurality of separate microimages in an input image at a specified depth of focus. For each point on an image plane of the output image, the technique determines a line of projection through the microimages in optical phase space according to the current point and angle of projection determined from the depth of focus. For each microimage, the technique applies a kernel centered at a position on the current microimage intersected by the line of projection to accumulate, from pixels at each microimage covered by the kernel at the respective position, values for each color channel weighted according to the kernel. A value for a pixel at the current point in the output image is computed from the accumulated values for the color channels.

130 citations


Patent
Shibazaki Yuichi
07 Oct 2010
TL;DR: In this paper, a transition from a first state to a second state, made while liquid remains supplied in the space between the projection optical system and the stage directly under it, is described.
Abstract: When a transition is made from a first state, in which one stage is positioned at a first area directly below the projection optical system to which liquid is supplied, to a second state, in which the other stage is positioned at the first area, both stages are simultaneously driven while they remain close together in the X-axis direction. Therefore, it becomes possible to make the transition from the first state to the second state while liquid is supplied in the space between the projection optical system and the stage directly under it. Accordingly, the time from the completion of the exposure operation on one stage side until the exposure operation begins on the other stage side can be reduced, which allows processing with high throughput. Further, because liquid can constantly exist on the image plane side of the projection optical system, the generation of water marks on optical members on the image plane side of the projection optical system is prevented.

111 citations


Journal Article
TL;DR: An experimental configuration for phase retrieval from a set of intensity measurements is presented, using a spatial light modulator located in the Fourier domain of an imaging system to perform a linear filter operation associated with the process of propagation in the image plane.
Abstract: We present an experimental configuration for phase retrieval from a set of intensity measurements. The key component is a spatial light modulator located in the Fourier domain of an imaging system. It performs a linear filter operation that is associated with the process of propagation in the image plane. In contrast to the state of the art, no mechanical adjustment is required during the recording process, thus reducing the measurement time considerably. The method is experimentally demonstrated by investigating a wave field scattered by a diffuser, and the results are verified by comparing them to those obtained from standard interferometry.
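
Phase retrieval from intensity measurements taken under known Fourier-domain filters can be sketched with a Gerchberg-Saxton-style projection loop, cycling through the measurements and enforcing each recorded amplitude. The quadratic-phase filters and all sizes below are assumptions for a toy demonstration, not the paper's SLM patterns or reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
u_true = np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))  # unknown phase object

# Known phase-only filters applied in the Fourier domain; quadratic phases
# here mimic propagation to a few planes (an assumption for this sketch).
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
filters = [np.exp(1j * a * (fx**2 + fy**2) * n) for a in (0.0, 40.0, 80.0, 120.0)]

def apply_filter(u, H):
    return np.fft.ifft2(H * np.fft.fft2(u))

meas = [np.abs(apply_filter(u_true, H)) for H in filters]  # recorded amplitudes

# Cycle over the measurements, replacing each modulus while keeping the phase.
u = np.ones((n, n), dtype=complex)
for _ in range(200):
    for H, a in zip(filters, meas):
        g = apply_filter(u, H)
        g = a * np.exp(1j * np.angle(g))   # enforce the measured amplitude
        u = apply_filter(g, np.conj(H))    # phase-only filter: inverse = conjugate

residual = np.mean([np.abs(np.abs(apply_filter(u, H)) - a).mean()
                    for H, a in zip(filters, meas)])
print("mean amplitude residual:", residual)
```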

95 citations


Journal Article
TL;DR: A completely numerical method, named digital self-referencing holography, is described that easily accomplishes quantitative phase microscopy of microfluidic devices with a digital holographic microscope.
Abstract: A completely numerical method, named digital self-referencing holography, is described that easily accomplishes quantitative phase microscopy of microfluidic devices with a digital holographic microscope. The approach works through an appropriate numerical manipulation of the retrieved complex wavefront. The self-referencing is obtained by folding the retrieved wavefront in the image plane. The folding operation allows us to obtain the correct phase map by subtracting, from the complex region of interest, a flat area outside the microfluidic channel. To demonstrate the effectiveness of the method, quantitative phase maps of bovine spermatozoa and in vitro cells are retrieved.

80 citations


Patent
02 Dec 2010
TL;DR: In this paper, a variable phase controlling system is used to adjust the relative phase of the scattered component and the specular component so as to change the way they interfere at the image plane.
Abstract: Systems and methods for using common-path interferometric imaging for defect detection and classification are described. An illumination source generates and directs coherent light toward the sample. An optical imaging system collects light reflected or transmitted from the sample, including a scattered component and a specular component that is predominantly undiffracted by the sample. A variable phase controlling system is used to adjust the relative phase of the scattered component and the specular component so as to change the way they interfere at the image plane. The resultant signal is compared to a reference signal for the same location on the sample, and a difference above threshold is considered to be a defect. The process is repeated multiple times, each with a different relative phase shift, and for each defect location the difference signals are stored in memory. This data is then used to calculate an amplitude and phase for each defect, which can be used for defect detection and classification. This method is expected to detect much smaller defects than current inspection systems and to find defects that are transparent to these systems.
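
Recovering a complex defect amplitude from intensities recorded at known relative phase shifts reduces, under simplifying assumptions, to a small linear least-squares problem: I_k = |s + r e^{i theta_k}|^2 is linear in (|s|^2 + r^2, Re s, Im s). The sketch below assumes a known, constant specular amplitude and illustrates only the principle, not the patent's processing chain.

```python
import numpy as np

def defect_amplitude(intensities, phase_shifts, specular_amp=1.0):
    """Recover the complex scattered (defect) amplitude s from intensities
    I_k = |s + r*exp(i*theta_k)|^2 at known relative phase shifts theta_k.
    Linear least squares in (|s|^2 + r^2, Re s, Im s); minimal sketch only.
    """
    th = np.asarray(phase_shifts, dtype=float)
    A = np.column_stack([np.ones_like(th),
                         2 * specular_amp * np.cos(th),
                         2 * specular_amp * np.sin(th)])
    c, re_s, im_s = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)[0]
    return complex(re_s, im_s)

# Synthetic check: a weak defect s against a unit specular background.
s_true = 0.03 * np.exp(1j * 1.2)
thetas = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
I = [abs(s_true + np.exp(1j * t))**2 for t in thetas]
print(defect_amplitude(I, thetas))  # approximately s_true
```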

68 citations


Proceedings Article
19 Feb 2010
TL;DR: This work proposes a rendering technique for textured light sources in single-scattering media that draws from the concept of epipolar geometry to place samples in image space, and shows that this method is very simple to implement on the GPU, yields high-quality images, and achieves high frame rates.
Abstract: Scattering in participating media, such as fog or haze, generates volumetric lighting effects known as crepuscular or god rays. Rendering such effects greatly enhances the realism in virtual scenes, but is inherently costly, as scattering events occur at every point in space and thus it requires costly integration of the light scattered towards the observer. This is typically done using ray marching, which is too expensive for every pixel on the screen for interactive applications. We propose a rendering technique for textured light sources in single-scattering media that draws from the concept of epipolar geometry to place samples in image space: the inscattered light varies orthogonally to crepuscular rays, but mostly smoothly along these rays. These are epipolar lines of a plane of light rays that projects onto one line on the image plane. Our method samples sparsely along epipolar lines and interpolates between samples where adequate, but preserves high frequency details that are caused by shadowing of light rays. We show that our method is very simple to implement on the GPU, yields high quality images, and achieves high frame rates.

64 citations


Journal Article
TL;DR: A novel approach to color image segmentation (CIS) in scanned archival topographic maps of the 19th century is presented, with robust results derived from an accuracy assessment.
Abstract: A novel approach to color image segmentation (CIS) in scanned archival topographic maps of the 19th century is presented. Archival maps provide unique information for GIS-based change detection and are the only spatially contiguous data sources prior to the establishment of remote sensing. Processing such documents is challenging due to their very low graphical quality caused by ageing, manual production and scanning. Typical artifacts are high degrees of mixed and false coloring, as well as blurring in the images. Existing approaches for segmentation in cartographic documents are normally presented using well-conditioned maps. The CIS approach presented here uses information from the local image plane, the frequency domain and color space. As a first step, iterative clustering is based on local homogeneity, frequency of homogeneity-tested pixels and similarity. By defining a peak-finding rule, "hidden" color layer prototypes can be identified without prior knowledge. Based on these prototypes a constrained seeded region growing (SRG) process is carried out to find connected regions of color layers using color similarity and spatial connectivity. The method was tested on map pages with different graphical properties with robust results as derived from an accuracy assessment.
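
The constrained seeded region growing step can be sketched as a breadth-first flood fill that admits a 4-connected neighbor when its color lies within a tolerance of the region's running mean. This is a minimal generic SRG with an invented tolerance and a synthetic two-layer demo; the paper's frequency-domain homogeneity testing and prototype detection are not reproduced.

```python
import numpy as np
from collections import deque

def seeded_region_growing(img, seeds, tol=20.0):
    """Grow labeled regions from color-layer seed pixels, adding 4-connected
    neighbors whose RGB distance to the region's running mean is below tol."""
    h, w, _ = img.shape
    labels = np.zeros((h, w), dtype=int)
    means, counts, q = {}, {}, deque()
    for lab, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = lab
        means[lab] = img[r, c].astype(float)
        counts[lab] = 1
        q.append((r, c))
    while q:
        r, c = q.popleft()
        lab = labels[r, c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                if np.linalg.norm(img[rr, cc] - means[lab]) < tol:
                    labels[rr, cc] = lab
                    counts[lab] += 1
                    means[lab] += (img[rr, cc] - means[lab]) / counts[lab]
                    q.append((rr, cc))
    return labels

# Tiny demo: two flat color layers with noise, one seed in each.
rng = np.random.default_rng(0)
img = np.zeros((40, 40, 3)) + 30.0
img[:, 20:] = 200.0
img += rng.normal(0, 3, img.shape)
labels = seeded_region_growing(img, seeds=[(5, 5), (5, 35)], tol=25.0)
print(np.unique(labels[:, :20]), np.unique(labels[:, 20:]))
```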

55 citations


Book Chapter
29 Nov 2010
TL;DR: This paper presents a novel method to quickly, accurately, and simultaneously estimate three orthogonal vanishing points (TOVPs) and focal length from single images; it decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, which makes the method much faster and more robust than previous methods without losing accuracy.
Abstract: For images taken in man-made scenes, vanishing points and the focal length of the camera play important roles in scene understanding. In this paper, we present a novel method to quickly, accurately, and simultaneously estimate three orthogonal vanishing points (TOVPs) and focal length from single images. Our method is based on the following important observation: if we establish a polar coordinate system on the image plane whose origin is at the image center, the angle coordinates of vanishing points can be robustly estimated by seeking peaks in a histogram. From the detected angle coordinates, the altitudes of a triangle formed by the TOVPs are determined. Novel constraints on both vanishing points and focal length can be obtained from the three altitudes. By using these constraints, the radial coordinates of the TOVPs and the focal length can be estimated simultaneously. Our method decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, which makes it much faster and more robust than previous methods without losing accuracy. Extensive experiments on real images have been done to test the feasibility and correctness of our method.
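
The flavor of the first 1D stage, estimating a vanishing point's angle coordinate about the image center by histogram voting, can be sketched as follows. This toy votes with pairwise intersections of line segments rather than the paper's exact parameterization, and all coordinates are invented; the second 1D stage over the radial coordinate is only indicated.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the lines through (p1, p2) and (p3, p4); None if parallel."""
    d1, d2 = p2 - p1, p4 - p3
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-9:
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / den
    return p1 + t * d1

def vp_angle_vote(segments, center):
    """Cascaded-1D-Hough sketch: intersect segment pairs, then vote in a 1D
    angle histogram about the image center; a second 1D histogram over the
    radial coordinate (per angle peak) would finish localizing the VP."""
    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = line_intersection(*segments[i], *segments[j])
            if p is not None:
                pts.append(p - center)
    pts = np.array(pts)
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    hist, edges = np.histogram(angles, bins=180, range=(-np.pi, np.pi))
    return edges[np.argmax(hist)]  # angle coordinate of the strongest peak

# Tiny demo: three segments lying on lines through a common point (500, 100).
vp = np.array([500.0, 100.0])
segs = []
for ang in (0.3, 1.0, 2.2):
    d = np.array([np.cos(ang), np.sin(ang)])
    segs.append((vp - 300 * d, vp - 100 * d))
print(vp_angle_vote(segs, center=np.array([320.0, 240.0])))
```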

Journal Article
TL;DR: The distortion centre of an equidistant fish-eye camera can be estimated by extracting the vanishing points using an equation that describes the projection of a straight line, and it is demonstrated how the shape of a projected straight line can be accurately described by arcs of circles on the distorted image plane.

Journal Article
TL;DR: This paper studies the shape from specular flow problem and shows that observable specular flow is directly related to surface shape through a nonlinear partial differential equation that has the key property of depending only on the relative motion of the environment while being independent of its content.
Abstract: An image of a specular (mirror-like) object is nothing but a distorted reflection of its environment. When the environment is unknown, reconstructing shape from such an image can be very difficult. This reconstruction task can be made tractable when, instead of a single image, one observes relative motion between the specular object and its environment, and therefore a motion field, or specular flow, in the image plane. In this paper, we study the shape from specular flow problem and show that observable specular flow is directly related to surface shape through a nonlinear partial differential equation. This equation has the key property of depending only on the relative motion of the environment while being independent of its content. We take first steps toward understanding and exploiting this PDE, and we examine its qualitative properties in relation to shape geometry. We analyze several cases in which the surface shape can be recovered in closed form, and we show that, under certain conditions, specular shape can be reconstructed when both the relative motion and the content of the environment are unknown. We discuss numerical issues related to the proposed reconstruction algorithms, and we validate our findings using both real and synthetic data.

Journal Article
TL;DR: A set of mathematical correction methods is applied to the acquired data stacks to correct for movement in both directions of the image plane, and is used to correct experimental data taken from in-vivo optical projection tomography experiments in Caenorhabditis elegans.
Abstract: The application of optical projection tomography to in-vivo experiments is limited by specimen movement during the acquisition. We present a set of mathematical correction methods applied to the acquired data stacks to correct for movement in both directions of the image plane. These methods have been applied to correct experimental data taken from in-vivo optical projection tomography experiments in Caenorhabditis elegans. Successful reconstructions for both fluorescence and white light (absorption) measurements are shown. Since no difference between movement of the animal and movement of the rotation axis is made, this approach at the same time removes artifacts due to mechanical drifts and errors in the assumed center of rotation.

Journal Article
TL;DR: This paper presents a single-camera approach that can simultaneously measure both 3D translation and rotation of a planar target attached to a structure, and shows that the proposed monocular videogrammetric technique is a simple and effective alternative method to measure 3D translation and rotation for civil engineering structures.
Abstract: Measuring displacement for large-scale structures has always been an important yet challenging task. In most applications, it is not feasible to provide a stationary platform at the location where displacements need to be measured. Recently, image-based techniques for three-dimensional (3D) displacement measurement have been developed and proven to be applicable to civil engineering structures. Most of these developments, however, use two or more cameras and require sophisticated calibration using a total station. In this paper, we present a single-camera approach that can simultaneously measure both 3D translation and rotation of a planar target attached to a structure. The intrinsic parameters of the camera are first obtained using a planar calibration board arbitrarily positioned around the target location. The obtained intrinsic parameters establish the relationship between the 3D camera coordinates and the two-dimensional image coordinates. These parameters can then be used to extract the rotation and translation of the planar target from a recorded image sequence. The proposed technique is illustrated using two laboratory tests and one field test. Results show that the proposed monocular videogrammetric technique is a simple and effective alternative method to measure 3D translation and rotation for civil engineering structures. It should be noted that the proposed technique cannot measure translation along the direction perpendicular to the image plane. Hence, proper caution should be taken when placing the target and camera.
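
Extracting the rotation and translation of a calibrated planar target from tracked image points is a standard planar pose (PnP) computation; assuming OpenCV is available, a minimal sketch looks like this. The intrinsics, target size, and corner pixels are invented, and the paper's own solver may differ in detail.

```python
import numpy as np
import cv2

# Planar-target corner geometry in the target frame (a 0.2 m square), metres.
obj = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
               dtype=np.float32)

# Intrinsics from a prior planar-calibration-board run (hypothetical values).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Corner pixel locations tracked in one video frame (hypothetical values).
img_pts = np.array([[610, 340], [830, 345], [825, 560], [605, 555]],
                   dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(obj, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation, target frame -> camera frame
print("translation (m):", tvec.ravel())
print("rotation vector (rad):", rvec.ravel())
```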

Journal Article
TL;DR: Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence.
Abstract: Focus information (blur and accommodation) is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions.

Patent
06 Jul 2010
TL;DR: A stabilized mount stably supports a motion-sensitive, image-capture device, such as a cellular telephone or a personal digital assistant, on a support surface such as a tripod, with a holder for holding the device during image capture and a fixed base integral with the holder and lying in a base plane perpendicular to the image plane.
Abstract: A stabilized mount (10) stably supports a motion-sensitive, image capture device (12) , such as a cellular (mobile) telephone or a personal digital assistant, on a support surface (14) such as a tripod or analogous camera equipment. The device (12) is operative for capturing an image over a field of view along an optical axis perpendicular to an image plane. The mount (10) includes a holder (16) for holding the device (12) during image capture, and a fixed base (18) integral with the holder (16) and lying in a base plane perpendicular to the image plane when the base (18) is supported by the support surface (14) in a supported orientation. The base (18) is operative for steadily positioning the holder (16) and the device (12) on the support surface (14) in the supported orientation during the image capture.

Patent
27 Jul 2010
TL;DR: An image display device for optically displaying an image is disclosed, including: a light source; an imaging-light generator converting light emitted from the light source into imaging light representative of the image to be displayed; a relay optical system focusing the imaging light from the imaging-light generator on an image plane located at an optically conjugate position to the imaging-light generator; and a variable-focus lens, having a varying focal length, disposed at a position generally coincident with the pupil.
Abstract: An image display device for optically displaying an image is disclosed. This device includes: a light source; an imaging-light generator converting light emitted from the light source, into imaging light representative of the image to be displayed, to thereby generate the imaging light; a relay optical system focusing the imaging light emitted from the imaging-light generator, on an image plane which is located at an optically conjugate position to the imaging-light generator, the relay optical system defining a pupil through which the imaging light passes, within the relay optical system; a variable-focus lens disposed at a position generally coincident with the pupil, the variable-focus lens having a varying focal length; and a wavefront-curvature adjuster configured to vary the focal length by operating the variable-focus lens, to thereby adjust a wavefront curvature of the imaging light emitted from the relay optical system.

Journal Article
TL;DR: A fully automated micrograsping methodology that uses a micro-robot and a microgripper to automatically grasp a micropart in three-dimensional (3-D) space is presented.
Abstract: We present a fully automated micrograsping methodology that uses a micro-robot and a microgripper to automatically grasp a micropart in three-dimensional (3-D) space. To accurately grasp a micropart in 3-D space, we propose a three-stage micrograsping strategy: (i) coarse alignment of a micropart with a microgripper in the image plane of a video camera system; (ii) alignment of the micropart with the microgripper in the direction normal to the image plane; (iii) fine alignment of the micropart with the microgripper in the image plane, until the micropart is completely grasped. Two different vision-based feedback controllers are employed to perform the coarse and fine alignment in the image plane. The vision-based feedback controller used for the fine alignment employs position feedback signals obtained from two special patterns, which can achieve submicron alignment accuracy. Fully automated micrograsping experiments are conducted on a microassembly robot. The experimental results show that the average alignment accuracy achieved during automated grasping is approximately ± 0.07 μm; the time to complete an automated micrograsping task is as short as 7.9 seconds; and the success rate is as high as 94%.
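
The image-plane fine-alignment stage can be pictured as a proportional visual servo loop that converts the pixel error between the two tracked patterns into stage commands. Everything below (gain, pixel scale, coordinates, and the convergence test against the reported 0.07 um figure) is a hypothetical illustration rather than the authors' controller.

```python
import numpy as np

def fine_alignment_step(p_part, p_gripper, gain=0.5, px_per_um=2.0):
    """One iteration of a hypothetical image-plane visual servo loop:
    convert the pattern-to-pattern pixel error into a micro-stage motion
    command in micrometres, scaled by a proportional gain."""
    err_px = np.asarray(p_gripper, float) - np.asarray(p_part, float)
    return gain * err_px / px_per_um   # stage command (um)

# Iterate until the residual is below the reported ~0.07 um accuracy.
part, grip = np.array([120.3, 88.9]), np.array([125.0, 84.0])  # pixels
for _ in range(30):
    move = fine_alignment_step(part, grip)
    part = part + move * 2.0           # stage motion re-imaged to pixels
    if np.linalg.norm(grip - part) / 2.0 < 0.07:
        break
print("residual (um):", np.linalg.norm(grip - part) / 2.0)
```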

Patent
24 May 2010
TL;DR: In this article, a touchless input device for a computer replaces a computer mouse and does not require physical contact between the user-operator and any part of the input device, such as a finger.
Abstract: A touchless input device for a computer replaces a computer mouse and does not require physical contact between the user-operator and any part of the input device. The touchless input device uses multiple, linear near infrared, optical sensors and multiple near infrared light emitters working in a plane in space, all held inside a frame with an opening that defines the detection region. The device images the plane and processes the images to determine the presence, location and velocity of objects in the plane. The operator introduces an object, such as a finger, into the plane and moves the object in the plane to emulate the motion of a computer mouse across a desktop. Mouse buttons and other functions are emulated by unique motions in the plane. The device communicates these motions and events to the computer typically using a Universal Serial Bus.

Journal Article
TL;DR: This work describes structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition.
Abstract: Optical design and tolerancing of aspheric or free-form surfaces require attention to surface form, structured surface errors, and nonstructured errors. We describe structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition. Surface errors over the beam footprint map onto the pupil, where multiple structured surface frequencies mix to create sum and difference diffraction orders in the image plane at each field point. Difference frequencies widen the central lobe of the point-spread function and summation frequencies create ghost images.
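
A toy numerical illustration of this harmonic picture: place two sinusoidal surface-error ripples on a circular pupil and inspect the resulting point-spread function, where mixing of the two harmonics produces ghost orders at their sum and difference frequencies. The wavelength, ripple amplitudes, and frequencies are invented.

```python
import numpy as np

n, wavelength = 512, 0.5e-6
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)

# Two harmonic surface-error ripples across the beam footprint (25 nm each,
# 11 and 17 cycles per unit pupil radius); reflection doubles the error.
amp, f1, f2 = 25e-9, 11, 17
surf = amp * (np.cos(2 * np.pi * f1 * X) + np.cos(2 * np.pi * f2 * X))
field = pupil * np.exp(1j * 4 * np.pi * surf / wavelength)

psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.max()

# Mixing of the two harmonics puts ghost orders at the sum and difference
# frequencies; with this sampling a ripple of f cycles/unit lands about
# 2*f bins from the central lobe along x.
row, c = psf[n // 2], n // 2
for name, f in [("f1", f1), ("f2", f2), ("f2-f1", f2 - f1), ("f2+f1", f2 + f1)]:
    print(f"{name:5s} ghost at {2*f:3d} bins: {row[c + 2*f]:.2e}")
```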

Book Chapter
05 Sep 2010
TL;DR: A video understanding system for scene elements that are characterized more by qualitative activities and geometry than by intrinsic appearance is developed, and reasoning about scene geometry, occlusions and common sense domain knowledge using a set of meta-rules is explained.
Abstract: We develop a video understanding system for scene elements, such as bus stops, crosswalks, and intersections, that are characterized more by qualitative activities and geometry than by intrinsic appearance. The domain models for scene elements are not learned from a corpus of video, but instead, naturally elicited by humans, and represented as probabilistic logic rules within a Markov Logic Network framework. Human elicited models, however, represent object interactions as they occur in the 3D world rather than describing their appearance projection in some specific 2D image plane. We bridge this gap by recovering qualitative scene geometry to analyze object interactions in the 3D world and then reasoning about scene geometry, occlusions and common sense domain knowledge using a set of meta-rules. The effectiveness of this approach is demonstrated on a set of videos of public spaces.

Patent
Susumu Fujioka
27 May 2010
TL;DR: In this paper, an information inputting device is provided, which includes a plurality of photographing units photographing an area on a plane, and an object located on the plane is then extracted from an image that includes the plane and the object, and it is determined whether the object is a specific object.
Abstract: An information-inputting device is provided, which includes a plurality of photographing units photographing an area on a plane. An object located on the plane is then extracted from an image that includes the plane and the object, and it is determined whether the object is a specific object. If the object is the specific object, a position of a contact point between the specific object and the plane is calculated.

Patent
12 May 2010
TL;DR: In this paper, a position and orientation estimation apparatus detects correspondence between a real image obtained by an imaging apparatus by imaging a target object to be observed and a rendered image, and then calculates a relative positioning and orientation of the imaging apparatus and the target object in the rendered image based on the correspondence.
Abstract: A position and orientation estimation apparatus detects correspondence between a real image obtained by an imaging apparatus by imaging a target object to be observed and a rendered image. The rendered image is generated by projecting a three dimensional model onto an image plane based on three dimensional model data expressing the shape and surface information of the target object, and position and orientation information of the imaging apparatus. The position and orientation estimation apparatus then calculates a relative position and orientation of the imaging apparatus and the target object to be observed based on the correspondence. Then, the surface information of the three dimensional model data is updated by associating image information of the target object to be observed in the real image with the surface information of the three dimensional model data, based on the calculated positions and orientations.

Patent
04 Feb 2010
TL;DR: In this article, a mapping method is provided, where the environment is scanned to obtain depth information of environmental obstacles and the depth information is projected onto the image plane, so as to obtain projection positions.
Abstract: A mapping method is provided. The environment is scanned to obtain depth information of environmental obstacles. The image of the environment is captured to generate an image plane. The depth information of environmental obstacles is projected onto the image plane, so as to obtain projection positions. At least one feature vector is calculated from a predetermined range around each projection position. The environmental obstacle depth information and the environmental feature vector are merged to generate a sub-map at a certain time point. Sub-maps at all time points are combined to generate a map. In addition, a localization method using the map is also provided.
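
The projection step, mapping scanned obstacle depths onto the captured image plane so feature vectors can be sampled around each projection position, is the standard pinhole projection sketched below. The intrinsic matrix and 3-D points are hypothetical, and the scanner-to-camera calibration is omitted.

```python
import numpy as np

def project_to_image(points_cam, K):
    """Pinhole projection of 3-D obstacle points (camera frame, metres)
    onto the image plane; returns pixel positions and their depths."""
    pts = np.asarray(points_cam, dtype=float)
    z = pts[:, 2]
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, z

# Hypothetical intrinsics and two laser-scanned obstacle points.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
uv, depth = project_to_image([[0.4, 0.1, 2.0], [-0.3, 0.0, 1.5]], K)
print(uv, depth)  # feature vectors would be sampled around each uv position
```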

Patent
03 Dec 2010
TL;DR: An optical unit configured as a three-group wide-angle lens that suppresses optical distortion to a small amount, has favorable optical characteristics, and is tolerant of reflow, together with an image pickup apparatus, is presented.
Abstract: An optical unit that is configured as a three-group wide-angle lens, that is capable of suppressing optical distortion to a small amount, that has favorable optical characteristics, and that is tolerant of reflow, and an image pickup apparatus are provided. An optical unit 100 includes: a first lens group 110 including a first lens element 111; a second lens group 120 including a second lens element 121, a first transparent member 122, and a third lens element 123 that are arranged in order from the object plane toward the image plane; and a third lens group 130 including a fourth lens element 131, a second transparent member 132, and a fifth lens element 133 that are arranged in order from the object plane toward the image plane, the first lens group 110, the second lens group 120, and the third lens group 130 being arranged in order from the object plane toward the image plane.

Patent
10 Aug 2010
TL;DR: In this paper, a multi-spectral camera consisting of a blocking element (201) having at least one hole (203) allowing light to pass through is presented, and a dispersive element (205) spreads light from the blocking element in different wavelength dependent directions.
Abstract: A multi-spectral camera comprises a blocking element (201) having at least one hole (203) allowing light to pass through. A dispersive element (205) spreads light from the at least one hole (203) in different wavelength dependent directions and a lens (207) focuses light from the dispersive element (205) on an image plane (209). A microlens array (211) receives light from the lens (207) and an image sensor (213) receives the light from the microlens array (211) and generates a pixel value signal which comprises incident light values for the pixels of the image sensor (213). A processor then generates a multi-spectral image from the pixel value signal. The approach may allow a single instantaneous sensor measurement to provide a multi-spectral image comprising at least one spatial dimension and one spectral dimension. The multi-spectral image may be generated by post-processing of the sensor output and no physical filtering or moving parts are necessary.

Proceedings Article
03 Dec 2010
TL;DR: Practical experimental results of a wheeled mobile robot show that the proposed imaging system successfully estimates distances to objects and avoids obstacles in an indoor environment.
Abstract: This paper presents a vision-based obstacle avoidance design using a monocular camera onboard a mobile robot. An image processing procedure is developed to estimate distances between the robot and obstacles based on inverse perspective transformation (IPT) in the image plane. A robust image processing solution is proposed to detect and segment the navigable ground plane area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking, to cope with variations in illumination and camera view. After IPT and ground region segmentation, a result similar to an occupancy grid map is obtained for mobile robot obstacle avoidance and navigation. Practical experimental results of a wheeled mobile robot show that the proposed imaging system successfully estimates distances to objects and avoids obstacles in an indoor environment.
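
For a camera at known height above a flat floor, the IPT distance estimate amounts to intersecting each pixel's back-projected ray with the ground plane. The sketch below assumes a simple pinhole model with a known downward tilt; the numbers are invented and the paper's full formulation is not reproduced.

```python
import numpy as np

def ground_distance(v, K, cam_height, tilt_rad):
    """Inverse perspective transformation sketch for a ground-plane pixel:
    back-project image row v through a camera at height cam_height tilted
    down by tilt_rad, and intersect the ray with the flat floor."""
    fy, cy = K[1, 1], K[1, 2]
    # Depression angle of the pixel's ray below horizontal.
    ray = np.arctan2(v - cy, fy) + tilt_rad
    if ray <= 0:
        return np.inf                  # ray points at or above the horizon
    return cam_height / np.tan(ray)    # distance along the ground (m)

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
for v in (260, 320, 420):              # pixel rows below the horizon
    print(v, "->", round(ground_distance(v, K, cam_height=0.3, tilt_rad=0.2), 2), "m")
```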

Book Chapter
01 Jan 2010
TL;DR: In this paper, the visual servoing task is formulated as a nonlinear optimization problem in the image plane, and a visual predictive control (VPC) approach is proposed.
Abstract: In this chapter, image-based visual servoing is addressed via nonlinear model predictive control. The visual servoing task is formulated as a nonlinear optimization problem in the image plane. The proposed approach, named visual predictive control, can easily and explicitly take into account 2D and 3D constraints. Furthermore, the image prediction over a finite prediction horizon plays a crucial role for large displacements. This image prediction is obtained thanks to a model. The choice of this model is discussed. A nonlinear global model and a local model based on the interaction matrix are considered. Advantages and drawbacks of both models are pointed out. Finally, simulations obtained with a 6 degrees of freedom (DOF) free-flying camera highlight the capabilities and the efficiency of the proposed approach by a comparison with classical image-based visual servoing.

22 Aug 2010
TL;DR: The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain, with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security.
Abstract: Over the last two decades, owing to the hostility of the internet environment, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial domain data hiding techniques, the information is embedded directly on the image plane itself. In transform domain data hiding techniques, the image is first changed from the spatial domain to some other domain, and then the secret information is embedded so that it remains more secure from any attack. Information hiding algorithms in the time or spatial domain have high capacity and relatively lower robustness. In contrast, algorithms in transform domains, such as DCT and DWT, have certain robustness against some multimedia processing. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a gray scale image. The proposed approach works by converting the gray level image to the transform domain using the discrete integer wavelet technique through a lifting scheme. This approach performs a 2-D lifting wavelet decomposition with the lifted Haar wavelet of the cover image and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV, and CD. The next step is to apply the PMM technique to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation. Keywords: Cover Image, Pixel Mapping Method (PMM), Stego Image, Integer Wavelet Transform.
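
The 2-D decomposition described above is built from 1-D lifting steps; a minimal integer-to-integer Haar lifting pair (the S-transform variant, one common rounding convention assumed here) is sketched below. The PMM embedding itself is not shown.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the lifting-scheme Haar transform on a 1-D signal of
    even length: split into even/odd samples, predict, update. Integer-to-
    integer version, as needed for integer wavelet embedding."""
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - even            # predict step: detail coefficients
    a = even + (d >> 1)       # update step: approximation coefficients
    return a, d

def haar_lift_inverse(a, d):
    even = a - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(a), dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([12, 14, 200, 202, 50, 54, 90, 91])
a, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(a, d), x)   # perfect reconstruction
print("CA:", a, "CD:", d)   # secret bits would be mapped into coefficients
```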