
Showing papers on "Image plane published in 2011"


Proceedings ArticleDOI
09 May 2011
TL;DR: This work proposes a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane that captures the location, the orientation, and the gripper opening width, and shows that a robot using this algorithm successfully picks up a variety of novel objects.
Abstract: Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration—its 3D location, 3D orientation and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects—ones not seen by the robot before. While these approaches use low-dimensional representations such as a ‘grasping point’ or a ‘pair of points’ that are perhaps easier to learn, they only partly represent the gripper configuration and hence are sub-optimal. We propose to learn a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane. It takes into account the location, the orientation as well as the gripper opening width. However, inference with such a representation is computationally expensive. In this work, we present a two-step process in which the first step prunes the search space efficiently using certain features that are fast to compute. For the remaining few cases, the second step uses advanced features to accurately select a good grasp. In our extensive experiments, we show that our robot successfully uses our algorithm to pick up a variety of novel objects.
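The two-step inference described above can be sketched generically: score every candidate rectangle with features that are cheap to compute, keep only the top-scoring few, and rescore those survivors with expensive features. A minimal Python sketch; the feature extractors and the linear weights w_fast and w_adv are hypothetical placeholders, not the paper's learned model.

```python
import numpy as np

def two_step_grasp_search(candidates, fast_features, advanced_features,
                          w_fast, w_adv, keep=100):
    """Two-step pruned search over oriented grasping rectangles.

    candidates        : list of rectangles (x, y, theta, width, height)
    fast_features     : cheap feature extractor, rect -> np.ndarray
    advanced_features : expensive feature extractor, rect -> np.ndarray
    w_fast, w_adv     : linear scoring weights (hypothetical stand-ins)
    """
    # Step 1: prune the search space using features that are fast to compute.
    fast_scores = np.array([w_fast @ fast_features(r) for r in candidates])
    survivors = np.argsort(fast_scores)[-keep:]

    # Step 2: rescore only the few surviving rectangles with advanced features.
    return max((candidates[i] for i in survivors),
               key=lambda r: w_adv @ advanced_features(r))
```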

487 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed ISAR imaging framework is capable of precise reconstruction of ISAR images and effective suppression of both phase error and noise.
Abstract: From the theory of compressive sensing (CS), we know that the exact recovery of an unknown sparse signal can be achieved from limited measurements by solving a sparsity-constrained optimization problem. For inverse synthetic aperture radar (ISAR) imaging, the backscattering field of a target is usually composed of contributions from a very limited number of strong scattering centers, far fewer than the number of pixels in the image plane. In this paper, a novel framework for ISAR imaging is proposed based on sparse stepped-frequency waveforms (SSFWs). Within the framework, measurements from only some portions of the frequency subbands are used to reconstruct full-resolution images by exploiting sparsity. This waveform strategy greatly reduces the amount of data and acquisition time and improves the antijamming capability. A new algorithm, named the sparsity-driven High-Resolution Range Profile (HRRP) synthesizer, is presented in this paper to overcome the phase error due to motion that usually degrades HRRP synthesis. The sparsity-driven HRRP synthesizer is robust to noise. The main novelty of the proposed ISAR imaging framework is twofold: 1) the motion compensation is divided into three steps, allowing for very accurate estimation, and 2) both sparsity and signal-to-noise ratio are enhanced dramatically by coherent integration in cross-range before performing HRRP synthesis. Both simulated and real measured data are used to test the robustness of the ISAR imaging framework with SSFWs. Experimental results show that the framework is capable of precise reconstruction of ISAR images and effective suppression of both phase error and noise.
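The sparsity-constrained optimization at the core of such CS reconstructions is commonly posed as l1-regularized least squares and solved by iterative shrinkage-thresholding. The sketch below is a generic ISTA solver under that assumption, not the authors' sparsity-driven HRRP synthesizer; the measurement matrix A (e.g., partial Fourier rows for the observed frequency subbands) is supplied by the caller.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 (complex-valued).

    A : (m, n) measurement matrix, m << n for sub-band measurements
    y : (m,) measurement vector
    """
    x = np.zeros(A.shape[1], dtype=complex)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = A.conj().T @ (A @ x - y)         # gradient of the data-fit term
        z = x - g / L
        # complex soft-thresholding promotes a sparse scattering-center profile
        x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```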

163 citations


Patent
10 May 2011
TL;DR: In this article, an image capturing lens assembly is described comprising, in order from an object side to an image side, a first lens group having only one first lens element with a positive refractive power and a second lens group; focusing is performed by moving the first lens group along the optical axis.
Abstract: This invention provides an image capturing lens assembly comprising, in order from an object side to an image side: a first lens group having only one first lens element with a positive refractive power, and a second lens group comprising, in order from the object side to the image side: a second lens element with a negative refractive power, a third lens element, a fourth lens element and a fifth lens element. While a distance between an imaged object and the image capturing lens assembly changes from far to near, focusing is performed by moving the first lens group along the optical axis, and a distance between the first lens group and an image plane changes from near to far. By such an arrangement and focusing adjustment method, good image quality is achieved and less power is consumed.

160 citations


Journal ArticleDOI
TL;DR: An algorithm designed to achieve high contrast on both sides of the image plane while minimizing the stroke necessary from each deformable mirror (DM) is reviewed.
Abstract: The past decade has seen a significant growth in research targeted at space based observatories for imaging exo-solar planets. The challenge is in designing an imaging system for high-contrast. Even with a perfect coronagraph that modifies the point spread function to achieve high-contrast, wavefront sensing and control is needed to correct the errors in the optics and generate a "dark hole". The high-contrast imaging laboratory at Princeton University is equipped with two Boston Micromachines Kilo-DMs. We review here an algorithm designed to achieve high-contrast on both sides of the image plane while minimizing the stroke necessary from each deformable mirror (DM). This algorithm uses the first DM to correct for amplitude aberrations and the second DM to create a flat wavefront in the pupil plane. We then show the first results obtained at Princeton with this correction algorithm, and we demonstrate a symmetric dark hole in monochromatic light.

144 citations


Journal ArticleDOI
TL;DR: A new algorithm for calculating computer generated holograms (CGH) using a ray-sampling (RS) plane is introduced that enables the reproduction of high-resolution images of deep 3D scenes with angular reflection properties such as gloss appearance.
Abstract: We introduce a new algorithm for calculating computer generated holograms (CGH) using a ray-sampling (RS) plane. The RS plane is set near the object, and the light rays emitted by the object are sampled at this plane. The sampled rays are then transformed into a wavefront using Fourier transforms. The wavefront on the CGH plane is calculated by simulating wavefront propagation from the RS plane to the CGH plane. The proposed method enables the reproduction of high-resolution images of deep 3D scenes with angular reflection properties such as gloss appearance.
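Both the ray-to-wavefront conversion and the RS-to-CGH propagation reduce to Fourier transforms. Below is a minimal angular-spectrum propagation sketch, a standard method assumed here to stand in for the paper's wavefront propagation simulation; uniform sampling with pitch dx is assumed.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex wavefront u0 over distance z.

    u0 : 2-D complex field sampled with pitch dx [m]; wavelength, z in meters.
    """
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)            # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # free-space transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```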

134 citations


Patent
Hans-Juergen Mann1
08 Aug 2011
TL;DR: In this paper, the authors describe a plurality of mirrors, where at least one of the mirrors has a through-hole for imaging light to pass through, and describe the components made by such systems.
Abstract: The disclosure generally relates to imaging optical systems that include a plurality of mirrors, which image an object field lying in an object plane in an image field lying in an image plane, where at least one of the mirrors has a through-hole for imaging light to pass through. The disclosure also generally relates to projection exposure installations that include such imaging optical systems, methods of using such projection exposure installations, and components made by such methods.

105 citations


Proceedings ArticleDOI
05 Aug 2011
TL;DR: This work presents a flexible and simple optimization strategy based on the idea of increasing the mutual distances by successively moving each point to the "farthest point," i.e., the location that has the maximum distance from the rest of the point set.
Abstract: Efficient sampling often relies on irregular point sets that uniformly cover the sample space. We present a flexible and simple optimization strategy for such point sets. It is based on the idea of increasing the mutual distances by successively moving each point to the "farthest point," i.e., the location that has the maximum distance from the rest of the point set. We present two iterative algorithms based on this strategy. The first is our main algorithm which distributes points in the plane. Our experimental results show that the resulting distributions have almost optimal blue noise properties and are highly suitable for image plane sampling. The second is a variant of the main algorithm that partitions any point set into equally sized subsets, each with large mutual distances; the resulting partitionings yield improved results in more general integration problems such as those occurring in physically based rendering.
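The core iteration is easy to sketch: remove each point in turn and re-insert it at the location with maximum distance to the remaining points. The sketch below brute-forces that location over a regular grid on the unit torus; the paper computes the exact farthest point via a Delaunay triangulation, so this grid search is a simplification.

```python
import numpy as np

def farthest_point_optimize(pts, n_sweeps=10, grid=64):
    """Increase mutual distances by moving each point to the 'farthest point'.

    pts : (N, 2) points in [0, 1)^2; toroidal distances keep the set tileable.
    """
    pts = np.asarray(pts, dtype=float).copy()
    gx = (np.arange(grid) + 0.5) / grid
    cand = np.stack(np.meshgrid(gx, gx), -1).reshape(-1, 2)  # candidate sites
    for _ in range(n_sweeps):
        for i in range(len(pts)):
            rest = np.delete(pts, i, axis=0)
            d = np.abs(cand[:, None, :] - rest[None, :, :])
            d = np.minimum(d, 1.0 - d)                # wrap-around distances
            nearest = np.sqrt((d ** 2).sum(-1)).min(axis=1)
            pts[i] = cand[np.argmax(nearest)]         # max-min-distance site
    return pts
```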

104 citations


Proceedings ArticleDOI
05 Dec 2011
TL;DR: A robust-weighted extrinsic calibration algorithm is proposed that is easy to implement, has small calibration error, and achieves calibration accuracy over 50% better than an existing state-of-the-art approach.
Abstract: Lidar and visual imagery have been broadly utilized in computer vision and mobile robotics applications because these sensors provide complementary information. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust-weighted extrinsic calibration algorithm that is easy to implement and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes to calibration accuracy in two ways. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration data sets. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm, such as comparing the RMS distance of the ground truth and the projected points, the effect of the number of lidar scans and images, and the effect of the pose and range of the calibration target. In the experiments, we show that our extrinsic calibration algorithm achieves calibration accuracy over 50% better than an existing state-of-the-art approach. To evaluate the generality of our algorithm, we also colorize point clouds with different pairs of lidars and cameras calibrated by our algorithm.
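The objective being minimized is essentially a weighted, robustified sum of distances between projected lidar features and image line features. A minimal sketch under that reading; the projection function, the weighting scheme, and the Huber-style penalty shown here are stand-ins rather than the paper's exact formulation.

```python
import numpy as np

def calib_cost(params, lidar_pts, img_lines, weights, project, delta=2.0):
    """Robust-weighted cost for lidar-camera extrinsic calibration.

    params    : 6-vector (rotation + translation) being optimized
    lidar_pts : (N, 3) feature points on the v-shaped calibration target
    img_lines : (N, 3) homogeneous image lines (a, b, c) with a^2 + b^2 = 1
    weights   : (N,) per-correspondence confidence weights
    project   : function (params, pts) -> (N, 2) pixel coordinates
    """
    uv = project(params, lidar_pts)                  # project onto image plane
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])
    r = np.abs((uv1 * img_lines).sum(axis=1))        # point-to-line distances
    # Huber-style penalization limits the influence of outliers
    rho = np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))
    return (weights * rho).sum()
```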

103 citations


Patent
19 Sep 2011
TL;DR: In this paper, methods, storage mediums, and systems for image data processing are described, including configurations to perform one or more of the following steps: background signal measurement, particle identification using classification dye emission and cluster rejection, inter-image alignment, interimage particle correlation, fluorescence integration of reporter emission, and image plane normalization.
Abstract: Methods, storage mediums, and systems for image data processing are provided. Embodiments for the methods, storage mediums, and systems include configurations to perform one or more of the following steps: background signal measurement, particle identification using classification dye emission and cluster rejection, inter-image alignment, inter-image particle correlation, fluorescence integration of reporter emission, and image plane normalization.

94 citations


Journal ArticleDOI
TL;DR: Results clearly demonstrate that the general formulae provide a robust framework for quantifying the effect of various stereo-vision parameters and image-plane matching procedures on both the bias and variance in an estimated 3D object position.
Abstract: Using the basic equations for stereo-vision with established procedures for camera calibration, the error propagation equations for determining both bias and variability in a general 3D position are provided. The results build on recent theoretical developments that quantified the bias and variance in image-plane positions introduced during image-plane correspondence identification for a common 3D point (e.g., pattern matching during the measurement process), applying them to the estimation of 3D position bias and variability. Extensive numerical simulations and theoretical analyses have been performed for selected stereo system configurations amenable to closed-form solution. Results clearly demonstrate that the general formulae provide a robust framework for quantifying the effect of various stereo-vision parameters and image-plane matching procedures on both the bias and variance in an estimated 3D object position.
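Such closed-form predictions can be cross-checked numerically: perturb the matched image-plane coordinates with the assumed matching-error statistics and triangulate repeatedly. A minimal Monte Carlo sketch; the triangulate function (the calibrated stereo reconstruction) is assumed to be available.

```python
import numpy as np

def mc_3d_position_stats(triangulate, uv_left, uv_right, sigma, n=10000, seed=0):
    """Monte Carlo estimate of 3-D bias and variability from image-plane noise.

    triangulate : function (uv_left, uv_right) -> (3,) point via the stereo rig
    sigma       : std. dev. of the image-plane matching error [pixels]
    """
    rng = np.random.default_rng(seed)
    pts = np.array([triangulate(uv_left + rng.normal(0.0, sigma, 2),
                                uv_right + rng.normal(0.0, sigma, 2))
                    for _ in range(n)])
    nominal = triangulate(np.asarray(uv_left), np.asarray(uv_right))
    return pts.mean(axis=0) - nominal, pts.std(axis=0)   # bias, variability
```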

92 citations


Book ChapterDOI
26 Sep 2011
TL;DR: A novel method for the generation of Epipolar-Image (EPI) representations of 4D Light Fields (LF) from raw data captured by a single lens "Focused Plenoptic Camera" is presented.
Abstract: In this paper, we present a novel method for the generation of Epipolar-Image (EPI) representations of 4D Light Fields (LF) from raw data captured by a single lens "Focused Plenoptic Camera". Compared to other LF representations which are usually used in the context of computational photography with Plenoptic Cameras, the EPI representation is more suitable for image analysis tasks - providing direct access to scene geometry and reflectance properties. The generation of EPIs requires a set of "all in focus" (full depth of field) images from different views of a scene. Hence, the main contribution of this paper is a novel algorithm for the rendering of such images from a single raw image captured with a Focused Plenoptic Camera. The main advantage of the proposed approach over existing full depth of field methods is that it is able to cope with non-Lambertian reflectance in the scene.

Proceedings ArticleDOI
TL;DR: In this paper, a deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera where the intensity of the light is measured.
Abstract: In this paper we describe the complex electric field reconstruction from image plane intensity measurements for high contrast coronagraphic imaging. A deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera where the intensity of the light is measured. Along with the Electric Field Conjugation correction algorithm, this estimation method has been used in various high contrast imaging testbeds to achieve the best contrasts to date in both narrowband and broadband light. We present the basic methodology of estimation as an easy-to-follow list of steps, present results from HCIT, and raise several open questions we are confronted with when using this method.
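With pairwise probes, the intensity difference between the +probe and -probe images is linear in the unknown field, since |E+p|^2 - |E-p|^2 = 4 Re(E conj(p)); each pixel then reduces to a small least-squares solve. A minimal sketch assuming the probe fields at the science camera are known from a model (at least two probe pairs are needed); it illustrates the estimation principle rather than any testbed's exact pipeline.

```python
import numpy as np

def estimate_field(delta_I, probes):
    """Per-pixel least-squares estimate of the focal-plane electric field.

    delta_I : (K, Npix) intensity differences I_plus - I_minus, K >= 2 probes
    probes  : (K, Npix) complex model of each probe field at the science camera
    """
    E = np.empty(delta_I.shape[1], dtype=complex)
    for j in range(delta_I.shape[1]):
        # rows 4*[Re p_k, Im p_k] act on the unknowns [Re E, Im E]
        A = 4.0 * np.stack([probes[:, j].real, probes[:, j].imag], axis=1)
        x, *_ = np.linalg.lstsq(A, delta_I[:, j], rcond=None)
        E[j] = x[0] + 1j * x[1]
    return E
```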

Journal ArticleDOI
Feng Pan1, Wen Xiao1, Shuo Liu1, Fanjing Wang1, Lu Rong1, Rui Li1 
TL;DR: By a proper averaging procedure, the coherent noise of the phase contrast image is reduced significantly.
Abstract: A method to reduce coherent noise in digital holographic phase contrast microscopy is proposed. By slightly shifting the specimen, a series of digital holograms with different coherent noise patterns is recorded. Each hologram is reconstructed individually, while the different phase tilts of the reconstructed complex amplitudes due to the specimen shifts are corrected in the hologram plane by using a numerical parametric lens method. Afterward, the lateral displacements of the phase maps from different holograms are compensated in the image plane by using a digital image registration method. Thus, all phase images have the same distribution but uncorrelated coherent noise patterns. By a proper averaging procedure, the coherent noise of the phase contrast image is reduced significantly. Experimental results are given to confirm the proposed method.
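Once the phase maps are registered, the noise reduction itself is a simple average. A minimal sketch that averages unit phasors rather than raw phase values, which avoids artifacts at 2π wrap-arounds; the tilt correction and lateral registration are assumed to have been done upstream.

```python
import numpy as np

def average_phase_maps(phase_maps):
    """Average registered phase images carrying uncorrelated coherent noise.

    phase_maps : (K, H, W) registered, tilt-corrected phase maps [radians]
    """
    phasors = np.exp(1j * np.asarray(phase_maps))     # avoid 2*pi wrap issues
    return np.angle(phasors.mean(axis=0))             # noise-reduced phase
```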

Journal ArticleDOI
TL;DR: In this article, the authors derived an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps, which is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells.
Abstract: We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^(-3/4) dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.
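The baseline IRS method that the paper reinterprets can be sketched in a few lines: shoot a regular grid of rays through the lens equation and histogram where they land in the source plane. The sketch below assumes point-mass lenses in Einstein-radius units; it is the brute-force reference method, not IPM itself.

```python
import numpy as np

def irs_magnification_map(lens_pos, lens_mass, n_rays=1000, half=2.0, npix=200):
    """Inverse ray shooting: collected rays per pixel trace the magnification.

    Lens equation (Einstein radii): y = x - sum_i m_i (x - x_i) / |x - x_i|^2
    """
    g = np.linspace(-half, half, n_rays)
    x = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)   # image-plane rays
    y = x.copy()
    for xi, mi in zip(lens_pos, lens_mass):
        d = x - np.asarray(xi)
        y -= mi * d / ((d ** 2).sum(1, keepdims=True) + 1e-12)
    H, _, _ = np.histogram2d(y[:, 0], y[:, 1], bins=npix,
                             range=[[-half, half], [-half, half]])
    return H / (n_rays / npix) ** 2        # normalize by unlensed ray density
```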

Journal ArticleDOI
TL;DR: A Talbot-Lau interferometer using two-dimensional gratings and a conventional x-ray tube has been used to investigate a phase-contrast imaging technique that is sensitive to phase gradients in two orthogonal directions and found that the choice of phase retrieval method made little difference in image blur.
Abstract: A Talbot–Lau interferometer using two-dimensional gratings and a conventional x-ray tube has been used to investigate a phase-contrast imaging technique that is sensitive to phase gradients in two orthogonal directions. Fourier analysis of Moiré fringe patterns was introduced to obtain differential phase images and scattering images from a single exposure. Two-dimensional structures of plastic phantoms and characteristic features of soft tissue were clearly obtained at 17.5 keV. The phase-stepping technique was also examined to investigate the spatial resolution of different phase retrieval methods. In the presented setup we found that the choice of phase retrieval method made little difference in image blur, and a large effective source size was found to give a high intensity in the image plane.
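The single-exposure retrieval follows the standard Fourier fringe-analysis recipe: isolate one carrier sideband in the 2-D spectrum, shift it to the origin, and take the phase of the inverse transform. A minimal sketch of that standard method; the carrier position and window size are assumptions supplied by the caller, and repeating it for the second orthogonal carrier yields the other phase-gradient direction.

```python
import numpy as np

def fourier_fringe_phase(img, carrier, halfwidth):
    """Differential phase from a moiré fringe pattern via sideband demodulation.

    carrier   : (fy, fx) integer offset of the +1 order from the DC term
    halfwidth : half-size of the rectangular window around the sideband
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    fy, fx = cy + carrier[0], cx + carrier[1]
    win = np.zeros_like(F)
    win[fy - halfwidth:fy + halfwidth, fx - halfwidth:fx + halfwidth] = 1.0
    side = F * win                                     # isolate the +1 order
    side = np.roll(side, (-carrier[0], -carrier[1]), axis=(0, 1))  # drop carrier
    return np.angle(np.fft.ifft2(np.fft.ifftshift(side)))
```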

Patent
10 Aug 2011
TL;DR: In this paper, a video monitoring system is presented consisting of a first image pick-up system, a second image pick-up system and a control system, the latter comprising an image acquisition module, a foreground extraction module and a coordinate conversion module.
Abstract: The invention provides a video monitoring system, which comprises a first image pick-up system, a second image pick-up system and a control system. The first image pick-up system comprises one or more cameras for shooting a wide-angle video within a large-scene vision field; the second image pick-up system comprises one or more pan/tilt/zoom (PTZ) cameras for shooting a local video within the large-scene vision field; and the control system comprises an image acquisition module, a foreground extraction module and a coordinate conversion module. The image acquisition module is used for receiving the wide-angle video shot by the first image pick-up system; the foreground extraction module is used for extracting a target of interest from the wide-angle video; and the coordinate conversion module is used for converting a two-dimensional coordinate (x, y) of the projection of an arbitrary point shot by the first image pick-up system in the large-scene vision field on an image plane of the wide-angle video shot by the first image pick-up system into a vertical altitude angle theta and a horizontal azimuth angle phi when one selected PTZ camera in the second image pick-up system is aligned with the arbitrary point in a picture center by a coordinate conversion mechanism, wherein the coordinate conversion mechanism is established by at least three arbitrary points selected randomly in the large-scene vision field.

Journal ArticleDOI
TL;DR: This study is able to represent the projection of 3D points on a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for image and 3D points, and shows how to decompose it to obtain intrinsic and extrinsic parameters.
Abstract: In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We are able to represent the projection of 3D points on a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for image and 3D points. This projection matrix can be computed from 3D-2D correspondences (minimum 20 points distributed in three different planes). We show how to decompose it to obtain intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model which considers that every central catadioptric system can be modeled using two projections, one from 3D points to a unitary sphere and then a perspective projection from the sphere to the image plane. We test our method both with simulations and real images, and we analyze the results performing a 3D reconstruction from two omnidirectional images.
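The two-projection sphere model the calibration rests on is compact enough to sketch directly. A minimal forward projection with mirror parameter xi (the unified central catadioptric model); the paper's lifted-coordinate 6×10 matrix formulation is not reproduced here.

```python
import numpy as np

def sphere_model_project(X, xi, K):
    """Project 3-D points with the unified sphere camera model.

    X  : (N, 3) points in camera coordinates
    xi : mirror parameter (0 -> perspective, 1 -> parabolic mirror)
    K  : (3, 3) intrinsic matrix
    """
    Xs = X / np.linalg.norm(X, axis=1, keepdims=True)  # 1) project onto sphere
    m = Xs / (Xs[:, 2:3] + xi)                         # 2) perspective projection
    m[:, 2] = 1.0                                      #    from (0, 0, -xi)
    uv = (K @ m.T).T                                   # apply intrinsics
    return uv[:, :2]
```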

Patent
13 Sep 2011
TL;DR: In this article, an image forming system with a user-selectable field-of-view for forming an image of a scene onto an image plane has been proposed, including a plurality of fixed focal length illumination lenses having two or more different focal lengths and one or more light emitters positioned behind each of the illumination lenses.
Abstract: A camera system having an electronic flash with a variable illumination angle, comprising: an image forming system having a user-selectable field-of-view for forming an image of a scene onto an image plane; an electronic flash system including a plurality of fixed focal length illumination lenses having two or more different focal lengths and one or more light emitters positioned behind each of the illumination lenses, the light emitters being positioned relative to their respective illumination lenses to provide two or more different illumination angles onto the scene; and a flash controller that selectively fires different subsets of the light emitters responsive to the selected field-of-view of the image forming system.

Journal ArticleDOI
TL;DR: In this article, a nonlinear observer is used to estimate the 3D motion of the object online, and the Lyapunov method is employed to prove asymptotic convergence of the image errors.
Abstract: This paper presents a new controller for locking a moving object in 3-D space at a particular position (for example, the center) on the image plane of a camera mounted on a robot by actively moving the camera. The controller is designed to cope with both the highly nonlinear robot dynamics and unknown motion of the object. Based on the fact that the unknown position of the moving object appears linearly in the closed-loop dynamics of the system if the depth-independent image Jacobian is used, we developed a nonlinear observer to estimate the 3-D motion of the object online. With a full consideration of dynamic responses of the robot manipulator, we employ the Lyapunov method to prove asymptotic convergence of the image errors. Experimental results are used to demonstrate the performance of the proposed approach.

Patent
09 Feb 2011
TL;DR: In this article, a system for providing an adjustable depth of field in a photographic image is described, which comprises a plurality of buffers, each configured to store an image associated with a different wavelength of light, each of the images having a different focal plane related to the associated wavelength.
Abstract: A system is provided for providing an adjustable depth of field in a photographic image. The system comprises a plurality of buffers, each configured to store an image associated with a different wavelength of light, each of the images having a different focal plane related to the associated wavelength. The system further comprises an algorithm configured to accept an input specifying the depth of field and a focal plane and further configured to produce a photograph with the specified depth of field and focal plane, wherein the algorithm applies the specified depth of field around the specified focal plane, the specified focal plane being associated with a focal plane of one of the images stored in one of the buffers.

Patent
Huang Jyun-Hao1
28 Sep 2011
TL;DR: In this article, a method is proposed for processing an image captured by a fisheye lens of an image capture device, which obtains a point (Px, Py) from an object plane of the fisheye lens, calculates a first projection point (Fx*, Fy*, Fz*) of the obtained point on a first image plane of a virtual lens, then calculates a second projection point (Fx, Fy) of that point on a second image plane of the fisheye lens, and obtains transforming formulae between (Px, Py) and (Fx, Fy).
Abstract: A method for processing an image captured by a fisheye lens of an image capture device. The method obtains a point (Px, Py) from an object plane of the fisheye lens, calculates a first projection point (Fx*, Fy*, Fz*) of the obtained point (Px, Py) on a first image plane of a virtual lens, calculates a second projection point (Fx, Fy) of the point (Fx*, Fy*, Fz*) on a second image plane of the fisheye lens, and obtains transforming formulae between (Px, Py) and (Fx, Fy). The method further obtains a back-projection point for each point of the captured image on the object plane of the fisheye lens according to the transforming formulae, and creates an updated image of the specified scene from the back-projection points.
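For intuition, a generic equidistant fisheye forward mapping (r = f·theta) illustrates the kind of transforming formulae involved; the patent's actual projection chain through the virtual lens is not reproduced, and the equidistant model is an assumption.

```python
import numpy as np

def equidistant_fisheye(Px, Py, Pz, f):
    """Map a 3-D point to fisheye coordinates (equidistant model r = f*theta)."""
    theta = np.arctan2(np.hypot(Px, Py), Pz)   # angle from the optical axis
    phi = np.arctan2(Py, Px)                   # azimuth around the axis
    r = f * theta                              # radial position on the image plane
    return r * np.cos(phi), r * np.sin(phi)    # (Fx, Fy)
```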

Patent
22 Sep 2011
TL;DR: In this paper, a digital camera system configurable to operate in a low-resolution refocusable mode and a high-resolution non-refocusability mode is presented, where an adaptor is inserted between the imaging lens and the image sensor.
Abstract: A digital camera system configurable to operate in a low-resolution refocusable mode and a high-resolution non-refocusable mode comprising: a camera body; an image sensor mounted in the camera body having a plurality of sensor pixels for capturing a digital image; an imaging lens for forming an image of a scene onto an image plane, the imaging lens having an aperture; and an adaptor that can be inserted between the imaging lens and the image sensor to provide the low-resolution refocusable mode and can be removed to provide the high-resolution non-refocusable mode, the adaptor including a microlens array with a plurality of microlenses; wherein when the adaptor is inserted to provide the low-resolution refocusable mode, the microlens array is positioned between the imaging lens and the image sensor.

Patent
10 Jun 2011
TL;DR: In this article, a virtual object display assembly is provided with: a display element with a screen operable to display an image of an object; a beam splitter positioned at an angle between the screen and the set assembly; and a mask display element positioned within the set-assembly at an image plane associated with the displayed image.
Abstract: An apparatus for displaying virtual objects within a physical set or scene. The apparatus includes a set assembly including at least one physical object, e.g., a background prop. A virtual object display assembly is provided with: a display element with a screen operable to display an image of an object; a beam splitter positioned at an angle between the screen and the set assembly; and a mask display element positioned within the set assembly at an image plane associated with the displayed image. The mask display element operates, when the display element operates to display the displayed image, to display a mask corresponding to the displayed image. The beam splitter is transmissive and reflective of light. The mask display element is positioned between the beam splitter and the physical object, and the displayed mask occludes a portion of the physical object and casts a shadow within the set assembly.

Proceedings ArticleDOI
18 Nov 2011
TL;DR: The proposed approach combines monocular detection with stereo-vision for on-road vehicle localization and tracking for driver assistance, fusing information from both the monocular and stereo modalities.
Abstract: In this paper, we introduce a novel stereo-monocular fusion approach to on-road localization and tracking of vehicles. Utilizing a calibrated stereo-vision rig, the proposed approach combines monocular detection with stereo-vision for on-road vehicle localization and tracking for driver assistance. The system initially acquires synchronized monocular frames and calculates depth maps from the stereo rig. The system then detects vehicles in the image plane using an active learning-based monocular vision approach. Using the image coordinates of detected vehicles, the system then localizes the vehicles in real-world coordinates using the calculated depth map. The vehicles are tracked both in the image plane, and in real-world coordinates, fusing information from both the monocular and stereo modalities. Vehicles' states are estimated and tracked using Kalman filtering. Quantitative analysis of tracks is provided. The full system takes 46 ms to process a single frame.
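The per-vehicle Kalman filtering can be sketched with a constant-velocity model over the stereo-derived position measurements. A minimal predict/update cycle; the state vector and the noise settings below are hypothetical, not the paper's tuning.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1.0, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x : (4,) state [px, py, vx, vy];  P : (4, 4) state covariance
    z : (2,) measured vehicle position from the stereo depth map
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity motion
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # position-only measurement
    Q = q * np.eye(4); R = r * np.eye(2)           # hypothetical noise levels
    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```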

Journal ArticleDOI
01 Apr 2011
TL;DR: A new automatic diffusion curve coloring algorithm is introduced, and the diffusion curve representation itself is extended to store any number of attributes in an image, as demonstrated with image stippling and hatching applications.
Abstract: Diffusion curves are a powerful vector graphic representation that stores an image as a set of 2D Bézier curves with colors defined on either side. These colors are diffused over the image plane, resulting in smooth color regions as well as sharp boundaries. In this paper, we introduce a new automatic diffusion curve coloring algorithm. We start by defining a geometric heuristic for the maximum density of color control points along the image curves. Following this, we present a new algorithm to set the colors of these points so that the resulting diffused image is as close as possible to a source image in a least squares sense. We compare our coloring solution to the existing one, which fails for textured regions, small features, and inaccurately placed curves. The second contribution of the paper is to extend the diffusion curve representation to include texture details based on Gabor noise. Like the curves themselves, the defined texture is resolution independent and represented compactly. We define methods to automatically make an initial guess for the noise texture, and we provide intuitive manual controls to edit the parameters of the Gabor noise. Finally, we show that the diffusion curve representation itself extends to storing any number of attributes in an image, and we demonstrate this functionality with image stippling and hatching applications.

Journal ArticleDOI
TL;DR: In this paper, a method for the calibration of a 3D laser scanner for robotic applications is proposed, based on the modeling of the geometrical relationship between the 3D coordinates of the laser stripe on the target and its digital coordinates in the image plane.
Abstract: The calibration of a three-dimensional digitizer is a very important issue, considering that good quality, reliability, accuracy and high repeatability are the features a good digitizer is expected to have. The aim of this paper is to propose a new method for the calibration of a 3-D laser scanner, mainly for robotic applications. The acquisition system consists of a laser emitter and a webcam with fixed relative positions. In addition, a cylindrical lens is fitted to the laser housing so that it can project a light plane. An optical filter was also used in order to segment the laser stripe from the rest of the scene. For the calibration procedure, a digital micrometer was used to move a target with known dimensions. The calibration method is based on modeling the geometrical relationship between the 3-D coordinates of the laser stripe on the target and its digital coordinates in the image plane. By this method it is possible to calibrate the intrinsic parameters of the video system, the position of the image plane and the laser plane in a given frame, all at the same time.
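Once the intrinsics and the laser plane are known, each stripe pixel's 3-D point is the intersection of its viewing ray with the laser plane. A minimal triangulation sketch under that geometry; producing K and the plane coefficients is exactly the calibration problem the paper addresses and is not reproduced here.

```python
import numpy as np

def stripe_pixel_to_3d(u, v, K, plane):
    """Intersect the viewing ray of pixel (u, v) with the laser plane.

    K     : (3, 3) camera intrinsic matrix
    plane : (4,) laser plane [a, b, c, d], a*X + b*Y + c*Z + d = 0, camera frame
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    n, d = plane[:3], plane[3]
    t = -d / (n @ ray)                              # ray parameter at the plane
    return t * ray                                  # 3-D point on the target
```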

Journal ArticleDOI
TL;DR: The proposed method employs a refractive index medium between the elemental image plane and the lens array for viewing angle enhancement in the InIm and shows that the viewing angle is doubled.
Abstract: In the integral imaging (InIm) system, the viewing angle is limited by the size and focal length of the elemental lens. In this regard, we propose a new method for viewing angle enhancement in the InIm. The proposed method employs a refractive index medium between the elemental image plane and the lens array. The viewing angle enhanced InIm display is analyzed in terms of its imaging properties. The experimental result shows that the viewing angle is doubled.
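A common first-order model makes the mechanism concrete: the full viewing angle of an InIm display is roughly 2·arctan(p/2g) for elemental lens pitch p and gap g, and a medium of refractive index n shrinks the effective gap to g/n. The sketch below uses that paraxial model, which is an assumption; the paper's analysis is more detailed.

```python
import numpy as np

def inim_viewing_angle_deg(pitch, gap, n_medium=1.0):
    """First-order InIm viewing angle: 2 * arctan(pitch / (2 * gap / n_medium))."""
    return np.degrees(2.0 * np.arctan(pitch / (2.0 * gap / n_medium)))

# e.g. inim_viewing_angle_deg(1.0, 3.0)      -> ~18.9 degrees
#      inim_viewing_angle_deg(1.0, 3.0, 2.0) -> ~36.9 degrees (roughly doubled)
```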

Journal ArticleDOI
TL;DR: A unified interpretation for the real and pseudo moiré phenomena using the concept of biased and unbiased frequency pairs in the Fourier spectrum is given.
Abstract: A unified interpretation for the real and pseudo moiré phenomena using the concept of biased and unbiased frequency pairs in the Fourier spectrum is given. Intensity modulations are responsible for pseudo moiré appearance in the image plane, rather than the average intensity variations dominating real moiré. Detection of pseudo moiré necessitates resolving superimposed structures in the image plane. In the case of the product type superimposition generating both real and pseudo moiré, our interpretation utilizes the Fourier domain information only. The moiré pattern characteristics such as an effective carrier, modulation and bias intensity distributions can be readily predicted. We corroborate them using two-dimensional continuous wavelet transform and fast adaptive bidimensional empirical mode decomposition methods as complementary image processing tools.

Patent
08 Apr 2011
TL;DR: In this paper, a columnar space model is used to generate an output image based on input images obtained by image-taking parts, which is a combination of a plurality of space model parts each having a reference axis.
Abstract: An image generation device generates an output image based on input images obtained by image-taking parts. A coordinates correspondence part causes coordinates on a columnar space model arranged to surround a body to be operated to correspond to coordinates on input image planes on which the input images are positioned, respectively. An output image generation part causes values of the coordinates on the input image planes to correspond to values of the coordinates on an output image plane on which the output image is positioned through coordinates on the columnar space model, which is a combination of a plurality of space model parts each having a reference axis. The space model corresponds to a pair of adjacent image-taking parts among the image-taking parts, and an optical axis of each of the pair of image-taking parts intersects with the reference axis of a corresponding one of the space model parts.

Patent
Xuerui Zhang1, Ning Bi1, Yingyong Qi1
28 Nov 2011
TL;DR: In this article, a 3D mixed-reality system combines a real 3D image or video captured by a camera with a virtual 3D object rendered by a computer or other machine.
Abstract: A three dimensional (3D) mixed reality system combines a real 3D image or video, captured by a 3D camera for example, with a virtual 3D image rendered by a computer or other machine to render a 3D mixed-reality image or video. A 3D camera can acquire two separate images (a left and a right) of a common scene, and superimpose the two separate images to create a real image with a 3D depth effect. The 3D mixed-reality system can determine a distance to a zero disparity plane for the real 3D image, determine one or more parameters for a projection matrix based on the distance to the zero disparity plane, render a virtual 3D object based on the projection matrix, combine the real image and the virtual 3D object to generate a mixed-reality 3D image.