
Showing papers on "Real image published in 1992"


Journal ArticleDOI
TL;DR: A camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions is presented and a type of measure is introduced that can be used to directly evaluate the performance of calibration and compare calibrations among different systems.
Abstract: A camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions is presented. The proposed calibration procedure consists of two steps: (1) the calibration parameters are estimated using a closed-form solution based on a distribution-free camera model; and (2) the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. The authors introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of the calibration procedure are tested with both synthetic data and real images taken by tele- and wide-angle lenses.
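The nonlinear refinement in step (2) optimizes over exactly these distortion terms. As a minimal sketch (not the paper's implementation), the combined radial, decentering, and thin-prism model can be applied to ideal normalized image coordinates as follows; the coefficient names k1, p1, p2, s1, s2 follow common usage and are assumptions, not necessarily the paper's notation:

```python
import numpy as np

def distort(x, y, k1=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Apply radial (k1), decentering (p1, p2), and thin-prism (s1, s2)
    distortion to ideal normalized image coordinates x, y."""
    r2 = x * x + y * y
    dx = k1 * x * r2 + p1 * (3 * x * x + y * y) + 2 * p2 * x * y + s1 * r2
    dy = k1 * y * r2 + p2 * (x * x + 3 * y * y) + 2 * p1 * x * y + s2 * r2
    return x + dx, y + dy
```

In a calibration loop, these coefficients would be estimated jointly with the projection parameters by minimizing the mean-square discrepancy the abstract describes.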

1,896 citations


Journal ArticleDOI
TL;DR: In this article, deformable templates are used to detect and describe facial features; an energy function links edges, peaks, and valleys in the image intensity to corresponding properties of a parameterized template.
Abstract: We propose a method for detecting and describing features of faces using deformable templates. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the feature. We illustrate this method by showing deformable templates detecting eyes and mouths in real images. We also demonstrate their ability to track features.
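A toy version of the dynamic fitting loop is sketched below, assuming a precomputed edge-strength map and a one-circle "iris" template with parameters (cx, cy, r); the paper's actual eye template has many more parameters and also uses peak and valley potentials:

```python
import numpy as np
from scipy.optimize import minimize

def template_energy(params, edge_map):
    """Negative mean edge strength sampled along a circular template:
    low energy means the template lies on strong edges."""
    cx, cy, r = params
    t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, edge_map.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, edge_map.shape[0] - 1)
    return -edge_map[ys, xs].mean()

def fit_template(edge_map, x0):
    # The template deforms by adjusting its parameters to minimize the
    # energy; Nelder-Mead avoids the need for image derivatives.
    return minimize(template_energy, x0, args=(edge_map,), method="Nelder-Mead").x
```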

1,375 citations


Book
03 Jan 1992
TL;DR: The experimental work shows that for good constancy, a color constancy system will need to adjust the gain of the receptors it employs in a fashion analogous to adaptation in humans.
Abstract: Color constancy is the skill by which it is possible to tell the color of an object even under a colored light. I interpret the color of an object as its color under a fixed canonical light, rather than as a surface reflectance function. This leads to an analysis that shows two distinct sets of circumstances under which color constancy is possible. In this framework, color constancy requires estimating the illuminant under which the image was taken. The estimate is then used to choose one of a set of linear maps, which is applied to the image to yield a color descriptor at each point. This set of maps is computed in advance. The illuminant can be estimated using image measurements alone, because, given a number of weak assumptions detailed in the text, the color of the illuminant is constrained by the colors observed in the image. This constraint arises from the fact that surfaces can reflect no more light than is cast on them. For example, if one observes a patch that excites the red receptor strongly, the illuminant cannot have been deep blue. Two algorithms are possible using this constraint, corresponding to different assumptions about the world. The first algorithm, Crule, will work for any surface reflectance. Crule corresponds to a form of coefficient rule, but obtains the coefficients by using constraints on illuminant color. The set of illuminants for which Crule will be successful depends strongly on the choice of photoreceptors: for narrowband photoreceptors, Crule will work in an unrestricted world. The second algorithm, Mwext, requires that both surface reflectances and illuminants be chosen from finite dimensional spaces; but under these restrictive conditions it can recover a large number of parameters in the illuminant, and is not an attractive model of human color constancy. Crule has been tested on real images of Mondriaans, and works well. I show results for Crule and for the Retinex algorithm of Land (Land 1971; Land 1983; Land 1985) operating on a number of real images. The experimental work shows that for good constancy, a color constancy system will need to adjust the gain of the receptors it employs in a fashion analogous to adaptation in humans.
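Crule itself obtains the coefficients from constraints on illuminant color; the simpler coefficient rule it generalizes can be sketched in a few lines (a white-patch estimate, shown here only to make the diagonal-map machinery concrete, not Forsyth's algorithm):

```python
import numpy as np

def coefficient_rule(image):
    """White-patch coefficient rule, a simplified stand-in for Crule:
    since surfaces reflect no more light than is cast on them, the
    per-channel maximum bounds the illuminant; dividing by it is the
    diagonal (coefficient) map to the canonical light.
    image: H x W x 3 array of linear RGB values."""
    illum = image.reshape(-1, 3).max(axis=0)
    return image / np.maximum(illum, 1e-12)
```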

548 citations


Journal ArticleDOI
TL;DR: In this paper, a method for estimating the 3D shape of objects and the motion of the camera from a stream of images is proposed, based on the Singular Value Decomposition.
Abstract: We propose a method for estimating the three-dimensional shape of objects and the motion of the camera from a stream of images. The goal is to give a robot the ability to localize itself with respect to the environment, draw a map of its own surroundings, and perceive the shape of objects in order to recognize or grasp them. Solutions proposed in the past were so sensitive to noise as to be of little use in practical applications. This sensitivity is closely related to the viewer-centered representation of scene geometry known as a depth map, and to the use of stereo triangulation to infer depth from the images. In fact, when objects are more than a few focal lengths away from the camera, parallax effects become subtle, and even a small amount of noise in the images produces large errors in the final shape and motion results. In our formulation, we represent shape in object-centered coordinates, and model image formation by orthographic, rather than perspective projection. In this way, depth, the distance between viewer and scene, plays no role, and the problem's sensitivity to noise is critically reduced. We collect the image coordinates of P feature points tracked through F frames into a 2F x P measurement matrix. If these coordinates are measured with respect to their centroid, we show that the measurement matrix can be written as the product of two matrices that represent the camera rotation and the positions of the feature points in space. The bilinear nature of this model, and its matrix formulation, lead to a factorization method for the computation of shape and motion, based on the Singular Value Decomposition. Previous solutions assumed motion to be smooth, in one form or another, in an attempt to constrain the solution and achieve reliable convergence. The factorization method, on the other hand, makes no assumption about the camera motion, and can deal with the large jumps from frame to frame found, for instance, in sequences taken with a hand-held camera. To make the factorization method into a working system, we solve several corollary problems: how to select image features, how to track them from frame to frame, how to deal with occlusions, and how to cope with the noise and artifacts that corrupt images recorded with ordinary equipment. We test the entire system with a series of experiments on real images taken both in the lab, for an accurate performance evaluation, and outdoors, to demonstrate the applicability of the method in real-life situations.
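The core of the factorization method fits in a few lines of linear algebra. A minimal sketch follows; the metric upgrade that resolves the remaining 3 x 3 affine ambiguity via the rotation-orthonormality constraints is omitted:

```python
import numpy as np

def factorize(W):
    """Factorization sketch for orthographic projection.
    W: 2F x P measurement matrix of tracked feature coordinates."""
    W = W - W.mean(axis=1, keepdims=True)       # register to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Rank-3 truncation: the discarded singular values carry the noise.
    M = U[:, :3] * np.sqrt(s[:3])               # 2F x 3 camera (rotation) matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3]        # 3 x P shape matrix
    return M, S                                  # W ≈ M @ S, up to a 3x3 affine A
```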

328 citations


Journal ArticleDOI
01 Jan 1992
TL;DR: A computational vision approach is presented for the estimation of 2-D translation, rotation, and scale from two partially overlapping images; the approach yields a fast method that produces good results even when large rotation and translation have occurred between the two frames and the images are devoid of significant features.
Abstract: A computational vision approach is presented for the estimation of 2-D translation, rotation, and scale from two partially overlapping images. The approach results in a fast method that produces good results even when large rotation and translation have occurred between the two frames and the images are devoid of significant features. An illuminant direction estimation method is first used to obtain an initial estimate of camera rotation. A small number of feature points are then located, using a Gabor wavelet model for detecting local curvature discontinuities. An initial estimate of scale and translation is obtained by pairwise matching of the feature points detected from both frames. Finally, hierarchical feature matching is performed to obtain an accurate estimate of translation, rotation and scale. A method for error analysis of matching results is also presented. Experiments with synthetic and real images show that this algorithm yields accurate results when the scales of the images differ by up to 10%, the overlap between the two frames is as small as 23%, and the camera rotation between the two frames is significant. Experimental results and applications are presented.
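Once feature points have been paired across the two frames, the 2-D similarity (scale, rotation, translation) follows from a closed-form least-squares fit. The sketch below uses a standard Procrustes-style solver as a stand-in for the paper's hierarchical matching stage:

```python
import numpy as np

def similarity_from_matches(p, q):
    """Least-squares 2-D similarity with q ≈ scale * R @ p + t,
    from matched point sets p, q of shape (N, 2)."""
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mp, q - mq
    U, s, Vt = np.linalg.svd(qc.T @ pc)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    scale = (s[0] + d * s[1]) / (pc ** 2).sum()
    t = mq - scale * R @ mp
    return scale, R, t
```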

256 citations


Journal ArticleDOI
TL;DR: This study makes it possible to design an algorithm for detecting boundaries in the images that are likely to be extremal, and provides a better understanding of the relationship between the apparent and real shape of a 3-D object as well as algorithms for reconstructing the local shape of such an object along the rims.
Abstract: The extremal boundaries of 3-D curved objects are the images of special curves drawn on the object, called rims. They are viewpoint dependent and characterized by the fact that the optical rays of their points are tangential to the surface of the object. The mathematics of the relationship between the extremal boundaries and the surface of the object is studied. This study makes it possible to design an algorithm for detecting those boundaries in the images that are likely to be extremal. Once this has been done, one can reconstruct the rims and compute the differential properties of the surface of the object along them up to the second order. If a qualitative description is sufficient, the sign of the Gaussian curvature of the surface along the rim can be computed in a much simpler way. Experimental results are presented on synthetic and real images. The work provides a better understanding of the relationship between the apparent and real shape of a 3-D object as well as algorithms for reconstructing the local shape of such an object along the rims.

233 citations


Journal ArticleDOI
TL;DR: Several generalizations of the fuzzy c-shells (FCS) algorithm are presented for characterizing and detecting clusters that are hyperellipsoidal shells and show that the AFCS algorithm requires less memory than the HT-based methods, and it is at least an order of magnitude faster than the HT approach.
Abstract: Several generalizations of the fuzzy c-shells (FCS) algorithm are presented for characterizing and detecting clusters that are hyperellipsoidal shells. An earlier generalization, the adaptive fuzzy c-shells (AFCS) algorithm, is examined in detail and is found to have global convergence problems when the shapes to be detected are partial. New formulations are considered wherein the norm inducing matrix in the distance metric is unconstrained in contrast to the AFCS algorithm. The resulting algorithm, called the AFCS-U algorithm, performs better for partial shapes. Another formulation based on the second-order quadrics equation is considered. These algorithms can detect ellipses and circles in 2D data. They are compared with the Hough transform (HT)-based methods for ellipse detection. Existing HT-based methods for ellipse detection are evaluated, and a multistage method incorporating the good features of all the methods is used for comparison. Numerical examples of real image data show that the AFCS algorithm requires less memory than the HT-based methods, and it is at least an order of magnitude faster than the HT approach.
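To make the shell-clustering idea concrete, here is a deliberately simplified fuzzy c-shells loop for circular shells in 2-D, using easy fixed-point prototype updates in place of the Newton iterations of FCS/AFCS and ignoring the norm-inducing-matrix generalizations the paper studies; it can be sensitive to initialization, which is precisely the kind of convergence behavior the paper analyzes:

```python
import numpy as np

def fuzzy_c_shells(X, c=2, m=2.0, iters=50, seed=0):
    """Toy fuzzy c-shells for circles. Shell distance of a point x to
    prototype (center, radius) is (||x - center|| - radius)^2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)].astype(float)
    radii = np.full(c, X.std())
    for _ in range(iters):
        d = np.linalg.norm(X[None] - centers[:, None], axis=2)    # c x N
        shell = (d - radii[:, None]) ** 2 + 1e-12
        u = shell ** (-1.0 / (m - 1.0))                            # FCM-style memberships
        u /= u.sum(axis=0, keepdims=True)
        w = u ** m
        radii = (w * d).sum(axis=1) / w.sum(axis=1)                # exact minimizer for r
        for i in range(c):                                         # fixed-point step for centers
            g = w[i] * (1.0 - radii[i] / d[i])
            centers[i] = (g[:, None] * X).sum(axis=0) / g.sum()
    return centers, radii, u
```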

205 citations


30 Oct 1992
TL;DR: Techniques are presented for approximating the common global illumination for RVIs and CGIs, assuming that some elements of the real-world scene geometry and the common viewing parameters are known.
Abstract: The ability to merge a real video image (RVI) with a computer-generated image (CGI) enhances the usefulness of both. To go beyond "cut and paste" and chroma-keying, and merge the two images successfully, one must solve the problems of common viewing parameters, common visibility and common illumination. The results can be dubbed Computer Augmented Reality (CAR). We present in this paper techniques for approximating the common global illumination for RVIs and CGIs, assuming some elements of the scene geometry of the real world and common viewing parameters are known. Since the real image is a projection of the exact solution for the global illumination in the real world (done by nature), we approximate the global illumination of the merged image by making the RVI part of the solution to the common global illumination computation. The objects in the real scene are replaced by a few boxes covering them; the image intensity of the RVI is used as the initial surface radiosity of the visible part of the boxes; the surface reflectance of the boxes is approximated by subtracting an estimate of the illuminant intensity based on the concept of ambient light; finally, global illumination using a classic radiosity computation is used to render the surfaces of the CGIs with respect to their new environment and to calculate the amount of image intensity correction needed for surfaces of the real image. An example animation testing these techniques has been produced. Most of the geometric problems have been solved in a relatively ad hoc manner. The viewing parameters were extracted by interactive matching of the synthetic scene with the RVIs. The visibility is determined by the relative positions of the "blocks" representing the real objects and the computer-generated objects, and a moving computer-generated light has been inserted. The results of the merging are encouraging and would be effective for many applications.

176 citations


Patent
07 Sep 1992
TL;DR: A waveguide virtual image display as discussed by the authors provides a real image at an inlet of an optical waveguide with diffractive optical elements that magnify and filter the real image and produce a virtual image at a viewing aperture.
Abstract: A waveguide virtual image display (15) including image generation apparatus (22) that provides a real image at an inlet of an optical waveguide (20). The real image is reflected a plurality of times within the optical waveguide (20) by diffractive optical elements (23, 25, 26, 27, 28) that magnify and filter the real image and produce a virtual image at a viewing aperture.

170 citations


Book ChapterDOI
19 May 1992
TL;DR: A method is described for eliminating reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface; the resulting Poisson equation is solved directly by the Fourier transform method.
Abstract: Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10, 9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.
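A sketch of the reintegration step, assuming the abrupt-chromaticity mask has already been computed; the paper's curl-correction, which forces the clipped gradient field to be conservative, is omitted here:

```python
import numpy as np

def shading_from_log_luminance(log_lum, chroma_change):
    """Zero the log-luminance gradient at reflectance (chromaticity)
    edges, then reintegrate by solving the Poisson equation with the
    Fourier transform method."""
    gx = np.gradient(log_lum, axis=1)
    gy = np.gradient(log_lum, axis=0)
    gx[chroma_change] = 0.0                  # remove albedo steps
    gy[chroma_change] = 0.0
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    H, W = log_lum.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    lam = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    F = np.fft.fft2(div)
    F[0, 0] = 0.0                            # shading is defined up to a constant
    lam[0, 0] = 1.0                          # avoid dividing by zero at DC
    return np.real(np.fft.ifft2(F / lam))
```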

165 citations


Journal ArticleDOI
Karl Rohr1
TL;DR: An analytical approximation of Rohr's parametric model of a certain class of characteristic intensity variations is developed in such a way that function values can be calculated without numerical integration.
Abstract: The parametric model of a certain class of characteristic intensity variations in Rohr (1990, 1992), which is the superposition of elementary model functions, is employed to identify corners in images. Estimates of the sought model parameters, which completely characterize single grey-value structures, are determined by a least-squares fit of the model to the observed image intensities using the Levenberg-Marquardt minimization method. In particular, we develop an analytical approximation of our model such that function values can be calculated without numerical integration. Assuming the blur of the imaging system can be described by Gaussian convolution, our approach permits subpixel localization of the corner position of the unblurred grey-value structures, that is, it reverses the blur of the imaging system. By fitting our model to the original as well as to the smoothed original image, cues can be obtained as to whether the underlying model is an adequate description. Results are shown for real image data.
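The fitting machinery is easy to reproduce for the simplest elementary model function, a step edge blurred by Gaussian convolution; the paper's corner models are superpositions of such functions with more parameters. A hedged sketch:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def blurred_edge(params, X, Y):
    """Ideal step edge blurred by a Gaussian: a (base intensity),
    b (step height), theta (edge normal angle), d (offset), sigma (blur)."""
    a, b, theta, d, sigma = params
    signed = X * np.cos(theta) + Y * np.sin(theta) - d
    return a + b * 0.5 * (1.0 + erf(signed / (np.sqrt(2.0) * sigma)))

def fit_edge(patch, x0):
    """Levenberg-Marquardt fit of the analytic model to observed
    intensities, giving subpixel localization of the structure."""
    H, W = patch.shape
    Y, X = np.mgrid[0:H, 0:W].astype(float)
    resid = lambda p: (blurred_edge(p, X, Y) - patch).ravel()
    return least_squares(resid, x0, method="lm").x
```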

Patent
Brent D. Larson1
31 Dec 1992
TL;DR: In this article, a retroreflective array projection screen is presented for displaying virtual images wherein the apparent distance from the observer to the viewed subject is greater than the distance from the observer to the retroreflective screen.
Abstract: A retroreflective array projection screen for displaying virtual images wherein the apparent distance from the observer to the viewed subject is greater than the distance from the observer to the retroreflective screen. Real images from a source are collimated and then partially reflected onto the retroreflective array. Virtual images are reflected from the array through the beamsplitter to an observer at an exit pupil.

Journal ArticleDOI
TL;DR: A depth estimation algorithm proposed by A.P. Pentland (1987) is generalized and experimental results indicate that the depth estimation errors are kept within 5% of true values on the average when it is applied to real images.
Abstract: A depth estimation algorithm proposed by A.P. Pentland (1987) is generalized. In the proposed algorithm, the raw image data in the vicinity of the edge is used to estimate the depth from defocus. Since no differentiation operation on the image data is required before the optimization process, the method is less sensitive to measurement noise. Furthermore, the edge orientation that was critical in Pentland's approach is not required in this case. The algorithm is applied to synthetic images containing various amounts of noise to test its performance. Experimental results indicate that the depth estimation errors are kept within 5% of true values on average when the algorithm is applied to real images.
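The geometry behind depth from defocus is the thin-lens equation. The sketch below is textbook optics rather than the paper's exact estimator, for the case of an object that focuses behind the sensor plane:

```python
def depth_from_blur(blur_diam, f, aperture, sensor_dist):
    """Thin lens: an object at depth D focuses at v with 1/f = 1/D + 1/v,
    and leaves a blur circle of diameter b = aperture * |s - v| / v on a
    sensor at distance s. For v > s this inverts to v = A*s / (A - b);
    the mirror case (v < s) gives v = A*s / (A + b)."""
    v = aperture * sensor_dist / (aperture - blur_diam)
    return 1.0 / (1.0 / f - 1.0 / v)
```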

Journal ArticleDOI
TL;DR: The dynamic generalized Hough transform (DGHT) provides an efficient feedback mechanism linking the accumulated boundary point evidence and the contributing boundary point data and introduces the opportunity for parallel calculation and accumulation of parameters.
Abstract: Parametric transformation is a powerful tool in shape analysis. Major shortcomings of the technique are excessive storage requirements and computational complexity. Using a standard Hough transform (SHT) algorithm, each image point votes independently for all instances of the shape under detection on which it may lie. In this way a great redundancy of evidence concerning the image is generated. A new family of Hough-like techniques has evolved, the probabilistic Hough transforms (PHTs), which attempt to reduce the generation of redundant information by sampling the image data in various ways. The dynamic generalized Hough transform (DGHT) belongs to this family. It differs from other SHT and PHT algorithms in two fundamental ways. The first difference is that the algorithm selects a single connected point, (x_c, y_c), and uses this point to seed the transformation. The n parameters associated with the shape under detection are calculated using (x_c, y_c) together with sets of (n − 1) randomly sampled image points. In this way voting is constrained to lie on the hypersurface in transform space which would be generated by the standard transformation of (x_c, y_c). The algorithm maps each set of n image points to a single point on this surface. Voting is further restricted by appropriate segmentation of the image data. The second fundamental difference exploits the production of a sparse transform space by projecting the results of the transformation onto the axes of the n-dimensional transform space. Hence if T is the resolution in transform space and n is the number of parameters under detection, then the DGHT reduces memory requirements from T^n to nT and introduces the opportunity for parallel calculation and accumulation of parameters. Essential to the efficient use of the probabilistic techniques are stopping criteria which ensure adequate sampling with respect to the desired detection result and which also give optimum computational savings. A robust stopping criterion is deduced for the DGHT. This is applied to the concurrent detection of circles and ellipses using real image data over a range and variety of noise conditions. It is shown that the DGHT copes well with both occlusion and the effects of correlated noise. In addition, the method provides an efficient feedback mechanism linking the accumulated boundary point evidence and the contributing boundary point data. It achieves this goal automatically with intelligent monitoring of the transformation.
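A DGHT-flavoured sketch for circles (n = 3 parameters) is shown below: one connected edge point seeds every sample, two further points are drawn at random, and each computed parameter votes on its own 1-D axis, so memory is 3T rather than T^3. The paper's segmentation and stopping criterion are omitted, and the parameter range `span` is an assumption (e.g. the image size):

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def dght_circle(edge_pts, n_samples=2000, bins=256, span=512.0, seed=0):
    rng = np.random.default_rng(seed)
    seed_pt = edge_pts[0]                        # the connected seed point (x_c, y_c)
    hists = np.zeros((3, bins))
    for _ in range(n_samples):
        q, r = edge_pts[rng.choice(len(edge_pts), 2, replace=False)]
        circ = circle_from_3pts(seed_pt, q, r)
        if circ is None:
            continue
        for axis, val in enumerate(circ):        # project the vote onto each axis
            b = int(val / span * bins)
            if 0 <= b < bins:
                hists[axis, b] += 1
    return hists.argmax(axis=1) * span / bins    # peak of each 1-D accumulator
```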

Journal ArticleDOI
TL;DR: An algorithm for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern is described, based on the optical and statistical analysis of the distortions.
Abstract: The appearance of a pattern behind a transparent, moving object is distorted by refraction at the moving object's surface. An algorithm for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern is described. This algorithm is based on the optical and statistical analysis of the distortions. It consists of four steps: extraction of optical flow, averaging of each point trajectory obtained from the optical flow sequence, calculation of the surface normal using optical characteristics, and reconstruction of the surface. The algorithm is applied to both synthetic and real images to demonstrate its performance.

Patent
27 Apr 1992
TL;DR: In this article, a prior parametric modeling is carried out of the filming system from real images each containing at least one test point whose geometrical position in real space is known, the position of these test points in the image being transmitted to the computer at the same time as the measured values of captured physical dimensions, characteristic of the geometry and of the optics of the real camera and of its support, for each image.
Abstract: A method of producing images comprising a free combination of real images obtained by a filming system containing a real camera and of synthetic images obtained through a graphics computer equipped with image synthesizing software, wherein: a prior parametric modeling is carried out of the filming system from real images each containing at least one test point whose geometrical position in real space is known, the position of these test points in the image being transmitted to the computer at the same time as the measured values of captured physical dimensions, characteristic of the geometry and of the optics of the real camera and of its support, for each image; then, the abovementioned physical dimensions captured for each image are measured in synchronism with the generation of the real images; the captured dimensions are transmitted in real time and in synchronism with the generation of images to the graphics computer in order to determine the parameters of the model of the filming system; a sequence of synthetic digital images is generated whilst deducing from these parameters the computational characteristics of each synthetic image so as to obtain a perfect geometrical coherence between each real image and each corresponding synthetic image; and part of the real image is mixed with part of the synthetic image.

Proceedings ArticleDOI
03 Jan 1992
TL;DR: An approach to shape-from-shading that is based on a connection with a calculus of variations/optimal control problem is proposed, leading naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent, and does not require regularization.
Abstract: An approach to shape-from-shading that is based on a connection with a calculus of variations/optimal control problem is proposed. An explicit representation corresponding to a shaded image is given for the surface; uniqueness of the surface (under suitable conditions) is an immediate consequence. The approach leads naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent (in many cases, provably convergent to the correct solution), and does not require regularization. Given a continuous image, the algorithm can be proved to converge to the continuous surface solution as the image sampling frequency is taken to infinity. Experimental results are presented for synthetic and real images.

Book
01 Aug 1992
TL;DR: This book describes several approaches to the analysis of moving scenes using information from a stereovision system and shows that motion estimation can be further improved by the explicit modeling of uncertainty in geometric objects.
Abstract: This book describes several approaches to the analysis of moving scenes using information from a stereovision system. In particular two different methods are proposed to deal with long and short sequences of images of an unknown environment including an arbitrary number of rigid and mobile objects. Results obtained from stereovision systems are found to be superior to those from monocular image systems, which are often very sensitive to noise. It is shown that motion estimation can be further improved by the explicit modeling of uncertainty in geometric objects. The techniques developed in this book have been successfully demonstrated with a large number of real images in the context of visual navigation of a mobile robot.

Journal ArticleDOI
TL;DR: A general analytical model, the superposition of elementary model functions, is developed for the structural grey-value transitions in an image; it is shown that this parametric model agrees fairly well with real image intensities.

Book ChapterDOI
19 May 1992
TL;DR: The use of regions as primitives for tracking makes it possible to directly handle consistent object-level entities; a motion-based segmentation process based on normal flows and first-order motion models provides instantaneous measurements.
Abstract: This paper addresses the problem of object tracking in a sequence of monocular images. The use of regions as primitives for tracking makes it possible to directly handle consistent object-level entities. A motion-based segmentation process based on normal flows and first-order motion models provides instantaneous measurements. The shape, position and motion of each region present in such segmented images are estimated with a recursive algorithm along the sequence. Occlusion situations can be handled. We have carried out experiments on sequences of real images depicting complex outdoor scenes.

Patent
25 Mar 1992
TL;DR: In this paper, a system for producing the sensation of 3D viewing by projecting and displaying a foreground image and a background image onto separate projection screens positioned on a single viewing axis is disclosed.
Abstract: Systems and methods for producing the sensation of 3-dimensional viewing by projecting and displaying a foreground image and a background image onto separate projection screens positioned on a single viewing axis are disclosed. The images are viewed simultaneously and appear as a single image having depth characteristics. The foreground image screen and the background image screen are provided with image transmission, reflection, and absorption characteristics which allow for simultaneous viewing of more than one screen without image "bleed" or excessive loss of image intensity.

Journal ArticleDOI
TL;DR: It is shown that the spatial structure of the viewed motion of rigid objects, with the exception of surface and motion boundaries, can usually be approximated over rather large regions of the image plane by a linear vector field.
Abstract: In this paper the first-order spatial properties of optical flow, such as singular points and elementary motions, are used to describe and analyze moving images. First, it is shown that the spatial structure of the viewed motion of rigid objects, with the exception of surface and motion boundaries, can usually be approximated over rather large regions of the image plane by a linear vector field. Second, a method is proposed in which optical flow is computed as a piecewise linear vector field. The obtained first-order properties of optical flow are then used to devise methods for distinguishing between different kinds of motion, extracting qualitative information about shape, and segmenting the viewed image into the different moving objects. Experimental results on several sequences of real images are presented and discussed. Since the methods which are proposed rely upon patchwise and not pointwise flow estimates, the obtained results are usually very good and insensitive to noise. It is concluded that the first-order properties of optical flow are very helpful for the understanding of visual motion.
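Fitting the piecewise linear model is ordinary least squares. A sketch for one patch, with the elementary motions read off the fitted matrix:

```python
import numpy as np

def fit_linear_flow(pts, flow):
    """Fit v(x) = A @ x + b to flow vectors sampled at pts (both N x 2)."""
    N = len(pts)
    M = np.zeros((2 * N, 6))
    M[0::2, 0:2] = pts; M[0::2, 4] = 1.0     # v_x rows
    M[1::2, 2:4] = pts; M[1::2, 5] = 1.0     # v_y rows
    p, *_ = np.linalg.lstsq(M, flow.reshape(-1), rcond=None)
    A, b = p[:4].reshape(2, 2), p[4:]
    div = A[0, 0] + A[1, 1]                   # expansion, e.g. looming motion
    curl = A[1, 0] - A[0, 1]                  # rotation about the viewing direction
    return A, b, div, curl
```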

ReportDOI
01 Feb 1992
TL;DR: It is shown that the set of 2D images produced by the point features of a rigid 3D model can be represented with two lines in two high-dimensional spaces, the lowest-dimensional representation possible.
Abstract: : We show that the set of 2D images produced by the point features of a rigid 3D model can be represented with two lines in two high-dimensional spaces. These lines are the lowest-dimensional representation possible. We use this result to build a system for representing in a hash table at compile time, all the images that groups of model features can produce. Then at run time a group of image features can access the table and find all model groups that could match it. This table is efficient in terms of space, and is built and accessed through analytic methods that account for the effect of sensing error. In real images, it reduces the set of potential matches by a factor of several thousand. We also use this representation of a model's images to analyze two other approaches to recognition: invariants, and non-accidental properties. These are properties of images that some models always produce, and all other models either never produce (invariants) or almost never produce (non-accidental properties). In several domains we determine when invariants exist. In general we show that there is an infinite set of non-accidental properties that are qualitatively similar.... Object recognition, Non-accidental properties, Indexing, Hashing, Invariants, Space efficiency.

Proceedings ArticleDOI
15 Jun 1992
TL;DR: Progress in building a model-based vision system for plane objects that uses algebraic projective invariants is described, with examples of the invariant techniques working on real images.
Abstract: Projectively invariant shape descriptors allow fast indexing into model libraries without the need for pose computation or camera calibration. Progress in building a model-based vision system for plane objects that uses algebraic projective invariants is described. A brief account of these descriptors is given, and the recognition system is presented, with examples of the invariant techniques working on real images.
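The prototypical projective invariant behind such descriptors is the cross ratio; the system's actual invariants are built from pairs of conics and sets of lines, but the one-liner below shows the flavour:

```python
def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points, given as signed parameters
    along their common line; invariant under any projective transform."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))
```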

Journal ArticleDOI
TL;DR: An algorithm based on the adaptive thresholding method proposed by Yanowitz and Bruckstein and the Canny edge detector is proposed for the segmentation of real X-ray and C-scan images of composite materials.

Journal ArticleDOI
TL;DR: It is shown how to recover the position and orientation of a pair of known coplanar conics from a single perspective image; it is concluded that such methods enable accurate pose determination for arbitrary plane curves.

Patent
03 Dec 1992
TL;DR: In this article, concave mirrors are placed in a unique arrangement with respect to one another so that the clarity and the image produced therefrom are enhanced, and a videotape option allows a whole image to be broadcast and real objects to be merged with the broadcast image so that the entire combined image appears 3D and realistic to a properly located observer.
Abstract: Apparatus for creating a three-dimensional image in space of an object. The device employs concave mirrors positioned in a unique arrangement with respect to one another so that the clarity and the image produced therefrom are enhanced. Additionally, a videotape option allows a whole image to be broadcast and real objects to be merged with the broadcast image so that the entire combined image appears three-dimensional and realistic to a properly located observer.

Journal ArticleDOI
TL;DR: The detailed descriptions of the system's models, data structures, and matching mechanism, as well as the introduction to a method for the generation of symbolic models, are the main topics of the present paper.

Journal ArticleDOI
TL;DR: In this paper, the complex image method is used to model ground electrodes in layered soils, where the image locations and amplitudes are determined through a simple Prony method, and the results obtained are identical to those reported recently in the literature, given by more than 10,000 images using the conventional electrostatic image method.
Abstract: In this paper, the complex image method is used to model ground electrodes in layered soils. The image locations and amplitudes are determined through a simple Prony method [R. W. Hamming, Numerical Methods for Scientists and Engineers (Dover, New York, 1973), pp. 620–622]. As an example, a toroidal electrode in a four-layer soil is modeled using one real image and four complex images. The results obtained are identical to those reported recently in the literature, given by more than 10,000 images using the conventional electrostatic image method.
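Prony's method itself is a short computation: solve a linear-prediction system for the characteristic polynomial, take its roots as the (complex) poles, then fit amplitudes linearly. The sketch below is the generic exponential fit; mapping poles and amplitudes to complex image depths and strengths for a layered soil follows the paper:

```python
import numpy as np

def prony(y, p):
    """Fit y[n] ≈ sum_k amp[k] * z[k]**n for p complex exponentials."""
    N = len(y)
    # Linear prediction: y[n] = -(a1*y[n-1] + ... + ap*y[n-p])
    A = np.column_stack([y[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))           # poles
    V = np.vander(z, N, increasing=True).T             # V[n, k] = z[k]**n
    amp, *_ = np.linalg.lstsq(V, y, rcond=None)
    return z, amp
```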

Patent
02 Mar 1992
TL;DR: In this article, a stereoscopic image information correcting section corrects stereoscopic information in response to image drawing information transmitted from the input section, which can be easily generated based on the two-dimensional image information.
Abstract: A depth information generating section generates depth image information representing the class of the depth of each pixel in two-dimensional image information output from an image input section, according to depth information output from an input section. A stereoscopic image information generating section generates stereoscopic image information based on the depth image information and the two-dimensional image information. Thereafter, an image for the left eye and an image for the right eye, read by an image information output section, are alternately displayed on the screen of an image display section under the control of a gate. In synchronization with the change-over operation of alternately displaying the image for the left eye and the image for the right eye, the stereoscopic section controls the opening and closing of the liquid crystal shutters of liquid crystal shutter glasses. A stereoscopic image information correcting section corrects the stereoscopic image information in response to image drawing information transmitted from the input section. Thus, a stereoscopic image can be easily generated based on the two-dimensional image information.