
Showing papers on "Real image published in 1995"


Proceedings ArticleDOI
20 Jun 1995
TL;DR: A new information-theoretic approach is presented for finding the pose of an object in an image that works well in domains where edge or gradient-magnitude based methods have difficulty, yet is more robust than traditional correlation.
Abstract: A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation, few assumptions are made about the nature of the imaging process. As a result, the algorithms are quite general and can foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach in registering magnetic resonance images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image. As applied in this paper, the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation.

966 citations
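
The objective at the heart of this approach is compact enough to sketch. Below is a minimal NumPy version of the mutual information between two equally sized gray-level images, using a simple joint-histogram density estimate; the bin count is an arbitrary choice, and the stochastic-approximation machinery the paper uses to maximize this score over pose is deliberately omitted.

import numpy as np

def mutual_information(a, b, bins=32):
    # Joint histogram of corresponding pixel intensities.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0                         # avoid log(0)
    # I(A;B) = sum p(a,b) log( p(a,b) / (p(a) p(b)) ), in nats
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

Pose search then amounts to warping the model rendering over candidate transformations and keeping the transformation that maximizes this score.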


Patent
19 May 1995
TL;DR: In this article, a system for reading a two-dimensional image and comparing it to stored data representative of a known image is described. The system consists of a sensor for capturing the two-dimensional image, which includes a light source for projecting emitted light towards the image and an optical assembly for focussing light, which may be ambient and/or emitted light from the light source, reflected from the framed two-dimensional image onto a CMOS or CCD detector, the detector including a photodiode array for sensing the focussed light and generating a signal therefrom.
Abstract: A system for reading a two-dimensional image, and for comparing the two-dimensional image to stored data representative of a known image. The optical scanning device comprises a sensor for capturing the two-dimensional image, which sensor includes a light source for projecting an emitted light towards the two-dimensional image and an optical assembly for focussing light, which may be ambient and/or emitted light from the light source, reflected from the framed two-dimensional image onto a CMOS or CCD detector for detecting the focussed light, the detector including a photodiode array for sensing the focussed light and generating a signal therefrom. Aiming of the sensor to read the two-dimensional image is facilitated by a frame locator consisting of a laser diode which emits a beam that is modified by optics, including diffractive optics, to divide the beam into beamlets which have a spacing therebetween that expands to match the dimensions of the field of view of the sensor, forming points of light at the target to define the edges of the field of view.

438 citations


Journal ArticleDOI
TL;DR: A visual model that gives a distortion measure for blocking artifacts in images is presented and results show that the error visibility predicted by the model correlates well with the subjective ranking.
Abstract: A visual model that gives a distortion measure for blocking artifacts in images is presented. Given the original and reproduced image as inputs, the model output is a numerical value that quantifies the visibility of blocking error in the reproduced image. The model is derived based on the human visual sensitivity to horizontal and vertical edge artifacts that result from blocking. Psychovisual experiments have been carried out to measure the visual sensitivity to these artifacts. In the experiments, typical edge artifacts are shown to subjects and the sensitivity to them is measured with the variation of background luminance, background activity, edge length, and edge amplitude. Synthetic test patterns are used as background images in the experiments. The sensitivity measures thus obtained are used to estimate the model parameters. The final model is tested on real images, and the results show that the error visibility predicted by the model correlates well with the subjective ranking.

316 citations
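
A far cruder blockiness proxy than the paper's psychovisually weighted model can still illustrate the underlying signal: blocking shows up as intensity jumps concentrated on the coding block grid. The sketch below (NumPy; the 8-pixel block size is assumed, and the luminance/activity sensitivity weighting from the psychovisual experiments is deliberately omitted) compares jumps across block boundaries with jumps inside blocks.

import numpy as np

def blockiness(img, B=8):
    # Mean absolute jump across block boundaries divided by the
    # mean absolute jump within blocks; close to 1 for clean images.
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1))          # horizontal neighbour diffs
    dv = np.abs(np.diff(img, axis=0))          # vertical neighbour diffs
    hb = np.arange(dh.shape[1]) % B == B - 1   # columns on a block boundary
    vb = np.arange(dv.shape[0]) % B == B - 1   # rows on a block boundary
    across = np.concatenate([dh[:, hb].ravel(), dv[vb, :].ravel()])
    within = np.concatenate([dh[:, ~hb].ravel(), dv[~vb, :].ravel()])
    return float(across.mean() / (within.mean() + 1e-12))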


Journal ArticleDOI
TL;DR: The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry.
Abstract: In the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignment, yielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks.

304 citations


Journal ArticleDOI
TL;DR: It is argued that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions.
Abstract: Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions. We then present a specific object-centered reconstruction method and its implementation. The method begins with an initial estimate of surface shape provided, for example, by triangulating the result of conventional stereo. The surface shape and reflectance properties are then iteratively adjusted to minimize an objective function that combines information from multiple input images. The objective function is a weighted sum of stereo, shading, and smoothness components, where the weight varies over the surface. For example, the stereo component is weighted more strongly where the surface projects onto highly textured areas in the images, and less strongly otherwise. Thus, each component has its greatest influence where its accuracy is likely to be greatest. Experimental results on both synthetic and real images are presented.

270 citations


Journal ArticleDOI
TL;DR: A system that automatically reads the Italian license number of a car passing through a tollgate is presented, using a CCTV camera and a frame grabber card to acquire a rear-view image of the vehicle.
Abstract: A system for the recognition of car license plates is presented. The aim of the system is to read automatically the Italian license number of a car passing through a tollgate. A CCTV camera and a frame grabber card are used to acquire a rear-view image of the vehicle. The recognition process consists of three main phases. First, a segmentation phase locates the license plate within the image. Then, a procedure based upon feature projection estimates some image parameters needed to normalize the license plate characters. Finally, the character recognizer extracts some feature points and uses template matching operators to get a robust solution under multiple acquisition conditions. A test has been done on more than three thousand real images acquired under different weather and illumination conditions, thus obtaining a recognition rate close to 91%.

258 citations
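
The final recognition stage rests on template matching; a minimal sketch of that idea is given below, scoring a normalized character patch against a dictionary of templates by normalized cross-correlation. The `templates` mapping and the NCC scoring rule are illustrative assumptions, not the paper's exact operators.

import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def classify_char(patch, templates):
    # `templates` maps labels ('A', 'B', '0', ...) to arrays of the
    # same shape as the normalized character patch.
    return max(templates, key=lambda label: ncc(patch, templates[label]))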


Proceedings ArticleDOI
20 Jun 1995
TL;DR: This work proposes to use a recursive estimator for arm position, and to provide the estimator with error signals obtained by comparing the projected estimated arm position with that of the actual arm in the image.
Abstract: We address the problem of estimating the position and motion of a human arm in 3D without any constraints on its behavior and without the use of special markers. We model the arm as two truncated right-circular cones connected with spherical joints. We propose to use a recursive estimator for arm position, and to provide the estimator with error signals obtained by comparing the projected estimated arm position with that of the actual arm in the image. The system is demonstrated and tested on a real image sequence.

245 citations


Proceedings ArticleDOI
20 Jun 1995
TL;DR: A simple and accurate method for internal camera calibration based on tracking image features through a sequence of images while the camera undergoes pure rotation; the method can be used both for laboratory calibration and for self-calibration in autonomous robots.
Abstract: Describes a simple and accurate method for internal camera calibration based on tracking image features through a sequence of images while the camera undergoes pure rotation. A special calibration object is not required and the method can therefore be used both for laboratory calibration and for self-calibration in autonomous robots. Experimental results with real images show that focal length and aspect ratio can be found to within 0.15 percent, and lens distortion error can be reduced to a fraction of a pixel. The location of the principal point and the location of the center of radial distortion can each be found to within a few pixels. We perform a simple analysis to show to what extent the various technical details affect the accuracy of the results. We show that having pure rotation is important if the features are derived from objects close to the camera. In the basic method accurate angle measurement is important. The need to accurately measure the angles can be eliminated by rotating the camera through a complete circle while taking an overlapping sequence of images and using the constraint that the sum of the angles must equal 360 degrees.

196 citations
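
The closing full-circle constraint lends itself to a compact illustration: for a camera panning about a vertical axis, a candidate focal length f converts the tracked horizontal image coordinates of a feature into pan angles, and f is chosen so that the angles sum to 360 degrees. The sketch below (NumPy/SciPy) solves this one-dimensional problem; the bracketing interval, the sign convention for the pan direction, and the restriction to features near the horizontal midline are all assumptions.

import numpy as np
from scipy.optimize import brentq

def focal_from_full_circle(x_pairs):
    # x_pairs: (x_before, x_after) horizontal coordinates, relative to
    # the principal point, of a feature tracked across each consecutive
    # frame pair of a full 360-degree pan.
    def angle_sum_error(f):
        total = sum(np.arctan2(x0, f) - np.arctan2(x1, f)
                    for x0, x1 in x_pairs)
        return total - 2.0 * np.pi   # zero at the correct focal length
    # Bracket assumed to contain the root (focal length in pixels).
    return brentq(angle_sum_error, 100.0, 10000.0)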


Journal ArticleDOI
TL;DR: It is shown how relative 3D reconstruction from point correspondences of multiple uncalibrated images can be achieved through reference points, and robustness of the method and reconstruction precision are discussed.
Abstract: In this article we show how relative 3D reconstruction from point correspondences of multiple uncalibrated images can be achieved through reference points. The original contributions with respect to related works in the field are mainly a direct global method for relative 3D reconstruction and a geometric method to select a correct set of reference points among all image points. Experimental results from both simulated and real image sequences are presented, and robustness of the method and reconstruction precision of the results are discussed.

176 citations


Patent
Masanobu Kimura
24 Oct 1995
TL;DR: In this article, a small video camera apparatus capable of imaging an object from various angles, and effectively monitoring an object or obtaining three-dimensional image information is presented, where a light beam supplied from an optical image passing through a lens and a prism is picked up by a left image sensing surface of the charge-coupled device.
Abstract: The present invention intends to provide a small video camera apparatus capable of imaging an object from various angles, and effectively monitoring an object or obtaining three-dimensional image information. A light beam supplied from an optical image passing through a lens is picked up by a right image sensing surface of a charge-coupled device, and a light beam supplied from an optical image passing through a lens and a prism is picked up by a left image sensing surface of the charge-coupled device. The image signals obtained by the right and left image sensing surfaces are divided in a color separating and signal processing circuit.

152 citations


01 Nov 1995
TL;DR: In this article, a two-dimensional edge detector is presented, which gives the edge position in an image with a sub-pixel accuracy with a low computational cost, and its implementation is very simple since it is derivated from the well-known Non-Maxima Suppression method.
Abstract: In this article we present a two-dimensional edge detector which gives the edge position in an image with sub-pixel accuracy. The method presented here gives an excellent accuracy (the position bias mean is almost zero and the standard deviation is less than one tenth of a pixel) with a low computational cost, and its implementation is very simple since it is derived from the well-known Non-Maxima Suppression method [Canny 83; Deriche 87]. We also justify the method by showing that it gives the exact result in a theoretical one-dimensional example. We have tested the accuracy and robustness of the edge extractor on several synthetic and real images, and both qualitative and quantitative results are reported in this paper.
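
The standard way to push non-maxima suppression below pixel resolution, and plausibly the flavor of correction refined here, is a parabolic fit through three gradient-magnitude samples taken along the gradient direction. A minimal sketch in plain Python, assuming the samples are already interpolated at unit spacing:

def subpixel_offset(g_minus, g0, g_plus):
    # Fit a parabola through samples at -1, 0, +1 along the gradient
    # direction and return the apex offset, which lies in (-0.5, 0.5)
    # when g0 is a strict local maximum.
    denom = g_minus - 2.0 * g0 + g_plus
    if denom >= 0.0:     # not a strict local maximum
        return 0.0
    return 0.5 * (g_minus - g_plus) / denom

The refined edge location is then the pixel position plus this offset times the unit gradient direction.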

Patent
07 Jun 1995
TL;DR: In this paper, a method for forming an image using conjugate optics in which light from a source (15) is focused to form a real image at a retro-reflector (23) and is reflected for viewing is presented.
Abstract: A head mounted display system (10) includes focusing optics (21) to focus light from an image source (15), retro-reflector (23) and beamsplitter (22), the image source (15) and beamsplitter (22) directing light toward the retro-reflector (23) to focus a real image, and the light reflected from the retro-reflector (23) being directed via the beamsplitter (22) for viewing, thereby to use conjugate optics effectively to place the viewing eye (13) or detector in relation to the image source (15) the same as the relation of the focusing optics (21) to the image source (15). The retro-reflector (23) may be mounted on a common housing (11) or support with the beamsplitter (22) and focusing optics (21) or it may be remotely located. A method for forming an image using conjugate optics, in which light from a source (15) is focused to form a real image at a retro-reflector (23) and is reflected for viewing, is also provided.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: An approach to the tracking of complex shapes through image sequences that combines deformable region models and deformable contours is described; the use of texture information noticeably improves the tracking performance of deformable models in the presence of texture.
Abstract: The paper describes an approach to the tracking of complex shapes through image sequences that combines deformable region models and deformable contours. A deformable region model is presented: its optimisation is based on texture correlation and is constrained by the use of a motion model, such as rigid, affine or homographic. The use of texture information (versus edge information) noticeably improves the tracking performance of deformable models in the presence of texture. Then the region contour is refined using an edge-based deformable model in order to better deal with specularities, non-planar objects and occlusions. The method is illustrated and validated by experimental results on real images.

Proceedings ArticleDOI
Rakesh Kumar, Padmanabhan Anandan, Michal Irani, J. Bergen, Keith Hanna
21 Jun 1995
TL;DR: The central thesis of this paper is that the traditional approach to representation of information about scenes by relating each image to an abstract three dimensional coordinate system may not always be appropriate.
Abstract: The goal of computer vision is to extract information about the world from collections of images. This information might be used to recognize or manipulate objects, to control movement through the environment, to measure or determine the condition of objects, and for many other purposes. The goal of this paper is to consider the representation of information derived from a collection of images and how it may support some of these tasks. By "collection of images" we mean any set of images relevant to a given scene. This includes video sequences, multiple images from a single still camera, or multiple images from different cameras. The central thesis of this paper is that the traditional approach to representation of information about scenes by relating each image to an abstract three dimensional coordinate system may not always be appropriate. An approach that more directly represents the relationships among the collection of images has a number of advantages. These relationships can also be computed using practical and efficient algorithms. We present a hierarchical framework for scene representation. We develop the algorithms used to build these representations and demonstrate results on real image sequences. Finally, the application of these representations to real world problems is discussed.

Journal ArticleDOI
TL;DR: This paper shows how a fuzzy operator that is able to perform detail sharpening while remaining insensitive to noise can be designed, and presents the results obtained in the enhancement of a real image.
Abstract: Rule-based fuzzy operators are a novel class of operators specifically designed in order to apply the principles of approximate reasoning to digital image processing. This paper shows how a fuzzy operator that is able to perform detail sharpening but is insensitive to noise can be designed. The results obtained by the proposed technique in the enhancement of a real image are presented.

01 Jan 1995
TL;DR: The quality of the results increases if Gaussian masks of larger width are used in the derivation process instead of the simple 3×3 masks suggested in the underlying papers, and multiresolution approaches can be applied to color images when using Gaussian masks with different standard deviations in the edge detection scheme.
Abstract: Several approaches of differing complexity to edge detection in color images already exist. Nevertheless, the question remains of how different the results are when employing computationally costly techniques instead of simple ones. This paper presents a comparative study of different approaches to color edge detection. The approaches are based on the Sobel operator, the Laplace operator, the Mexican Hat operator, different realizations of the Cumani operator, and the Alshatti-Lambert operator. Furthermore, we present an efficient algorithm for implementing the Cumani operator. All operators have been applied to several synthetic and real images. The results are presented in this paper. We show that the quality of the results increases if Gaussian masks of larger width are used in the derivation process instead of the simple 3×3 masks suggested in the underlying papers. Moreover, multiresolution approaches can be applied to color images when using Gaussian masks with different standard deviations in the edge detection scheme.
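
The paper's main practical finding, that Gaussian derivative masks of larger width beat simple 3×3 masks, is easy to reproduce in outline. The sketch below (NumPy/SciPy) computes a multi-channel gradient magnitude at a chosen scale sigma; summing squared per-channel responses is one simple fusion choice and is not the Cumani or Alshatti-Lambert operator.

import numpy as np
from scipy.ndimage import gaussian_filter

def color_gradient_magnitude(img, sigma=2.0):
    # img: (H, W, C) color image; larger sigma trades edge
    # localization for robustness to noise.
    img = img.astype(float)
    mag = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gx = gaussian_filter(img[..., c], sigma, order=(0, 1))  # d/dx
        gy = gaussian_filter(img[..., c], sigma, order=(1, 0))  # d/dy
        mag += gx**2 + gy**2
    return np.sqrt(mag)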

Proceedings ArticleDOI
20 Jun 1995
TL;DR: This work presents preliminary results of an iterative scheme to recover the epipolar line structure from real image sequences using only the outlines of curved surfaces and shows how to recover viewer motion from frontier points for both continuous and discrete motion, calibrated and uncalibrated cameras.
Abstract: The frontier of a curved surface is the envelope of contour generators showing the boundary, at least locally, of the visible region swept out under viewer motion. In general, the outlines of curved surfaces (apparent contours) from different viewpoints are generated by different contour generators on the surface and hence do not provide a constraint on viewer motion. We show that frontier points, however, have projections which correspond to a real point on the surface and can be used to constrain viewer motion by the epipolar constraint. We show how to recover viewer motion from frontier points for both continuous and discrete motion, calibrated and uncalibrated cameras. We present preliminary results of an iterative scheme to recover the epipolar line structure from real image sequences using only the outlines of curved surfaces. A statistical evaluation was also performed to estimate the stability of the solution.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: It is demonstrated that proper modelling of degeneracy in the presence of outliers enables the detection of outliers which would otherwise be missed.
Abstract: New methods are reported for the detection of multiple solutions (degeneracy) when estimating the fundamental matrix, with specific emphasis on robustness in the presence of data contamination (outliers). The fundamental matrix can be used as a first step in the recovery of structure from motion. If the set of correspondences is degenerate then this structure cannot be accurately recovered and many solutions will explain the data equally well. It is essential that we are alerted to such eventualities. However, current feature matchers are very prone to mismatching, giving a high rate of contamination within the data. Such contamination can make a degenerate data set appear non-degenerate, and thus the need for robust methods becomes apparent. The paper presents such methods with a particular emphasis on providing a method that will work on real imagery and with an automated (non-perfect) feature detector and matcher. It is demonstrated that proper modelling of degeneracy in the presence of outliers enables the detection of outliers which would otherwise be missed. Results using real image sequences are presented. All processing, point matching, degeneracy detection and outlier detection is automatic.
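
One way to see the degeneracy problem concretely: stack the correspondences into the linear system used for fundamental-matrix estimation and inspect its singular values. If more than one is near zero, several fundamental matrices explain the data equally well. The sketch below (NumPy) computes that diagnostic; the ratio test is an illustrative heuristic, not the paper's robust method, and it says nothing yet about outliers.

import numpy as np

def epipolar_design_matrix(x1, x2):
    # Rows of the 8-point system A f = 0 for correspondences
    # x1, x2 of shape (N, 2) in the two images.
    u, v = x1[:, 0], x1[:, 1]
    up, vp = x2[:, 0], x2[:, 1]
    one = np.ones(len(x1))
    return np.stack([up*u, up*v, up, vp*u, vp*v, vp, u, v, one], axis=1)

def degeneracy_score(x1, x2):
    # Ratio of the two smallest singular values; a value near 1
    # suggests more than one F fits the data (a degenerate set).
    s = np.linalg.svd(epipolar_design_matrix(x1, x2), compute_uv=False)
    return float(s[-1] / (s[-2] + 1e-12))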

Journal ArticleDOI
TL;DR: A neural network filter was designed and trained to detect targets in thermal infrared images and its overall performance was much superior to that of the size-matched contrast-box filter, especially in the images with higher amounts of visual clutter.
Abstract: The detection of objects in high-resolution aerial imagery has proven to be a difficult task. In the authors' application, the amount of image clutter is extremely high. Under these conditions, detection based on low-level image cues tends to perform poorly. Neural network techniques have been proposed in object detection applications due to proven robust performance characteristics. A neural network filter was designed and trained to detect targets in thermal infrared images. The feature extraction stage was eliminated and raw gray levels were utilized as input to the network. Two fundamentally different approaches were used to design the training sets. In the first approach, actual image data were utilized for training. In the second case, a model-based approach was adopted to design the training set vectors. The training set consisted of object and background data. The neuron transfer function was modified to improve network convergence and speed and the backpropagation training algorithm was used to train the network. The neural network filter was tested extensively on real image data. Receiver operating characteristic (ROC) curves were determined in each case. The detection and false alarm rates were excellent for the neural network filters. Their overall performance was much superior to that of the size-matched contrast-box filter, especially in the images with higher amounts of visual clutter.
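
The size-matched contrast-box filter used as the performance baseline is simple enough to sketch: the response at each pixel is the mean gray level of an inner box minus the mean of the surrounding border region. A minimal version built from separable uniform filters (the window sizes are assumed, and boundary handling follows SciPy's defaults):

import numpy as np
from scipy.ndimage import uniform_filter

def contrast_box(img, inner=5, outer=11):
    img = img.astype(float)
    m_in = uniform_filter(img, inner)    # mean over the inner box
    m_out = uniform_filter(img, outer)   # mean over the full outer box
    n_in, n_out = inner**2, outer**2
    # Border-only mean = (outer sum - inner sum) / (n_out - n_in).
    border_mean = (m_out * n_out - m_in * n_in) / (n_out - n_in)
    return m_in - border_mean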

Proceedings ArticleDOI
20 Jun 1995
TL;DR: Instead of computing directly the 3×3 fundamental matrix, a homography with one epipole position is computed, and it is shown that this is equivalent to computing the fundamental matrix.
Abstract: The paper addresses the problem of computing the fundamental matrix which describes a geometric relationship between a pair of stereo images: the epipolar geometry. We propose a novel method based on virtual parallax. Instead of computing directly the 3×3 fundamental matrix, we compute a homography with one epipole position, and show that this is equivalent to computing the fundamental matrix. Simple equations are derived by reducing the number of parameters to estimate. As a consequence, we obtain an accurate fundamental matrix of rank two with a stable linear computation. Experiments with simulated and real images validate our method and clearly show the improvement over existing methods.
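
For contrast, the conventional estimator that this virtual-parallax method improves upon is the normalized 8-point algorithm, in which rank two is not built into the parameterization and must be enforced afterwards by zeroing the smallest singular value. A sketch of that baseline (NumPy; this is not the paper's homography-plus-epipole formulation):

import numpy as np

def normalize(pts):
    # Hartley normalization: translate points to their centroid and
    # scale so the mean distance from the origin is sqrt(2).
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s*c[0]], [0, s, -s*c[1]], [0, 0, 1.0]])
    return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

def fundamental_8point(x1, x2):
    # x1, x2: (N >= 8, 2) corresponding points in the two images.
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                  # least-squares solution
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank two
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)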

Patent
13 Jul 1995
TL;DR: In this paper, a method for quickly and accurately obtaining surface contour information from an object without the need for precisely aligning and calibrating mechanical structures or optics elements, and without moving parts is presented.
Abstract: A device and method is provided for quickly and accurately obtaining surface contour information from an object without the need for precisely aligning and calibrating mechanical structures or optics elements, and without moving parts. In various embodiments, the invention includes steps of projecting a plurality of parallel planes of laser light through a transparent plate onto a surface of the object, receiving a reflection from the object in a digital camera, and performing image processing on the reflected image to reconstruct the three-dimensional surface of the object therefrom. The image processing includes steps of subtracting an image of the object in its non-illuminated state from an image of the object illuminated by the plurality of parallel planes of light, performing a thresholding operation on the subtracted image, and generating a line array containing lines having curvature deformations due to surface deviations on the object. The line array is transformed into a three-dimensional image by applying a transformation matrix previously obtained by processing an image of a calibration gauge placed on the transparent plate. Both single-image and multiple-image projection systems may be implemented.

Journal ArticleDOI
TL;DR: This work presents a method for integrating the high frequency information from shape from shading and the low frequency information from stereo, and results obtained with a variety of synthetic and real images are discussed.
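
The stated division of labor, low frequencies from stereo and high frequencies from shape from shading, can be illustrated with a simple frequency split. The sketch below (NumPy/SciPy) fuses two depth maps that way; the Gaussian cut-off sigma and the additive combination are assumptions, since the summary does not specify the actual integration scheme.

import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_depth(z_stereo, z_sfs, sigma=8.0):
    # Keep the coarse structure of the stereo depth map and add the
    # fine detail of the shape-from-shading depth map.
    z_stereo = z_stereo.astype(float)
    z_sfs = z_sfs.astype(float)
    low = gaussian_filter(z_stereo, sigma)           # low-pass stereo
    high = z_sfs - gaussian_filter(z_sfs, sigma)     # high-pass SfS
    return low + high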

Book ChapterDOI
13 Sep 1995
TL;DR: This paper considers image rotations and translations and presents algorithms for constructing invariant features and develops algorithms for recognizing several objects in a single scene without the necessity to segment the image beforehand.
Abstract: Invariant features are image characteristics which remain unchanged under the action of a transformation group. We consider in this paper image rotations and translations and present algorithms for constructing invariant features. After briefly sketching the theoretical background we develop algorithms for recognizing several objects in a single scene without the necessity to segment the image beforehand. The objects can be rotated and translated independently. Moderate occlusions are tolerable. Furthermore we show how to use these techniques for the recognition of articulated objects. The methods work directly with the gray values and do not rely on the extraction of geometric primitives like edges or corners in a preprocessing step. All algorithms have been implemented and tested both on synthetic and real image data. We present some illustrative experimental results.
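
Since the method works directly on gray values and must tolerate independent rotations and translations, one classical construction is to average image values over the rotation group. In the sketch below (NumPy/SciPy; the circle radius, sample count, and identity kernel are illustrative choices), each pixel receives the mean gray value on a surrounding circle, which is unchanged by rotating the image about that pixel; a histogram of such features over the whole image is additionally translation-invariant.

import numpy as np
from scipy.ndimage import map_coordinates

def circle_mean(img, radius=4.0, n=32):
    # For every pixel, average bilinearly interpolated gray values on
    # a circle of the given radius centered at that pixel.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros((h, w))
    for t in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        coords = [yy + radius * np.sin(t), xx + radius * np.cos(t)]
        acc += map_coordinates(img.astype(float), coords,
                               order=1, mode='nearest')
    return acc / n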

Patent
19 Dec 1995
TL;DR: In this article, an imaging apparatus consisting of an imaging plate having a light receiving face, focusing device for focusing light from a subject on the image formed on the face of the imaging plate, image position displacement control device for controlling the image position displacing device, motion vector detecting device for detecting a motion vector of each image with respect to a reference image and image synthesis device for displacing pixels constituting each image and for interpolating the displaced pixels constituing each image between adjacent pixels of the reference image, thereby synthesizing the images into a single image.
Abstract: An imaging apparatus includes: an imaging plate having a light receiving face; focusing device for focusing light from a subject on the light receiving face of the imaging plate as the image formed on the light receiving face; image position displacing device for displacing a position of the image formed by the focusing device with respect to a reference position; image position displacement control device for controlling the image position displacing device; motion vector detecting device for detecting a motion vector of each image with respect to a reference image; and image synthesis device for displacing pixels constituting each image and for interpolating the displaced pixels constituting each image between adjacent pixels of the reference image, thereby synthesizing the images into a single image.

Journal ArticleDOI
TL;DR: First, the motion of the camera is compensated using a computational vision based image registration algorithm; then consecutive frames are transformed to the same coordinate system and the feature correspondence problem is solved as though tracking moving objects for a stationary camera.
Abstract: An automatic egomotion compensation based point correspondence algorithm is presented. A basic problem in autonomous navigation and motion estimation is automatically detecting and tracking features in consecutive frames, a challenging problem when camera motion is significant. In general, feature displacements between consecutive frames can be approximately decomposed into two components: (i) displacements due to camera motion which can be approximately compensated by image rotation, scaling, and translation; (ii) displacements due to object motion and/or perspective projection. In this paper, we introduce a two-step approach: First, the motion of the camera is compensated using a computational vision based image registration algorithm. Then consecutive frames are transformed to the same coordinate system and the feature correspondence problem is solved as though tracking moving objects for a stationary camera. Methods of subpixel accuracy feature matching, tracking and error analysis are introduced. The approach results in a robust and efficient algorithm. Results on several real image sequences are presented.
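
The first step, compensating camera motion by image rotation, scaling, and translation, amounts to fitting a 2D similarity transform to matched points. A standard least-squares (Procrustes/Umeyama-style) solution is sketched below (NumPy); this is a generic estimator, not necessarily the registration algorithm the paper uses.

import numpy as np

def similarity_transform(src, dst):
    # Least-squares s, R, t with dst ~ s * R @ src + t for point sets
    # src, dst of shape (N, 2).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.array([1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() / (A * A).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

Warping the next frame by the inverse of the fitted transform then leaves only residual displacements due to object motion and parallax, which are handed to the feature tracker as if the camera were stationary.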

Journal ArticleDOI
TL;DR: Using a tensor calculus representation, the Taylor expansions of the gray-value derivatives and of the optical flow in a spatiotemporal neighborhood are built, providing a unifying framework for all existing local differential approaches and allowing new systems of equations to be derived for the estimation of the optical flow and of its derivatives.
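
Truncating such an expansion at first order recovers the familiar local least-squares flow estimator, which may help fix ideas even though the contribution here is the higher-order tensor formulation. A sketch (NumPy/SciPy; the derivative scale and window size are assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_flow(f0, f1, sigma=1.5, win=7):
    # First-order local differential flow: at each pixel, solve the
    # 2x2 least-squares system accumulated over a win x win window.
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    fx = gaussian_filter(f0, sigma, order=(0, 1))        # d/dx
    fy = gaussian_filter(f0, sigma, order=(1, 0))        # d/dy
    ft = gaussian_filter(f1, sigma) - gaussian_filter(f0, sigma)
    Axx = uniform_filter(fx * fx, win)
    Axy = uniform_filter(fx * fy, win)
    Ayy = uniform_filter(fy * fy, win)
    bx = -uniform_filter(fx * ft, win)
    by = -uniform_filter(fy * ft, win)
    det = Axx * Ayy - Axy**2 + 1e-12
    u = (Ayy * bx - Axy * by) / det
    v = (Axx * by - Axy * bx) / det
    return u, v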

Patent
21 Dec 1995
TL;DR: In this paper, a pointed-position detecting apparatus where image output unit visually displays an image, and image input unit inputs the displayed visible image is stored into memory as an original image.
Abstract: A pointed-position detecting apparatus where image output unit visually displays an image, and image input unit inputs the displayed visible image. The input visible image is stored into memory as an original image. The visible image is partially changed by using a manipulation device such as a laser pointer or a pointing rod, and the visible image with the partially-changed area is inputted from the image input unit. Controller extract the changed image area from the input visible image, and detects a pointed position based on the changed image area. Thus, the phenomenon where a user's action with respect to a displayed image is reflected as a change in the image is utilized to detect coordinates and motion at the image area as the object of change.

Patent
03 Jul 1995
TL;DR: In this paper, a system that allows a user to apply image processing functions to localized regions of a photographic or negative image supplied by a photographer is presented, where the image is displayed on a touch-sensitive display and the user can, by touching the display, maneuver a window to pan, zoom in and zoom out on particular portions of the image to designate a region to be processed.
Abstract: A system that allows a user to apply image processing functions to localized regions of a photographic or negative image supplied by a photographer. The image is displayed on a touch-sensitive display and the user can, by touching the display, maneuver a window to pan, zoom in and zoom out on particular portions of the image to designate a region to be processed. The operator can indicate precisely where the artifact to be removed is located and will know precisely the area of the image that will be processed. Only the portion of the image seen by the user in the window is processed when the user indicates that a function should be applied to the image. That is, what the user sees is what is processed. The processed image can be printed or otherwise reproduced.

Proceedings ArticleDOI
01 Jul 1995
TL;DR: To further improve the robustness of the recognition algorithm and to improve the accuracy to which an object's location, orientation and scale can be determined, the generalised Hough transform has been replaced by the probabilistic Hough transform.
Abstract: The recognition of shapes in images using Pairwise Geometric Histograms has previously been confined to fixed-scale shape. Although the geometric representation used in this algorithm is not scale invariant, the stable behaviour of the similarity metric as shapes are scaled enables the method to be extended to the recognition of shapes over a range of scale. In this paper the necessary additions to the existing algorithm are described and the technique is demonstrated on real image data. Hypotheses generated by matching scene shape data to models have previously been resolved using the generalised Hough transform. The robustness of this method can be attributed to its approximation of maximum likelihood statistics. To further improve the robustness of the recognition algorithm and to improve the accuracy to which an object's location, orientation and scale can be determined, the generalised Hough transform has been replaced by the probabilistic Hough transform.
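
The similarity metric between pairwise geometric histograms, whose stable behavior under scaling the method exploits, reduces to comparing two normalized histograms. A minimal sketch using the Bhattacharyya coefficient (an assumption here; the published PGH work uses closely related measures):

import numpy as np

def pgh_similarity(h1, h2):
    # Bhattacharyya coefficient of two histograms of any common shape,
    # after normalizing each to unit mass; 1 means identical.
    p = h1 / (h1.sum() + 1e-12)
    q = h2 / (h2.sum() + 1e-12)
    return float(np.sqrt(p * q).sum())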

Patent
13 Jan 1995
TL;DR: In this paper, a projected real image is derived from an image-producing data stream containing three-dimensional image cues selected from the group consisting of shading, occlusion, perspective, motion parallax, size vs depth, light (chroma value) vs depth and definition vs depth.
Abstract: Image compositing apparatus and methodology for the creation, in a defined volume of three-dimensional space, of a composite organization of plural images/visual phenomena, including at least one projected real image (104), displayed in formats including (a) front-to-rear, (b) side-by-side and (c) overlapping and intersecting adjacency. The apparatus incorporates different unique arrangements of visual sources (110, 112), and optical elements including concave reflectors (114), beam splitters (116) and image-forming/image-transmissive scrim/screen structures (334). In one important modification of the system, which does not necessarily require compositing, a projected real image is derived from an image-producing data stream containing three-dimensional image cues selected from the group consisting of shading, occlusion, perspective, motion parallax, size vs depth, light (chroma value) vs depth and definition vs depth. In a further important modification of the invention, a system (620) is proposed which allows a viewer/user to interact directly with a projected real image, in a manner allowing the manipulation of one or more characteristics or aspects of the image.