
Showing papers on "Image processing published in 1993"


Book
01 Jan 1993
TL;DR: The digitized image and its properties are studied, including shape representation and description, linear discrete image transforms, and texture analysis.
Abstract: List of Algorithms. Preface. Possible Course Outlines. 1. Introduction. 2. The Image, Its Representations and Properties. 3. The Image, Its Mathematical and Physical Background. 4. Data Structures for Image Analysis. 5. Image Pre-Processing. 6. Segmentation I. 7. Segmentation II. 8. Shape Representation and Description. 9. Object Recognition. 10. Image Understanding. 11. 3d Geometry, Correspondence, 3d from Intensities. 12. Reconstruction from 3d. 13. Mathematical Morphology. 14. Image Data Compression. 15. Texture. 16. Motion Analysis. Index.

5,451 citations


Journal ArticleDOI
TL;DR: Efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model are presented and it is shown that the method extends naturally to the problem of comparing a portion of a model against an image.
Abstract: The Hausdorff distance measures the extent to which each point of a model set lies near some point of an image set and vice versa. Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. Efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model are presented. The focus is primarily on the case in which the model is only allowed to translate with respect to the image. The techniques are extended to rigid motion. The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the model and the image is derived. The method is quite tolerant of small position errors such as those that occur with edge detectors and other feature extraction methods. It is shown that the method extends naturally to the problem of comparing a portion of a model against an image.
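
For readers who want the definition in concrete form, here is a minimal NumPy/SciPy sketch of the directed and symmetric Hausdorff distances between two binary images, computed by brute force through a Euclidean distance transform; the paper's efficient algorithms for searching over all relative translations (and rigid motions) and its partial-matching extension are not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff(a, b):
    """h(A, B) = max over points of A of the distance to the nearest point of B.
    a, b: boolean masks of identical shape marking the two point sets."""
    dist_to_b = distance_transform_edt(~b)   # distance of every pixel to the nearest point of B
    return dist_to_b[a].max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Toy example: a model translated by one pixel relative to the image.
img = np.zeros((32, 32), bool); img[10:20, 10:20] = True
mdl = np.zeros((32, 32), bool); mdl[11:21, 10:20] = True
print(hausdorff(img, mdl))   # 1.0
```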

4,194 citations


Journal ArticleDOI
TL;DR: The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations to develop operational techniques for quantitative analysis of imaging spectrometer data.

2,686 citations


Journal ArticleDOI
Luc Vincent1
TL;DR: An algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
Abstract: Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications is discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
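
The definition can be made concrete with the simple iterative ("parallel") scheme that the abstract describes as inefficient; the sketch below is that scheme, not the queue-based hybrid algorithm of the paper, and it assumes the marker lies below the mask pointwise.

```python
import numpy as np
from scipy.ndimage import grey_dilation

def grayscale_reconstruction(marker, mask):
    """Reconstruction by dilation: iterate geodesic dilations of `marker`
    under `mask` until stability (marker <= mask assumed elementwise).
    This is the naive iterative scheme; the paper's queue-based hybrid
    algorithm computes the same result far faster."""
    rec = marker.astype(float).copy()
    footprint = np.ones((3, 3))          # 8-connectivity
    while True:
        nxt = np.minimum(grey_dilation(rec, footprint=footprint), mask)
        if np.array_equal(nxt, rec):
            return nxt
        rec = nxt

# Typical use: h-dome extraction (regional maxima of height > h).
img = np.random.default_rng(0).random((64, 64)) * 100
h = 10.0
domes = img - grayscale_reconstruction(img - h, img)
```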

2,064 citations


Journal ArticleDOI
TL;DR: An object recognition system based on the dynamic link architecture, an extension to classical artificial neural networks (ANNs), is presented and the implementation on a transputer network achieved recognition of human faces and office objects from gray-level camera images.
Abstract: An object recognition system based on the dynamic link architecture, an extension to classical artificial neural networks (ANNs), is presented. The dynamic link architecture exploits correlations in the fine-scale temporal structure of cellular signals to group neurons dynamically into higher-order entities. These entities represent a rich structure and can code for high-level objects. To demonstrate the capabilities of the dynamic link architecture, a program was implemented that can recognize human faces and other objects from video images. Memorized objects are represented by sparse graphs, whose vertices are labeled by a multiresolution description in terms of a local power spectrum, and whose edges are labeled by geometrical distance vectors. Object recognition can be formulated as elastic graph matching, which is performed here by stochastic optimization of a matching cost function. The implementation on a transputer network achieved recognition of human faces and office objects from gray-level camera images. The performance of the program is evaluated by a statistical analysis of recognition results from a portrait gallery comprising images of 87 persons.

1,973 citations


Journal ArticleDOI
TL;DR: A new model for active contours based on a geometric partial differential equation is proposed; it satisfies the maximum principle, permits a rigorous mathematical analysis, enables the extraction of smooth shapes, and can be adapted to find several contours simultaneously.
Abstract: We propose a new model for active contours based on a geometric partial differential equation. Our model is intrinsic, stable (satisfies the maximum principle) and permits a rigorous mathematical analysis. It enables us to extract smooth shapes (we cannot retrieve angles) and it can be adapted to find several contours simultaneously. Moreover, as a consequence of the stability, we can design robust algorithms which can be engineered with no parameters in applications. Numerical experiments are presented.
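
The curvature-driven level-set evolution at the core of such geometric models can be sketched in a few lines. The explicit finite-difference step below implements plain mean curvature motion of an embedding function u; it is not the paper's exact equation or numerical scheme (there is no image-dependent stopping term), and the time step is an assumed small value chosen for stability.

```python
import numpy as np

def curvature_motion_step(u, dt=0.1, eps=1e-8):
    """One explicit Euler step of u_t = |grad u| * div(grad u / |grad u|),
    the curvature term at the core of geometric active-contour models.
    The evolving contour is the zero level set of u."""
    ux  = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / 2.0
    uy  = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / 2.0
    uxx = np.roll(u, -1, 1) - 2 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2 * u + np.roll(u, 1, 0)
    uxy = (np.roll(np.roll(u, -1, 1), -1, 0) - np.roll(np.roll(u, -1, 1), 1, 0)
           - np.roll(np.roll(u, 1, 1), -1, 0) + np.roll(np.roll(u, 1, 1), 1, 0)) / 4.0
    num = uxx * uy**2 - 2 * ux * uy * uxy + uyy * ux**2
    return u + dt * num / (ux**2 + uy**2 + eps)

# Evolve a signed-distance-like initialization; the zero level set shrinks smoothly.
y, x = np.mgrid[0:64, 0:64]
u = 20.0 - np.hypot(x - 32, y - 32)     # positive inside a circle of radius 20
for _ in range(50):
    u = curvature_motion_step(u)
```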

1,948 citations


Book
01 Feb 1993
TL;DR: This book covers signals in space and time, apertures and arrays, conventional and adaptive array processing, and the supporting detection, estimation, and tracking theory.
Abstract: 1. Introduction 2. Signals in Space and Time 3. Apertures and Arrays 4. Conventional Array Processing 5. Detection Theory 6. Estimation Theory 7. Adaptive Array Processing 8. Tracking Appendices References List of Symbols Index.

1,933 citations


Journal ArticleDOI
TL;DR: The most effective method for image processing involves thresholding by shape as characterized by the correlation coefficient of the data with respect to a reference waveform followed by formation of a cross‐correlation image.
Abstract: Image processing strategies for functional magnetic resonance imaging (FMRI) data sets acquired using a gradient-recalled echo-planar imaging sequence are considered. The analysis is carried out using the mathematics of vector spaces. Data sets consisting of N sequential images of the same slice of brain tissue are analyzed in the time domain and also, after Fourier transformation, in the frequency domain. A technique for thresholding is introduced that uses the shape of the response in a pixel compared with the shape of a reference waveform as the decision criterion. A method is presented to eliminate drifts in data that arise from subject movement. The methods are applied to experimental FMRI data from the motor cortex and compared with more conventional image-subtraction methods. Several finger motion paradigms are considered in the context of the various image processing strategies. The most effective method for image processing involves thresholding by shape as characterized by the correlation coefficient of the data with respect to a reference waveform followed by formation of a cross-correlation image. Emphasis is placed not only on image formation, but also on the use of signal processing techniques to characterize the temporal response of the brain to the paradigm.
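
A minimal sketch of the correlation-based thresholding idea, assuming the data are already organized as an (N, H, W) array of sequential images and a length-N reference waveform is available; the drift-removal step and the frequency-domain analysis described in the paper are omitted.

```python
import numpy as np

def correlation_image(series, reference, threshold=0.5):
    """series: (N, H, W) array of N sequential images of one slice.
    reference: length-N waveform (e.g. the task on/off pattern).
    Returns the pixelwise correlation coefficient with the reference,
    zeroed where it falls below `threshold`."""
    s = series - series.mean(axis=0)                 # remove each pixel's mean
    r = reference - reference.mean()
    num = np.tensordot(r, s, axes=(0, 0))            # sum_t r(t) * s(t, y, x)
    den = np.sqrt((s**2).sum(axis=0) * (r**2).sum()) + 1e-12
    cc = num / den                                   # correlation coefficient per pixel
    return np.where(cc >= threshold, cc, 0.0)

# Example: 64 time points of a 32x32 slice with a boxcar reference.
rng = np.random.default_rng(1)
ref = np.tile(np.r_[np.zeros(8), np.ones(8)], 4)     # length-64 on/off pattern
data = rng.normal(size=(64, 32, 32))
data[:, 10:14, 10:14] += 2.0 * ref[:, None, None]    # an "active" patch
activation = correlation_image(data, ref)
```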

1,792 citations


Proceedings ArticleDOI
01 Sep 1993
TL;DR: This approach builds on several previous texture generation and filtering techniques but is unique because it is local, one-dimensional and independent of any predefined geometry or texture.
Abstract: Imaging vector fields has applications in science, art, image processing and special effects. An effective new approach is to use linear and curvilinear filtering techniques to locally blur textures along a vector field. This approach builds on several previous texture generation and filtering techniques [8, 9, 11, 14, 15, 17, 23]. It is, however, unique because it is local, one-dimensional and independent of any predefined geometry or texture. The technique is general and capable of imaging arbitrary two- and three-dimensional vector fields. The local one-dimensional nature of the algorithm lends itself to highly parallel and efficient implementations. Furthermore, the curvilinear filter is capable of rendering detail on very intricate vector fields. Combining this technique with other rendering and image processing techniques, like periodic motion filtering, results in richly informative and striking images. The technique can also produce novel special effects.
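
A compact sketch of line integral convolution: white noise is averaged along streamlines of the vector field traced with a fixed-step scheme and a box filter. The curvilinear filter, normalization details, and animation effects of the paper go well beyond this, and the step length and filter length below are arbitrary choices.

```python
import numpy as np

def lic(vx, vy, texture, length=15, step=0.5):
    """Line integral convolution: blur `texture` (white noise) along the
    vector field (vx, vy) by averaging samples taken while stepping
    forward and backward along the local field direction."""
    h, w = texture.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    out = texture.copy()
    count = np.ones_like(texture)
    for sign in (+1.0, -1.0):
        px, py = x.copy(), y.copy()
        for _ in range(length):
            ix = np.clip(px.round().astype(int), 0, w - 1)
            iy = np.clip(py.round().astype(int), 0, h - 1)
            px = px + sign * step * vx[iy, ix]          # advance along the field
            py = py + sign * step * vy[iy, ix]
            jx = np.clip(px.round().astype(int), 0, w - 1)
            jy = np.clip(py.round().astype(int), 0, h - 1)
            out += texture[jy, jx]                      # accumulate noise samples
            count += 1.0
    return out / count

# Example: a circular flow field convolved with white noise.
h = w = 128
yy, xx = np.mgrid[0:h, 0:w] - 64.0
r = np.hypot(xx, yy) + 1e-6
img = lic(-yy / r, xx / r, np.random.default_rng(2).random((h, w)))
```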

1,417 citations


Patent
26 Oct 1993
TL;DR: In this paper, a touch-sensitive controller has a number of semi-transparent light-diffusing panels imaged by a rear mounted imaging device such as a video camera, which is arranged to detect the shadows of objects such as fingers, touching any of the panels.
Abstract: An interactive graphics system includes a touch-sensitive controller having a number of semi-transparent light-diffusing panels imaged by a rear mounted imaging device such as a video camera. The imaging device is arranged to detect the shadows of objects, such as fingers, touching any of the panels. The camera can simultaneously detect multiple touch points on the panels resulting from the touch of multiple fingers, which facilitates the detection of input gestures. The panel and the position on the panel touched can be determined by the position of the shadow on the video image. As the imaging device is only required to detect the existence of shadows on the panels, only a two-dimensional image must be processed. However, since the imaging device can image multiple panels simultaneously, a multi-dimensional input signal can be provided. Further, as this image is of high contrast, only light/dark areas must be differentiated for greatly simplified image processing.
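
The image processing the patent relies on is deliberately simple and can be illustrated as follows: threshold a high-contrast camera frame for dark shadow blobs and report one centroid per touch. Panel geometry, calibration, and gesture interpretation are not modeled, and the threshold value is an arbitrary assumption.

```python
import numpy as np
from scipy import ndimage

def touch_points(frame, dark_threshold=60):
    """frame: 8-bit grayscale image of the back of the diffusing panel(s).
    Returns (row, col) centroids of dark shadow blobs, one per touching finger."""
    shadow = frame < dark_threshold                      # high contrast => simple threshold
    labels, n = ndimage.label(shadow)                    # connected shadow regions
    return ndimage.center_of_mass(shadow, labels, list(range(1, n + 1)))

# Example: a bright panel image with two simulated finger shadows.
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[30:40, 40:50] = 20
frame[80:92, 100:112] = 25
print(touch_points(frame))   # two centroids, one per shadow
```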

1,100 citations


Journal ArticleDOI
TL;DR: This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II; each has its merits and drawbacks.
Abstract: This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II. A wide array of approaches have been discussed; each has its merits and drawbacks. We have also given pointers to other approaches not discussed in depth in this review. The methods reviewed fall roughly into four model groups: c-means, maximum likelihood, neural networks, and k-nearest neighbor rules. Both supervised and unsupervised schemes require human intervention to obtain clinically useful results in MR segmentation. Unsupervised techniques require somewhat less interaction on a per patient/image basis. Maximum likelihood techniques have had some success, but are very susceptible to the choice of training region, which may need to be chosen slice by slice for even one patient. Generally, techniques that must assume an underlying statistical distribution of the data (such as LML and UML) do not appear promising, since tissue regions of interest do not usually obey the distributional tendencies of probability density functions. The most promising supervised techniques reviewed seem to be FF/NN methods that allow hidden layers to be configured as examples are presented to the system. An example of a self-configuring network, FF/CC, was also discussed. The relatively simple k-nearest neighbor rule algorithms (hard and fuzzy) have also shown promise in the supervised category. Unsupervised techniques based upon fuzzy c-means clustering algorithms have also shown great promise in MR image segmentation. Several unsupervised connectionist techniques have recently been experimented with on MR images of the brain and have provided promising initial results. A pixel-intensity-based edge detection algorithm has recently been used to provide promising segmentations of the brain. This is also an unsupervised technique, older versions of which have been susceptible to oversegmenting the image because of the lack of clear boundaries between tissue types or finding uninteresting boundaries between slightly different types of the same tissue. To conclude, we offer some remarks about improving MR segmentation techniques. The better unsupervised techniques are too slow. Improving speed via parallelization and optimization will improve their competitiveness with, e.g., the k-nn rule, which is the fastest technique covered in this review. Another area for development is dynamic cluster validity. Unsupervised methods need better ways to specify and adjust c, the number of tissue classes found by the algorithm. Initialization is a third important area of research. Many of the schemes listed in Table II are sensitive to good initialization, both in terms of the parameters of the design, as well as operator selection of training data.(ABSTRACT TRUNCATED AT 400 WORDS)
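
As one concrete instance of the unsupervised methods discussed above, here is a minimal fuzzy c-means sketch operating on a 1-D intensity feature only; real MR segmentation pipelines in the review use multispectral feature vectors, cluster-validity analysis for choosing c, and post-processing that are not shown here.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=50, seed=0):
    """Unsupervised fuzzy c-means on a 1-D feature vector x (e.g. flattened
    pixel intensities). Returns cluster centers and the membership matrix U
    with U[i, k] = membership of sample k in cluster i."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                # fuzzily weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9 # distances to each center
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)                               # standard FCM membership update
    return centers, u

# Example: three synthetic "tissue" intensity populations.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(mu, 5, 500) for mu in (40, 100, 160)])
centers, memberships = fuzzy_c_means(pixels)
labels = memberships.argmax(axis=0)                      # hard segmentation if needed
```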

Proceedings ArticleDOI
11 May 1993
TL;DR: The authors present an extension to the pyramid approach to image fusion that provides greater shift invariance and immunity to video noise, and provides at least a partial solution to the problem of combining components that have roughly equal salience but opposite contrasts.
Abstract: The authors present an extension to the pyramid approach to image fusion. The modifications address problems that were encountered with past implementations of pyramid-based fusion. In particular, the modifications provide greater shift invariance and immunity to video noise, and provide at least a partial solution to the problem of combining components that have roughly equal salience but opposite contrasts. The fusion algorithm was found to perform well for a range of tasks without requiring adjustment of the algorithm parameters. Results were remarkably insensitive to changes in these parameters, suggesting that the procedure is both robust and generic. A composite imaging technique is outlined that may provide a powerful tool for image capture. By fusing a set of images obtained under restricted, narrowband, imaging conditions, it is often possible to construct an image that has enhanced information content when compared to a single image obtained directly with a broadband sensor.
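
A bare-bones Laplacian-pyramid fusion sketch conveys the general pyramid approach being extended: at each level keep the coefficient with the larger magnitude and average the top level. The paper's actual contributions (gradient pyramids, salience and match measures, noise immunity) are not reproduced, and the shrink/expand steps here are simple Gaussian-filter-and-zoom stand-ins.

```python
import numpy as np
from scipy import ndimage

def _shrink(img):
    return ndimage.zoom(ndimage.gaussian_filter(img, 1.0), 0.5, order=1)

def _expand(img, shape):
    return ndimage.zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

def laplacian_pyramid(img, levels=4):
    pyr = []
    for _ in range(levels):
        small = _shrink(img)
        pyr.append(img - _expand(small, img.shape))   # band-pass residual
        img = small
    pyr.append(img)                                   # low-pass top level
    return pyr

def fuse(a, b, levels=4):
    """Fuse two registered grayscale images: keep, at every pyramid position,
    the coefficient with the larger absolute value; average the top level."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb) for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = _expand(out, lap.shape) + lap           # collapse the pyramid
    return out

# Example: fuse two differently degraded versions of the same scene.
rng = np.random.default_rng(4)
scene = ndimage.gaussian_filter(rng.random((128, 128)), 3.0)
f = fuse(ndimage.gaussian_filter(scene, 2.0), scene + 0.05 * rng.random((128, 128)))
```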

Journal ArticleDOI
TL;DR: Accurate computation of image motion enables the enhancement of image sequences that include improvement of image resolution, filling-in occluded regions, and reconstruction of transparent objects.

Journal ArticleDOI
TL;DR: The use of continuous B-spline representations for signal processing applications such as interpolation, differentiation, filtering, noise reduction, and data compression, and the extension of such operators for higher-dimensional signals such as digital images, is considered.
Abstract: The use of continuous B-spline representations for signal processing applications such as interpolation, differentiation, filtering, noise reduction, and data compression is considered. The B-spline coefficients are obtained through a linear transformation, which unlike other commonly used transforms is space invariant and can be implemented efficiently by linear filtering. The same property also applies for the indirect B-spline transform as well as for the evaluation of approximating representations using smoothing or least squares splines. The filters associated with these operations are fully characterized by explicitly evaluating their transfer functions for B-splines of any order. Applications to differentiation, filtering, smoothing, and least-squares approximation are examined. The extension of such operators for higher-dimensional signals such as digital images is considered.
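
The space-invariant prefilter described in the abstract can be sketched for the cubic case: the direct B-spline transform reduces to one causal and one anticausal first-order recursive filter with pole z1 = sqrt(3) - 2. The boundary initialization below is a simple truncated-series/mirror approximation rather than an exact treatment.

```python
import numpy as np

def cubic_bspline_coeffs(x):
    """Direct cubic B-spline transform of a 1-D signal via a recursive
    causal/anticausal filter pair (pole z1 = sqrt(3) - 2). Boundary handling
    is approximate near the ends of the signal."""
    z1 = np.sqrt(3.0) - 2.0
    n = len(x)
    cp = np.empty(n)
    # causal pass: 1 / (1 - z1 z^-1), initialized with a truncated power series
    horizon = min(n, 30)
    cp[0] = np.sum(x[:horizon] * z1 ** np.arange(horizon))
    for k in range(1, n):
        cp[k] = x[k] + z1 * cp[k - 1]
    cm = np.empty(n)
    # anticausal pass: -z1 / (1 - z1 z), mirror-style initialization
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm

# Check: filtering the coefficients with the sampled cubic B-spline [1, 4, 1]/6
# must reproduce the original samples away from the boundaries.
x = np.sin(np.linspace(0, 4 * np.pi, 100))
c = cubic_bspline_coeffs(x)
x_rec = np.convolve(c, np.array([1.0, 4.0, 1.0]) / 6.0, mode='same')
print(np.max(np.abs(x_rec[5:-5] - x[5:-5])))   # close to machine precision
```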

Patent
13 May 1993
TL;DR: In this paper, the authors used image processing to determine information about the position of a designated object in an endoscopic surgery procedure, which is particularly useful in applications where the object is difficult to view or locate.
Abstract: The present method and apparatus use image processing to determine information about the position of a designated object. The invention is particularly useful in applications where the object is difficult to view or locate. In particular, the invention is used in endoscopic surgery to determine positional information about an anatomical feature within a patient's body. The positional information is then used to position or reposition an instrument (surgical instrument) in relation to the designated object (anatomical feature). The invention comprises an instrument which is placed in relation to the designated object and which is capable of sending information about the object to a computer. Image processing methods are used to generate images of the object and determine positional information about it. This information can be used as input to robotic devices or can be rendered, in various ways (video graphics, speech synthesis), to a human user. Various input apparatus are attached to the transmitting or other instruments used to provide control inputs to the computer.

Proceedings ArticleDOI
08 Sep 1993
TL;DR: Here I show how to compute a matrix that is optimized for a particular image, and custom matrices for a number of images show clear improvement over image-independent matrices.
Abstract: This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
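
The block-DCT quantization that the perceptual optimization operates on looks as follows; the vision-model terms (contrast sensitivity, masking, light adaptation, jnd pooling) are not implemented, and the quantization matrix in the example is an arbitrary placeholder standing in for a per-image optimized matrix.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_blocks(img, q):
    """Apply an 8x8 block DCT, quantize each coefficient by the matrix q
    (divide and round), then dequantize and invert. Returns the reconstruction.
    `q` is where a perceptually optimized, per-image matrix would be used."""
    h, w = (s - s % 8 for s in img.shape)              # crop to whole blocks
    out = np.empty((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            coeffs = dctn(img[i:i+8, j:j+8], norm='ortho')
            coeffs = np.round(coeffs / q) * q          # quantization error lives here
            out[i:i+8, j:j+8] = idctn(coeffs, norm='ortho')
    return out

# Illustrative (not perceptually optimized) matrix: coarser at higher frequencies.
u, v = np.meshgrid(np.arange(8), np.arange(8))
q = 4.0 + 2.0 * (u + v)
img = np.random.default_rng(5).random((64, 64)) * 255
rec = quantize_blocks(img, q)
print(np.abs(rec - img).mean())                        # mean absolute quantization error
```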

Journal ArticleDOI
TL;DR: Hardware components for 3D PTV systems will be discussed, and a strict mathematical model of photogrammetric 3D coordinate determination, taking into account the different refractive indices in the optical path, will be presented.
Abstract: Particle Tracking Velocimetry (PTV) is a well-known technique for the determination of velocity vectors within an observation volume. However, for a long time it has rarely been applied because of the intensive effort necessary to measure coordinates of a large number of flow marker particles in many images. With today's imaging hardware in combination with the methods of digital image processing and digital photogrammetry, however, new possibilities have arisen for the design of completely automatic PTV systems. A powerful 3D PTV has been developed in a cooperation of the Institute of Geodesy and Photogrammetry with the Institute of Hydromechanics and Water Resources Management at the Swiss Federal Institute of Technology. In this paper, hardware components for 3D PTV systems will be discussed, and a strict mathematical model of photogrammetric 3D coordinate determination, taking into account the different refractive indices in the optical path, will be presented. The system described is capable of determining coordinate sets of some 1000 particles in a flow field at a time resolution of 25 datasets per second and almost arbitrary sequence length completely automatically after an initialization by an operator. The strict mathematical modelling of the measurement geometry, together with a thorough calibration of the system, provides for a coordinate accuracy of typically 0.06 mm in X, Y and 0.18 mm in Z (depth coordinate) in a volume of 200 × 160 × 50 mm³.

Patent
30 Jul 1993
TL;DR: In this paper, a method and system for embedding signatures within visual images in both digital representation and print or film is presented. But the signature is inseparably embedded within the visible image, the signature persisting through image transforms that include resizing as well as conversion to print and back to digital form.
Abstract: A method and system for embedding signatures within visual images in both digital representation and print or film. A signature is inseparably embedded within the visible image, the signature persisting through image transforms that include resizing as well as conversion to print or film and back to digital form. Signature points are selected from among the pixels of an original image. The pixel values of the signature points and surrounding pixels are adjusted by an amount detectable by a digital scanner. The adjusted signature points form a digital signature which is stored for future identification of subject images derived from the image. In one embodiment, a signature is embedded within an image by locating relative extrema in the continuous space of pixel values and selecting the signature points from among the extrema. Preferably, the signature is redundantly embedded in the image such that any of the redundant representations can be used to identify the signature. Identification of a subject image includes ensuring that the subject image is normalized with respect to the original image or the signed image. Preferably, the normalized subject image is compared with the stored digital signature.
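
A toy sketch of the underlying idea: nudge pixel values at pseudo-randomly selected signature points and later test for the signature by correlating a high-pass residual with the stored adjustments. The patent's extrema-based point selection, redundant embedding, and normalization against resizing and print/scan are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def embed_signature(img, n_points=256, strength=2.0, seed=42):
    """Adjust pixel values at pseudo-randomly chosen signature points.
    Returns the signed image and the signature (point indices and signs)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(img.size, size=n_points, replace=False)
    signs = rng.choice([-1.0, 1.0], size=n_points)
    signed = img.astype(float).ravel().copy()
    signed[idx] += strength * signs              # small, scanner-detectable adjustments
    return signed.reshape(img.shape), (idx, signs)

def detect_signature(img, signature):
    """Correlate a high-pass residual of the image with the stored adjustment
    signs at the signature points; a clearly positive score suggests the
    signature is present, a score near zero suggests it is not."""
    idx, signs = signature
    residual = img.astype(float) - gaussian_filter(img.astype(float), 3.0)
    return float(np.mean(residual.ravel()[idx] * signs))

# Example: a smooth host image, signed and then checked.
rng = np.random.default_rng(0)
host = gaussian_filter(rng.random((128, 128)) * 255.0, 3.0)
signed, sig = embed_signature(host)
print(detect_signature(signed, sig), detect_signature(host, sig))   # ~2.0 vs ~0.0
```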

Book
01 Jan 1993
TL;DR: The asymptotic minimax approach described in this book is based on non-parametric regression and non-parametric change-point analysis; it finds the image estimators that achieve the best order of accuracy for the worst images in a particular functional class.
Abstract: Image processing is an increasingly important area of research and there exists a large variety of image reconstruction methods proposed by different authors. This book is concerned with a technique for image reconstruction known as the asymptotic minimax approach, which is based on non-parametric regression and non-parametric change-point analysis. In effect, the central idea is to assume that the image under analysis belongs to a certain functional class and the method finds the image estimators which achieve the best order of accuracy for the worst images in that class. The first two chapters present the basic ideas required from non-parametric regression and change-point analysis whilst the subsequent chapters develop the main theory and examples of applications. In order to provide a relatively simple account of this method, the authors' emphasis is to present results under the simplest assumptions which still allow the main features of a particular problem. As a result the book is essentially self-contained, although it does assume a firm grounding in functional analysis, statistics and image processing fundamentals.

Journal ArticleDOI
TL;DR: In this article, a multiscale representation of grey-level shape called the scale-space primal sketch is presented, which makes explicit both features in scale space and the relations between structures at different scales, and a methodology for extracting significant blob-like image structures from this representation.
Abstract: This article presents: (i) a multiscale representation of grey-level shape called the scale-space primal sketch, which makes explicit both features in scale-space and the relations between structures at different scales, (ii) a methodology for extracting significant blob-like image structures from this representation, and (iii) applications to edge detection, histogram analysis, and junction classification demonstrating how the proposed method can be used for guiding later-stage visual processes. The representation gives a qualitative description of image structure, which allows for detection of stable scales and associated regions of interest in a solely bottom-up data-driven way. In other words, it generates coarse segmentation cues, and can hence be seen as preceding further processing, which can then be properly tuned. It is argued that once such information is available, many other processing tasks can become much simpler. Experiments on real imagery demonstrate that the proposed theory gives intuitive results.
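
The full scale-space primal sketch (blob linking across scales and significance ranking) does not fit in a few lines, but the underlying idea of picking out blob-like structures together with their characteristic scales can be sketched with a scale-normalized Laplacian-of-Gaussian stack; the scale sampling and threshold below are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

def log_blobs(img, sigmas=(1, 2, 4, 8, 16), threshold=0.05):
    """Build a stack of scale-normalized Laplacian-of-Gaussian responses and
    return (row, col, sigma) triples where the response is a local maximum
    over space and scale. A crude stand-in for full scale-space blob linking."""
    stack = np.stack([(s ** 2) * -ndimage.gaussian_laplace(img.astype(float), s)
                      for s in sigmas])               # bright blobs -> positive response
    peaks = (stack == ndimage.maximum_filter(stack, size=3)) & (stack > threshold)
    scale_idx, rows, cols = np.nonzero(peaks)
    return [(r, c, sigmas[i]) for i, r, c in zip(scale_idx, rows, cols)]

# Example: two Gaussian blobs of different sizes.
y, x = np.mgrid[0:128, 0:128].astype(float)
img = np.exp(-((x - 40)**2 + (y - 40)**2) / (2 * 4.0**2)) \
    + np.exp(-((x - 90)**2 + (y - 90)**2) / (2 * 10.0**2))
print(log_blobs(img))   # detected positions with their characteristic scales
```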

Journal ArticleDOI
01 Dec 1993
TL;DR: This paper shows how simple, parallel techniques relying on correlation followed by interpolation can be combined to compute reliable dense depth maps for complex real-world scenes, and demonstrates very good performance on difficult images such as faces and cluttered ground-level scenes.
Abstract: To compute reliable dense depth maps, a stereo algorithm must preserve depth discontinuities and avoid gross errors. In this paper, we show how simple and parallel techniques can be combined to achieve this goal and deal with complex real-world scenes. Our algorithm relies on correlation followed by interpolation. During the correlation phase the two images play a symmetric role and we use a validity criterion for the matches that eliminates gross errors: at places where the images cannot be correlated reliably, due to lack of texture or occlusions for example, the algorithm does not produce wrong matches but a very sparse disparity map as opposed to a dense one when the correlation is successful. To generate a dense depth map, the information is then propagated across the featureless areas, but not across discontinuities, by an interpolation scheme that takes image grey levels into account to preserve image features. We show that our algorithm performs very well on difficult images such as faces and cluttered ground-level scenes. Because all the algorithms described here are parallel and very regular, they could be implemented in hardware and lead to extremely fast stereo systems.
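
A compact sketch of correlation-based matching with a left-right consistency check playing the role of the validity criterion: pixels whose left-to-right and right-to-left matches disagree are left unmatched, producing a sparse map. The paper's correlation measure, subpixel estimation, and grey-level-guided interpolation across featureless areas are not included, and the window size and disparity range are assumed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left, right, max_disp=16, radius=3):
    """Sparse disparities by SAD block matching with a left-right consistency
    check acting as the validity criterion: pixels whose left->right and
    right->left matches disagree by more than one pixel are set to -1."""
    h, w = left.shape
    win = 2 * radius + 1

    def costs(a, b):
        # cost[d, y, x] = windowed mean of |a(y, x) - b(y, x - d)|
        c = np.full((max_disp + 1, h, w), np.inf)
        for d in range(max_disp + 1):
            diff = np.abs(a[:, d:] - b[:, :w - d]) if d else np.abs(a - b)
            c[d, :, d:] = uniform_filter(diff, size=win)
        return c

    dl = costs(left, right).argmin(axis=0)                             # left-referenced
    dr = costs(right[:, ::-1], left[:, ::-1]).argmin(axis=0)[:, ::-1]  # right-referenced
    xs = np.arange(w)[None, :]
    partner = np.clip(xs - dl, 0, w - 1)              # matched column in the right image
    valid = np.abs(dr[np.arange(h)[:, None], partner] - dl) <= 1
    return np.where(valid, dl, -1)

# Example: a random texture shifted by 5 pixels.
rng = np.random.default_rng(6)
right = rng.random((64, 96))
left = np.roll(right, 5, axis=1)
disp = disparity_map(left, right)
print(np.median(disp[disp >= 0]))                     # ~5
```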

Journal ArticleDOI
TL;DR: A theory for the system inherent amplification factor dependence on the distance between individual measurement points and detector is proposed, and correction measures are presented.
Abstract: A laser Doppler perfusion imaging technique based on dynamic light scattering in tissue is reported. When a laser beam sequentially scans the tissue (maximal area approximately 12 cm × 12 cm), moving blood cells generate Doppler components in the backscattered light. A fraction of this light is detected by a remote photodiode and converted into an electrical signal. In the signal processor, a signal proportional to the tissue perfusion at each measurement point is calculated and stored. When the scanning procedure is completed, the system generates a color-coded perfusion image on a monitor. A perfusion image is typically built up of data from 4096 measurement sites, recorded during a time period of 4 min. This image has a spatial resolution of about 2 mm. A theory for the system inherent amplification factor dependence on the distance between individual measurement points and detector is proposed and correction measures are presented. Performance results for the laser Doppler perfusion imager obtained with a flow simulator are presented. The advantages of the method are discussed.

Journal ArticleDOI
TL;DR: This paper considers the task of detection of a weak signal in a noisy image and suggests the Hotelling model with channels as a useful model observer for the purpose of assessing and optimizing image quality with respect to simple detection tasks.
Abstract: Image quality can be defined objectively in terms of the performance of some "observer" (either a human or a mathematical model) for some task of practical interest. If the end user of the image will be a human, model observers are used to predict the task performance of the human, as measured by psychophysical studies, and hence to serve as the basis for optimization of image quality. In this paper, we consider the task of detection of a weak signal in a noisy image. The mathematical observers considered include the ideal Bayesian, the nonprewhitening matched filter, a model based on linear-discriminant analysis and referred to as the Hotelling observer, and the Hotelling and Bayesian observers modified to account for the spatial-frequency-selective channels in the human visual system. The theory behind these observer models is briefly reviewed, and several psychophysical studies relating to the choice among them are summarized. Only the Hotelling model with channels is mathematically tractable in all cases considered here and capable of accounting for all of these data. This model requires no adjustment of parameters to fit the data and is relatively insensitive to the details of the channel mechanism. We therefore suggest it as a useful model observer for the purpose of assessing and optimizing image quality with respect to simple detection tasks.
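
A small sketch of the (unchannelized) Hotelling observer, whose template is w = S^{-1} s and whose detectability is SNR^2 = s^T S^{-1} s; the channelized variant recommended in the paper inserts a bank of frequency-selective channels before the same computation, which is not shown here.

```python
import numpy as np

def hotelling_snr(signal, noise_samples):
    """Hotelling observer for signal-known-exactly detection.
    signal: the (flattened) expected signal profile s.
    noise_samples: (n, npix) sample images without the signal, used to
    estimate the covariance S. Returns (template w, detectability SNR)."""
    s = signal.ravel()
    cov = np.cov(noise_samples, rowvar=False) + 1e-6 * np.eye(s.size)
    w = np.linalg.solve(cov, s)             # Hotelling template w = S^-1 s
    snr = float(np.sqrt(s @ w))             # SNR^2 = s^T S^-1 s
    return w, snr

# Example: a faint Gaussian bump in mildly correlated noise on a 16x16 image.
rng = np.random.default_rng(7)
y, x = np.mgrid[0:16, 0:16]
sig = 0.5 * np.exp(-((x - 8)**2 + (y - 8)**2) / 8.0)
noise = rng.normal(size=(2000, 256))
noise = 0.7 * noise + 0.3 * np.roll(noise, 1, axis=1)
w, snr = hotelling_snr(sig, noise)
scores_present = (noise[:1000] + sig.ravel()) @ w
scores_absent = noise[1000:] @ w
print(snr, scores_present.mean() - scores_absent.mean())
```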

Journal ArticleDOI
TL;DR: The reconstruction of images from incomplete block discrete cosine transform (BDCT) data is examined and two methods are proposed for solving this regularized recovery problem based on the theory of projections onto convex sets (POCS) and the constrained least squares (CLS).
Abstract: The reconstruction of images from incomplete block discrete cosine transform (BDCT) data is examined. The problem is formulated as one of regularized image recovery. According to this formulation, the image in the decoder is reconstructed by using not only the transmitted data but also prior knowledge about the smoothness of the original image, which complements the transmitted data. Two methods are proposed for solving this regularized recovery problem. The first is based on the theory of projections onto convex sets (POCS) while the second is based on the constrained least squares (CLS) approach. For the POCS-based method, a new constraint set is defined that conveys smoothness information not captured by the transmitted BDCT coefficients, and the projection onto it is computed. For the CLS method, an objective function is proposed that captures the smoothness properties of the original image. Iterative algorithms are introduced for its minimization. Experimental results are presented.
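
The alternating structure of such recovery methods can be sketched as follows. The smoothing step below is only a stand-in for the paper's smoothness constraint set and its projection (plain Gaussian filtering is not a true projection onto a convex set); the data-consistency step re-imposes the transmitted BDCT coefficients exactly.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def recover(received, known_mask, iters=30, block=8):
    """Iterative recovery from incomplete 8x8 block DCT data: a smoothing step
    (a simple stand-in for the paper's smoothness constraint) alternates with
    restoring the transmitted BDCT coefficients."""
    h, w = received.shape
    est = received.astype(float).copy()
    for _ in range(iters):
        est = gaussian_filter(est, 1.0)                      # encourage smoothness
        for i in range(0, h, block):
            for j in range(0, w, block):
                c = dctn(est[i:i+block, j:j+block], norm='ortho')
                c_rx = dctn(received[i:i+block, j:j+block], norm='ortho')
                m = known_mask[i:i+block, j:j+block]
                c[m] = c_rx[m]                               # data-consistency step
                est[i:i+block, j:j+block] = idctn(c, norm='ortho')
    return est

# Example: randomly drop about half of each block's coefficients (DC always kept).
rng = np.random.default_rng(8)
orig = gaussian_filter(rng.random((64, 64)) * 255.0, 2.0)
mask = rng.random(orig.shape) < 0.5
mask[::8, ::8] = True
received = np.zeros_like(orig)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        c = dctn(orig[i:i+8, j:j+8], norm='ortho')
        c[~mask[i:i+8, j:j+8]] = 0.0
        received[i:i+8, j:j+8] = idctn(c, norm='ortho')
rec = recover(received, mask)
print(np.abs(received - orig).mean(), np.abs(rec - orig).mean())  # error before/after recovery
```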

Book
01 May 1993
TL;DR: This book covers digital image processing fundamentals, digital image transform algorithms, digital image filtering, digital image compression, edge detection algorithms, image segmentation algorithms, and shape description.
Abstract: Digital image processing fundamentals. Digital image transform algorithms. Digital image filtering. Digital image compression. Edge detection algorithms. Image segmentation algorithms. Shape description.

Journal ArticleDOI
TL;DR: The present work demonstrates the underlying theory, showing how the principles can be applied to measurements on standard fluorescent beads and changes in distribution of receptors for platelet-derived growth factor on human foreskin fibroblasts.

Proceedings ArticleDOI
28 Oct 1993
TL;DR: The motivation behind the use of higher-order spectra (HOS) in signal processing, as well as the definitions, properties, and biomedical signal processing applications of higher-order spectra, are presented.
Abstract: The purpose of this keynote lecture of the Signal Analysis Track is to present the motivation behind the use of higher-order spectra (HOS) in signal processing as well as the definitions, properties, and biomedical signal processing applications of higher-order spectra. This lecture will also emphasize the state of science of the higher-order spectra field, especially as it applies to non-stationary signal analysis.

Patent
15 Sep 1993
TL;DR: In this paper, four modulation schemes including digital pulse width modulation, phase contrast modulation, full complex modulation, and analog modulation are discussed for image simulation. But, the phase contrast and full complex modulations have the capability to produce phase information within the image.
Abstract: An image simulation system 20 for testing sensor systems 26 and for training image sensor personnel wherein synthetic image data is generated by a scene generator 21 and projected by an image projector 23. The image projector 23 uses a digital micromirror device array 27 to modulate the incident energy and create an image. Four modulation schemes are discussed, including digital pulse-width modulation, phase contrast modulation, full complex modulation, and analog modulation. The digital pulse-width modulation technique will typically require synchronizing the image sensor and the image projector. Phase contrast modulation, full complex modulation, and analog modulation do not require synchronizing the image projector 23 and the sensor system 26. Phase contrast modulation and full complex modulation have the capability to produce phase information within the image. The image simulation system 20 can produce high contrast images and is more flexible than prior art systems.

Journal ArticleDOI
TL;DR: In this article, a model for data acquired with the use of a charge-coupled-device camera is given and then used for developing a new iterative method for restoring intensities of objects observed with such a camera.
Abstract: A model for data acquired with the use of a charge-coupled-device camera is given and is then used for developing a new iterative method for restoring intensities of objects observed with such a camera. The model includes the effects of point spread, photoconversion noise, readout noise, nonuniform flat-field response, nonuniform spectral response, and extraneous charge carriers resulting from bias, dark current, and both internal and external background radiation. An iterative algorithm is identified that produces a sequence of estimates converging toward a constrained maximum-likelihood estimate of the intensity distribution of an imaged object. An example is given for restoring images from data acquired with the use of the Hubble Space Telescope.
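
The paper's constrained ML iteration accounts for the full CCD model; as a simplified illustration of the multiplicative-update flavor of such algorithms, here is the closely related Richardson-Lucy iteration for Poisson data with a known PSF and a constant background only (no readout noise, flat-field, or spectral-response terms).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, background=0.0, iters=50):
    """Multiplicative ML iteration for Poisson data: d ~ Poisson(psf * x + b).
    A simplified stand-in for the full CCD model in the paper.
    `psf` should be normalized to unit sum."""
    x = np.full_like(data, data.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        model = fftconvolve(x, psf, mode='same') + background
        ratio = data / np.maximum(model, 1e-12)
        x *= fftconvolve(ratio, psf_flip, mode='same')     # estimate stays nonnegative
    return x

# Example: blur + background + Poisson noise, then restore.
rng = np.random.default_rng(9)
truth = np.zeros((64, 64)); truth[20:24, 30:34] = 50.0; truth[40, 12] = 300.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
lam = np.clip(fftconvolve(truth, psf, mode='same') + 5.0, 0, None)
data = rng.poisson(lam).astype(float)
restored = richardson_lucy(data, psf, background=5.0)
```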

Patent
08 Sep 1993
TL;DR: In this article, a 3D human interface apparatus using a motion recognition based on a dynamic image processing in which the motion of an operator operated object as an imaging target can be recognized accurately and stably.
Abstract: A 3D human interface apparatus using a motion recognition based on a dynamic image processing in which the motion of an operator operated object as an imaging target can be recognized accurately and stably. The apparatus includes: an image input unit for entering a plurality of time series images of an object operated by the operator into a motion representing a command; a feature point extraction unit for extracting at least four feature points including at least three reference feature points and one fiducial feature point on the object, from each of the images; a motion recognition unit for recognizing the motion of the object by calculating motion parameters, according to an affine transformation determined from changes of positions of the reference feature points on the images, and a virtual parallax for the fiducial feature point expressing a difference between an actual position change on the images and a virtual position change according to the affine transformation; and a command input unit for inputting the command indicated by the motion of the object recognized by the motion recognition unit.
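
A small numerical sketch of the two quantities the recognition unit is described as computing: the affine transformation estimated from the reference feature points between frames, and the virtual parallax of the fiducial point, i.e., the gap between its observed position change and the change predicted by that affine transformation. The point coordinates below are made up for illustration.

```python
import numpy as np

def affine_from_points(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.
    src, dst: (n, 2) arrays with n >= 3 feature points.
    Returns A (2x2) and t (2,) such that dst ~= src @ A.T + t."""
    n = src.shape[0]
    m = np.hstack([src, np.ones((n, 1))])              # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)   # (3, 2) parameter matrix
    return params[:2].T, params[2]

def virtual_parallax(affine, fiducial_prev, fiducial_now):
    """Difference between the observed fiducial position and the position
    predicted by the affine transform of the reference points."""
    a, t = affine
    return fiducial_now - (a @ fiducial_prev + t)

# Example: three reference points follow a rotation + translation exactly;
# the fiducial point has extra displacement, so its virtual parallax is nonzero.
theta = np.deg2rad(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
refs_prev = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
refs_now = refs_prev @ rot.T + np.array([1.0, 2.0])
aff = affine_from_points(refs_prev, refs_now)
fid_prev = np.array([5.0, 5.0])
fid_now = rot @ fid_prev + np.array([1.0, 2.0]) + np.array([0.4, -0.3])
print(virtual_parallax(aff, fid_prev, fid_now))        # ~[0.4, -0.3]
```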