
Showing papers on "Image processing published in 1988"


Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations


Journal ArticleDOI
TL;DR: This paper presents a survey of thresholding techniques and attempts to evaluate the performance of some automatic global thresholding methods using criterion functions such as uniformity and shape measures.
Abstract: In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process. 7, 1978, 259–265) and Fu and Mui (Pattern Recognit. 13, 1981, 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using criterion functions such as uniformity and shape measures. The evaluation is based on some real-world images.
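
As a concrete illustration of the class of automatic global thresholding methods the survey evaluates, here is a minimal sketch of Otsu-style between-class-variance thresholding (a standard global method; the survey's own uniformity and shape criteria are not reproduced here):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the global threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()       # gray-level probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    omega = np.cumsum(p)                      # class-0 probability mass
    mu = np.cumsum(p * centers)               # class-0 first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

img = np.random.rand(64, 64) + (np.arange(64) > 32)   # toy bimodal image
binary = img > otsu_threshold(img)                    # threshold near 1.0
```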

2,771 citations


Journal ArticleDOI
TL;DR: This survey provides a guide to quickly acquaint researchers with the main literature on the Hough transform, a technique that seems likely to see increasing use.
Abstract: We present a comprehensive review of the Hough transform, HT, in image processing and computer vision. It has long been recognized as a technique of almost unique promise for shape and motion analysis in images containing noisy, missing, and extraneous data but its adoption has been slow due to its computational and storage complexity and the lack of a detailed understanding of its properties. However, in recent years much progress has been made in these areas. In this review we discuss ideas for the efficient implementation of the HT and present results on the analytic and empirical performance of various methods. We also report the relationship of Hough methods and other transforms and consider applications in which the HT has been used. It seems likely that the HT will be an increasingly used technique and we hope that this survey will provide a useful guide to quickly acquaint researchers with the main literature in this research area.
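
For readers new to the technique, a minimal accumulator-based Hough transform for straight lines (the basic scheme whose efficient variants the review surveys) looks like this:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, n_rho=200):
    """Vote in (rho, theta) space for lines x*cos(t) + y*sin(t) = rho."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*edge_mask.shape)
    rho_edges = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for t_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        r_idx = np.digitize(rho, rho_edges) - 1
        np.add.at(acc, (r_idx, t_idx), 1)   # count repeated cells correctly
    return acc, rho_edges, thetas

mask = np.eye(64, dtype=bool)               # edge points on the line y = x
acc, rho_edges, thetas = hough_lines(mask)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(rho_edges[r], np.degrees(thetas[t]))  # rho ~ 0, theta = 135 degrees
```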

2,099 citations


Journal ArticleDOI
01 Jun 1988
TL;DR: A technique for rendering images of volumes containing mixtures of materials is presented; its shading model allows both the interior of a material and the boundary between materials to be colored.
Abstract: A technique for rendering images of volumes containing mixtures of materials is presented. The shading model allows both the interior of a material and the boundary between materials to be colored. Image projection is performed by simulating the absorption of light along the ray path to the eye. The algorithms used are designed to avoid artifacts caused by aliasing and quantization and can be efficiently implemented on an image computer. Images from a variety of applications are shown.
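
The absorption model described can be sketched as front-to-back compositing along one ray; this toy version (not the paper's full material-mixture shading pipeline) shows the core recurrence:

```python
import numpy as np

def absorb_along_ray(density, color, step=1.0):
    """Front-to-back compositing with absorption only."""
    transmittance, radiance = 1.0, 0.0
    for d, c in zip(density, color):
        alpha = 1.0 - np.exp(-d * step)   # opacity of this ray segment
        radiance += transmittance * alpha * c
        transmittance *= 1.0 - alpha      # light still reaching the eye
        if transmittance < 1e-4:          # early ray termination
            break
    return radiance

density = np.array([0.0, 0.2, 0.8, 0.8, 0.1])   # samples along one ray
color = np.array([0.0, 1.0, 0.5, 0.5, 1.0])
print(absorb_along_ray(density, color))
```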

1,702 citations


01 Jan 1988
TL;DR: A general overview of VLSI array processors, with a unified treatment from algorithm, architecture, and application perspectives.
Abstract: High-speed signal processing depends critically on parallel processor technology. In most applications, general-purpose parallel computers cannot offer satisfactory real-time processing speed due to severe system overhead. Therefore, for real-time digital signal processing (DSP) systems, special-purpose array processors have become the only appealing alternative. In designing or using such array processors, most signal processing algorithms share the critical attributes of regularity, recursiveness, and local communication. These properties are effectively exploited in innovative systolic and wavefront array processors. These arrays maximize the strength of very large scale integration (VLSI) in terms of intensive and pipelined computing, and yet circumvent its main limitation on communication. The application domain of such array processors covers a very broad range, including digital filtering, spectrum estimation, adaptive array processing, image/vision processing, and seismic and tomographic signal processing. This article provides a general overview of VLSI array processors and a unified treatment from algorithm, architecture, and application perspectives.
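
As a didactic sketch of the regular, nearest-neighbour dataflow such arrays exploit (not any specific processor from the article), here is a cycle-by-cycle simulation of a 2-D systolic array computing a matrix product:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an n-by-p grid of PEs computing C = A @ B.

    A's rows flow rightward and B's columns flow downward, each skewed
    by one cycle per index so that A[i, k] and B[k, j] meet in PE(i, j);
    every PE does one multiply-accumulate per cycle using only
    nearest-neighbour data.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    a_reg = np.zeros((n, p))                 # A value held in each PE
    b_reg = np.zeros((n, p))                 # B value held in each PE
    for t in range(n + m + p):
        a_reg = np.roll(a_reg, 1, axis=1)    # shift right
        b_reg = np.roll(b_reg, 1, axis=0)    # shift down
        for i in range(n):                   # skewed feed, left edge
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < m else 0.0
        for j in range(p):                   # skewed feed, top edge
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < m else 0.0
        C += a_reg * b_reg                   # one MAC per PE per cycle
    return C

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
print(np.allclose(systolic_matmul(A, B), A @ B))   # True
```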

1,249 citations


Journal ArticleDOI
TL;DR: The hierarchical chamfer matching algorithm matches edges by minimizing a generalized distance between them in a hierarchical structure, i.e. in a resolution pyramid, which reduces the computational load significantly.
Abstract: The algorithm matches edges by minimizing a generalized distance between them. The matching is performed in a series of images depicting the same scene at different resolutions, i.e. in a resolution pyramid. Using this hierarchical structure reduces the computational load significantly. The algorithm is reasonably simple to implement and is insensitive to noise and other disturbances. It has been tested in several applications, two of which are briefly presented. In the first, the outlines of common tools are matched to gray-level images of the same tools, with overlap. In the second, lake edges from aerial photographs are matched to lake edges from a map, with translation, rotation, scale, and perspective changes. The hierarchical chamfer matching algorithm gives correct results using a reasonable amount of computational resources in all tested applications.
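
A much-reduced sketch of the idea (translation search only, two pyramid-like stages; the paper also handles rotation, scale and perspective) can be written with a distance transform, assuming SciPy is available:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(dist_map, template_pts, offset):
    """Mean edge distance of the template placed at `offset`."""
    pts = template_pts + np.asarray(offset)
    h, w = dist_map.shape
    inside = ((pts[:, 0] >= 0) & (pts[:, 0] < h) &
              (pts[:, 1] >= 0) & (pts[:, 1] < w))
    if not inside.all():
        return np.inf
    return dist_map[pts[:, 0], pts[:, 1]].mean()

def match_translation(image_edges, template_pts, step=4):
    """Coarse grid search, then a fine search around the best cell."""
    dist = distance_transform_edt(~image_edges)   # distance to nearest edge
    h, w = image_edges.shape
    _, (cy, cx) = min(((chamfer_score(dist, template_pts, (dy, dx)), (dy, dx))
                       for dy in range(0, h, step)
                       for dx in range(0, w, step)), key=lambda s: s[0])
    return min(((chamfer_score(dist, template_pts, (dy, dx)), (dy, dx))
                for dy in range(cy - step, cy + step + 1)
                for dx in range(cx - step, cx + step + 1)), key=lambda s: s[0])

img = np.zeros((64, 64), dtype=bool)
img[20:40, 20] = True                          # a vertical edge segment
tmpl = np.stack([np.arange(20), np.zeros(20, dtype=int)], axis=1)
print(match_translation(img, tmpl))            # score ~0 at offset (20, 20)
```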

1,206 citations


Book
18 Feb 1988
TL;DR: A textbook on the computer processing of remotely-sensed images.
Abstract: Computer Processing of Remotely-Sensed Images.

828 citations


Journal ArticleDOI
TL;DR: The edge-detector architecture presented is highly pipelined to compute gradient magnitude and direction for the output image samples; its function has been demonstrated with a prototype system performing image edge detection in real time.
Abstract: The architecture of the edge detector presented is highly pipelined to perform the computations of gradient magnitude and direction for the output image samples. The chip design is based on a 2-µm, double-metal CMOS technology and was implemented using a silicon compiler system in less than 2 man-months. It is designed to operate with a 10-MHz two-phase clock, and it performs approximately 200×10⁶ additions/s to provide the required magnitude and direction outputs every clock cycle. The function of the chip has been demonstrated with a prototype system that performs image edge detection in real time.
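
In software, the magnitude and direction computations the chip pipelines reduce to a pair of gradient convolutions; a plain Sobel-based sketch (the chip's exact operator and fixed-point rounding are not reproduced) is:

```python
import numpy as np
from scipy.ndimage import correlate

def gradient_mag_dir(image):
    """Gradient magnitude and direction from 3x3 Sobel masks."""
    sx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    gx = correlate(image.astype(float), sx, mode="nearest")
    gy = correlate(image.astype(float), sx.T, mode="nearest")
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude, direction

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                  # a vertical step edge
mag, ang = gradient_mag_dir(img)
print(mag.max(), np.degrees(ang[16, 16]))          # 4.0, 0.0 (normal points +x)
```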

743 citations


Journal ArticleDOI
TL;DR: The development and implementation of an algorithm for automated text string separation that is relatively independent of changes in text font style, size, and string orientation are described; the algorithm showed superior performance compared to other techniques.
Abstract: The development and implementation of an algorithm for automated text string separation that is relatively independent of changes in text font style and size and of string orientation are described. It is intended for use in an automated system for document analysis. The principal parts of the algorithm are the generation of connected components and the application of the Hough transform in order to group components into logical character strings that can then be separated from the graphics. The algorithm outputs two images, one containing text strings and the other graphics. These images can then be processed by suitable character recognition and graphics recognition systems. The performance of the algorithm, both in terms of its effectiveness and computational efficiency, was evaluated using several test images and showed superior performance compared to other techniques.
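
The first two stages (connected components, then a Hough-style grouping of components into collinear strings) can be sketched as follows; the thresholds and the box-center approximation are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def component_centroids(binary):
    """Connected components and their (row, col) box centers."""
    lbl, _ = label(binary)
    return np.array([[(s[0].start + s[0].stop - 1) / 2.0,
                      (s[1].start + s[1].stop - 1) / 2.0]
                     for s in find_objects(lbl)])

def group_collinear(cents, n_theta=90, rho_bin=8.0):
    """Hough-style vote: components sharing a (theta, rho) cell
    are taken as one candidate text string."""
    votes = {}
    for idx, (y, x) in enumerate(cents):
        for t in range(n_theta):
            theta = np.pi * t / n_theta
            rho = x * np.cos(theta) + y * np.sin(theta)
            votes.setdefault((t, round(rho / rho_bin)), []).append(idx)
    return max(votes.values(), key=len)

img = np.zeros((40, 120), dtype=bool)
for cx in range(10, 110, 12):
    img[18:22, cx:cx + 6] = True          # a row of blob "characters"
print(group_collinear(component_centroids(img)))   # all blobs, one string
```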

664 citations


Journal ArticleDOI
TL;DR: A textbook covering the principles of remote sensing and the digital processing of remotely sensed imagery, including sensors, pre-processing, enhancement, image transforms, filtering, and classification.
Abstract: Preface to the First Edition. Preface to the Second Edition. Preface to the Third Edition. List of Examples.
1. Remote Sensing: Basic Principles. 1.1 Introduction. 1.2 Electromagnetic radiation and its properties. 1.2.1 Terminology. 1.2.2 Nature of electromagnetic radiation. 1.2.3 The electromagnetic spectrum. 1.2.4 Sources of electromagnetic radiation. 1.2.5 Interactions with the Earth's atmosphere. 1.3 Interaction with Earth-surface materials. 1.3.1 Introduction. 1.3.2 Spectral reflectance of Earth surface materials. 1.3.2.1 Vegetation. 1.3.2.2 Geology. 1.3.2.3 Water bodies. 1.3.2.4 Soils. 1.4 Summary.
2. Remote Sensing Platforms and Sensors. 2.1 Introduction. 2.2 Characteristics of imaging remote sensing instruments. 2.2.1 Spatial resolution. 2.2.2 Spectral resolution. 2.2.3 Radiometric resolution. 2.3 Optical, near-infrared and thermal imaging sensors. 2.3.1 Along-Track Scanning Radiometer (ATSR). 2.3.2 Advanced Very High Resolution Radiometer (AVHRR). 2.3.3 MODIS (MODerate Resolution Imaging Spectrometer). 2.3.4 Ocean observing instruments. 2.3.5 IRS-1 LISS. 2.3.6 Landsat Instruments. 2.3.6.1 Landsat Multi-spectral Scanner (MSS). 2.3.6.2 Landsat Thematic Mapper (TM). 2.3.6.3 Enhanced Thematic Mapper Plus (ETM+). 2.3.6.4 Landsat follow-on programme. 2.3.7 SPOT sensors. 2.3.7.1 SPOT High Resolution Visible (HRV). 2.3.7.2 Vegetation (VGT). 2.3.7.3 SPOT follow-on programme. 2.3.8 Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). 2.3.9 High-resolution commercial and micro-satellite systems. 2.3.9.1 High-resolution commercial satellites - IKONOS. 2.3.9.2 High-resolution commercial satellites - QuickBird. 2.4 Microwave imaging sensors. 2.4.1 ERS SAR. 2.4.2 RADARSAT. 2.5 Summary.
3. Hardware and Software Aspects of Digital Image Processing. 3.1 Introduction. 3.2 Properties of digital remote sensing data. 3.2.1 Digital data. 3.2.2 Data formats. 3.2.3 System processing. 3.3 MIPS software. 3.3.1 Installing MIPS. 3.3.2 Using MIPS. 3.3.3 Summary of MIPS functions. 3.4 Summary.
4. Pre-processing of Remotely Sensed Data. 4.1 Introduction. 4.2 Cosmetic operations. 4.2.1 Missing scan lines. 4.2.2 De-striping methods. 4.2.2.1 Linear method. 4.2.2.2 Histogram matching. 4.2.2.3 Other de-striping methods. 4.3 Geometric correction and registration. 4.3.1 Orbital geometry model. 4.3.2 Transformation based on ground control points. 4.3.3 Resampling procedures. 4.3.4 Image registration. 4.3.5 Other geometric correction methods. 4.4 Atmospheric correction. 4.4.1 Background. 4.4.2 Image-based methods. 4.4.3 Radiative transfer models. 4.4.4 Empirical line method. 4.5 Illumination and view angle effects. 4.6 Sensor calibration. 4.7 Terrain effects. 4.8 Summary.
5. Image Enhancement Techniques. 5.1 Introduction. 5.2 Human visual system. 5.3 Contrast enhancement. 5.3.1 Linear contrast stretch. 5.3.2 Histogram equalisation. 5.3.3 Gaussian stretch. 5.4 Pseudocolour enhancement. 5.4.1 Density slicing. 5.4.2 Pseudocolour transform. 5.5 Summary.
6. Image Transforms. 6.1 Introduction. 6.2 Arithmetic operations. 6.2.1 Image addition. 6.2.2 Image subtraction. 6.2.3 Image multiplication. 6.2.4 Image division and vegetation ratios. 6.3 Empirically based image transforms. 6.3.1 Perpendicular Vegetation Index. 6.3.2 Tasselled Cap (Kauth-Thomas) transformation. 6.4 Principal Components Analysis. 6.4.1 Standard Principal Components Analysis. 6.4.2 Noise-adjusted Principal Components Analysis. 6.4.3 Decorrelation stretch. 6.5 Hue, Saturation and Intensity (HSI) transform. 6.6 The Discrete Fourier Transform. 6.6.1 Introduction. 6.6.2 Two-dimensional DFT. 6.6.3 Applications. 6.7 The Discrete Wavelet Transform. 6.7.1 Introduction. 6.7.2 The one-dimensional Discrete Wavelet Transform. 6.7.3 The two-dimensional Discrete Wavelet Transform. 6.8 Summary.
7. Filtering Techniques. 7.1 Introduction. 7.2 Spatial domain low-pass (smoothing) filters. 7.2.1 Moving average filter. 7.2.2 Median filter. 7.2.3 Adaptive filters. 7.3 Spatial domain high-pass (sharpening) filters. 7.3.1 Image subtraction method. 7.3.2 Derivative-based methods. 7.4 Spatial domain edge detectors. 7.5 Frequency domain filters. 7.6 Summary.
8. Classification. 8.1 Introduction. 8.2 Geometrical basis of classification. 8.3 Unsupervised classification. 8.3.1 The k-means algorithm. 8.3.2 ISODATA. 8.3.3 A modified k-means algorithm. 8.4 Supervised classification. 8.4.1 Training samples. 8.4.2 Statistical classifiers. 8.4.2.1 Parallelepiped classifier. 8.4.2.2 Centroid (k-means) classifier. 8.4.2.3 Maximum likelihood method. 8.4.3 Neural classifiers. 8.5 Fuzzy classification and linear spectral unmixing. 8.5.1 The linear mixture model. 8.5.2 Fuzzy classifiers. 8.6 Other approaches to image classification. 8.7 Incorporation of non-spectral features. 8.7.1 Texture. 8.7.2 Use of external data. 8.8 Contextual information. 8.9 Feature selection. 8.10 Classification accuracy. 8.11 Summary.
9. Advanced Topics. 9.1 Introduction. 9.2 SAR Interferometry. 9.2.1 Basic principles. 9.2.2 Interferometric processing. 9.2.3 Problems in SAR interferometry. 9.2.4 Applications of SAR interferometry. 9.3 Imaging spectrometry. 9.3.1 Introduction. 9.3.2 Processing imaging spectrometer data. 9.3.2.1 Derivative analysis. 9.3.2.2 Smoothing and denoising the reflectance spectrum: Savitzky-Golay polynomial smoothing; denoising using the Discrete Wavelet Transform. 9.3.2.3 Determination of 'red edge' characteristics of vegetation. 9.3.2.4 Continuum removal. 9.4 Lidar. 9.4.1 Introduction. 9.4.2 Lidar details. 9.4.3 Lidar applications.
Appendix A: Description of Sample Image Data Sets. References. Index.

624 citations


Book
01 Jan 1988
TL;DR: Covers the mathematics of image formation and image processing: the concept of object and image, the relationship between them, the general image-processing problem, discrete Fourier representation and models for imaging systems, and the general theory of image restoration.
Abstract: Introduction - and some challenging questions. In the beginning.
Diagnostic radiology with x-rays: Introduction The imaging system and image formation Photon interactions Important physical parameters X-ray tubes Image receptors Digital radiology.
Quality assurance and image improvement in diagnostic radiology with x-rays. Introduction to quality assurance: Basic quality-assurance tests for x-ray sets Specific quality-assurance tests Data collection and presentation of the results Summary of quality assurance Improvement in radiographic quality Scatter removal Contrast enhancement Summary of methods of image enhancement.
X-ray transmission computed tomography: The need for sectional images The principles of sectional imaging Fourier-based solutions: The method of convolution and backprojection Iterative methods of reconstruction Other considerations.
Clinical applications of X-ray computed tomography in radiotherapy planning: X-ray computed tomography scanners and their role in planning Non-standard computed tomography scanners.
The physics of radioisotope imaging: Introduction Radiation detectors Radioisotope imaging equipment Radionuclides for imaging The role of computers in radioisotope imaging Static and dynamic planar scintigraphy Emission computed tomography Quality control and performance assessment of radioisotope imaging equipment Clinical applications of radioisotope imaging.
Diagnostic Ultrasound: Introduction Basic physics Engineering principles of ultrasonic imaging Clinical applications and biological aspects of diagnostic ultrasound Research topics.
Spatially localised nuclear magnetic resonance: Introduction The development of nuclear magnetic resonance Principles of nuclear magnetic resonance Nuclear magnetic resonance pulse sequences Relaxation processes and their measurement Nuclear magnetic resonance image acquisition and reconstruction Spatially localised spectroscopy Instrumentation Nuclear magnetic resonance safety.
Physical aspects of infrared imaging: Introduction Infrared photography Transillumination Infrared imaging Liquid-crystal thermography Microwave thermography.
Imaging of tissue electrical impedance: The electrical behaviour of tissue Tissue impedance imaging Suggested clinical applications of applied potential tomography.
Imaging by diaphanography: Clinical applications Physical basis of transillumination Experimental arrangements.
The mathematics of image formation and image processing: The concept of object and image The relationship between object and image The general image processing problem Discrete Fourier representation and the models for imaging systems The general theory of image restoration Image sampling Two examples of image processing from modern clinical practice Iterative image processing.
Perception and interpretation of images. Introduction The eye and brain as a stage in an imaging system Spatial and contrast resolution Perception of moving images Quantitative measures of investigative performance.
Computer requirements of imaging systems: Single- versus multi-user systems Generation and transfer of images Processing speed Display of medical images Three-dimensional image display: methodology Three-dimensional image display: clinical applications.
Epilogue: Introduction The impact of radiation hazard on medical imaging practice Attributes and relative roles of imaging modalities.
References. Index.

Book
01 Jan 1988
TL;DR: This book progresses rapidly through the fundamentals to advanced topics such as iterative least squares design of IIR filters, inverse filters, power spectral estimation, and multidimensional applications--all in one concise volume.
Abstract: An Introduction to Digital Signal Processing is written for those who need to understand and use digital signal processing and yet do not wish to wade through a multi-semester course sequence. Using only calculus-level mathematics, this book progresses rapidly through the fundamentals to advanced topics such as iterative least squares design of IIR filters, inverse filters, power spectral estimation, and multidimensional applications--all in one concise volume.


Journal ArticleDOI
TL;DR: It is shown that there exists a tradeoff between the number of frequency components used per position and the number of such clusters (sampling rate) utilized along the spatial coordinate.
Abstract: A scheme suitable for visual information representation in a combined frequency-position space is investigated through image decomposition into a finite set of two-dimensional Gabor elementary functions (GEF). The scheme is generalized to account for the position-dependent Gabor-sampling rate, oversampling, logarithmic frequency scaling and phase-quantization characteristic of the visual system. Comparison of reconstructed signals highlights the advantages of the generalized Gabor scheme in coding typical bandlimited images. It is shown that there exists a tradeoff between the number of frequency components used per position and the number of such clusters (sampling rate) utilized along the spatial coordinate.
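
A single Gabor elementary function of the kind used in the decomposition can be sampled directly; the paper's position/frequency sampling lattice and quantization are not reproduced in this sketch:

```python
import numpy as np

def gabor_2d(size, freq, theta, sigma):
    """Complex 2-D Gabor: Gaussian envelope times oriented sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)     # along-wave coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * u)

gef = gabor_2d(size=32, freq=0.1, theta=np.pi / 4, sigma=6.0)
coeff = np.vdot(gef, np.random.rand(33, 33))      # one analysis coefficient
```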

Proceedings ArticleDOI
05 Jun 1988
TL;DR: The subpixel registration allows image enhancement in the form of improved resolution and noise cleaning, and is particularly useful for image sequences taken from an aircraft or satellite, where images in a sequence differ mostly by translation and rotation.
Abstract: Given a sequence of images taken from a moving camera, they are registered with subpixel accuracy with respect to translation and rotation. The subpixel registration allows image enhancement in the form of improved resolution and noise cleaning. Both the registration and the enhancement procedures are described. The methods are particularly useful for image sequences taken from an aircraft or satellite, where images in a sequence differ mostly by translation and rotation. In these cases, the process results in images that are stable, clean, and sharp.
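
A standard way to obtain subpixel translation estimates in this spirit (the paper's own estimator also recovers rotation) is phase correlation with a parabolic peak refinement:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Translation from frame a to frame b, refined to sub-pixel accuracy."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                  # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    shift = []
    for axis in range(2):
        n = corr.shape[axis]
        p = int(peak[axis])
        idx = list(peak)
        idx[axis] = (p - 1) % n
        cm = corr[tuple(idx)]               # neighbour below the peak
        idx[axis] = (p + 1) % n
        cp = corr[tuple(idx)]               # neighbour above the peak
        c0 = corr[peak]
        denom = cm - 2.0 * c0 + cp
        frac = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
        d = p + frac                        # 3-point parabolic refinement
        if d > n / 2:                       # shifts are circular
            d -= n
        shift.append(d)
    return tuple(shift)

a = np.random.rand(128, 128)
b = np.roll(a, (3, 5), axis=(0, 1))         # b is a shifted by (3, 5)
print(phase_correlation_shift(a, b))        # ~ (3.0, 5.0)
```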

Journal ArticleDOI
01 Oct 1988
TL;DR: A new approach to real-time machine vision in dynamic scenes, based on special hardware and on methods for feature extraction and information processing that use integral spatio-temporal models; it bypasses the nonunique inversion of the perspective projection by applying recursive least-squares filtering.
Abstract: A new approach to real-time machine vision in dynamic scenes is presented, based on special hardware and methods for feature extraction and information processing. Using integral spatio-temporal models, it bypasses the nonunique inversion of the perspective projection by applying recursive least-squares filtering. By prediction-error feedback methods similar to those used in modern control theory, all spatial state variables, including the velocity components, are estimated. Only the last image of the sequence needs to be evaluated, thereby alleviating the real-time image sequence processing task.
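
The predictor-corrector recursion at the heart of this approach can be illustrated with a deliberately simplified constant-velocity filter tracking one feature coordinate; the paper's state and measurement models are far richer (spatio-temporal and perspective-aware):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    """Predict state forward, then correct it with the prediction error."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([measurements[0], 0.0])    # state: position, velocity
    P = np.eye(2)
    track = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        innov = z - H @ x                   # prediction error
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # feedback gain
        x = x + (K @ innov).ravel()         # correct
        P = (np.eye(2) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)

t = np.arange(50)
zs = 0.5 * t + np.random.normal(0.0, 0.1, t.size)   # drifting feature
print(kalman_track(zs)[-1])                          # approx. [24.5, 0.5]
```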

Journal ArticleDOI
TL;DR: Results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
Abstract: Adaptive histogram equalization (AHE) and intensity windowing have been compared using psychophysical observer studies. Experienced radiologists were shown clinical CT (computerized tomographic) images of the chest. Into some of the images, appropriate artificial lesions were introduced; the physicians were then shown the images processed with both AHE and intensity windowing. They were asked to assess the probability that a given image contained the artificial lesion, and their accuracy was measured. The results of these experiments show that, for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
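
The two methods being compared can be sketched directly; the brute-force AHE below follows the textbook definition (production implementations interpolate between tile histograms instead), and the window parameters are illustrative:

```python
import numpy as np

def intensity_window(img, center, width):
    """Linear windowing: map [center - width/2, center + width/2] to [0, 1]."""
    lo = center - width / 2.0
    return np.clip((img - lo) / width, 0.0, 1.0)

def adaptive_hist_eq(img, half=16):
    """Textbook AHE: each output pixel is its rank in a local window."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):                      # slow but literal definition
        for j in range(w):
            win = img[max(0, i - half):i + half + 1,
                      max(0, j - half):j + half + 1]
            out[i, j] = (win <= img[i, j]).mean()   # local CDF value
    return out

ct = np.random.rand(64, 64) * 2000.0 - 1000.0       # fake HU-like values
windowed = intensity_window(ct, center=50.0, width=350.0)
equalized = adaptive_hist_eq(np.random.rand(32, 32))
```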

Journal ArticleDOI
TL;DR: A two-dimensional least-mean-square (TDLMS) adaptive algorithm based on the method of steepest descent is proposed and applied to noise reduction in images.
Abstract: A two-dimensional least-mean-square (TDLMS) adaptive algorithm based on the method of steepest descent is proposed and applied to noise reduction in images. The adaptive property of the TDLMS algorithm enables the filter to have improved tracking performance in nonstationary images. The results presented show that the TDLMS algorithm can be used successfully to reduce noise in images. The algorithm complexity is 2N² multiplications and the same number of additions per image sample, where N is the parameter-matrix dimension. The analysis and convergence properties of the LMS algorithm in the one-dimensional case, presented by other authors, are shown to be applicable to this algorithm. The algorithm can be used in a number of two-dimensional applications such as image enhancement and image data processing.
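
A minimal rendition of the TDLMS recursion, with the output as the windowed inner product at each raster-scan position and the update W ← W + 2μeX (the paper's exact input/desired-signal configuration may differ):

```python
import numpy as np

def tdlms(noisy, desired, N=5, mu=1e-3):
    """2-D LMS: y = sum(W * X) per pixel, then W <- W + 2*mu*e*X."""
    h, w = noisy.shape
    pad = N // 2
    xp = np.pad(noisy, pad, mode="edge")
    W = np.full((N, N), 1.0 / N**2)        # start as a mean filter
    out = np.zeros((h, w))
    for i in range(h):                     # raster-scan the image
        for j in range(w):
            X = xp[i:i + N, j:j + N]       # local input window
            y = float((W * X).sum())
            e = desired[i, j] - y          # prediction error
            W = W + 2.0 * mu * e * X       # steepest-descent update
            out[i, j] = y
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
print(np.abs(tdlms(noisy, clean) - clean).mean())   # residual error
```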

Book
01 Aug 1988
TL;DR: Explains the importance of images and why we measure them, and covers computer methods including automatic thresholding and binary image editing.
Abstract:
1 Introduction: The importance of images. Why measure images? Computer methods: an overview. Implementation. Acquisition and processing of images. Measurements within images. More than two dimensions.
2 Acquiring Images: Image sources. Multi-spectral images. Image sensors. Digitization. Specifications. References.
3 Image Processing: Point operations. Time sequences. Correcting image defects - averaging to reduce noise. Reducing noise in a single image. Frequency space. Color images. Shading correction. Fitting backgrounds. Rubber sheeting. Image sharpening. Focussing images. References.
4 Segmentation of Edges and Lines: Defining a feature and its boundary. Roberts' cross edge operator. The Sobel and Kirsch operators. Other edge-finding methods. Other segmentation methods. The Hough transform. Touching features. Manual outlining. References.
5 Discrimination and Thresholding: Brightness thresholds. Thresholding after processing. Selecting threshold settings. The need for automatic thresholding. Automatic methods. Histogram minimum method. Minimum area sensitivity method. Minimum perimeter sensitivity method. Reproducibility testing. Fixed percentage setting. Color images. Encoding binary images. Contiguity. References.
6 Binary Image Editing: Manual editing. Combining images. Neighbor operations. Skeletonization. Measurement using binary image editing. Covariance. Watershed segmentation. Mosaic amalgamation and fractal dimensions. Contiguity and filling interior holes. References.
7 Image Measurements: Reference areas. Boundary curvature. Feature measurements. Perimeter points. Length and breadth. Radius of curvature. Image processing approaches. Counting neighbor patterns. Shape. Corners as a measure of shape. Harmonic analysis. Position. Neighbor relationships. Edge effects. Brightness. References.
8 Stereological Interpretation of Measurement Data: Global measurements. Global parameters. Mean free path. Problems in 3-D interpretation. Feature specific measurements. Distribution histograms of size. Interpreting distributions. Nonparametric tests. Cumulative plots. Plotting shape and position data. Other plots. References.
9 Object Recognition: Locating features. Parametric object description. Distinguishing populations. Decision points. Other identification methods. An example. Comparing multiple populations. An example of contextual learning. Other applications. Artificial intelligence. References.
10 Surface Image Measurements: Depth cues. Image contrast. Shape from texture. The scanning electron microscope. Line width measurement. Roughness and fractal dimensions. Other surface measurement methods. References.
11 Stereoscopy: Principles from human vision. Measurement of elevation from parallax. Presentation of the data. Automatic fusion. Stereoscopy in transparent volumes. References.
12 Serial Sections: Obtaining serial section images. Optical sectioning. Presentation of 3-D image information. Aligning slices. Displays of outline images. Surface modelling. Measurements on surface-modelled objects. Voxel displays. Measurements on voxel images. Network analysis. Connectivity. References.
13 Tomography: Reconstruction. Instrumentation. 3-D Imaging. References.
14 Lessons from Human Vision: The language of structure. Illusion. Conclusion. References. For further reading.

Journal ArticleDOI
TL;DR: Detects building structures in aerial images using a generic model of the shapes sought (rectangular, or composed of rectangular components) and uses shadows cast by buildings to confirm their presence and to estimate their height.
Abstract: Detecting building structures in aerial images is a task of importance for many applications. Low-level segmentation rarely gives a complete outline of the desired structures. We use a generic model of the shapes of the structures we are looking for — that they are rectangular or composed of rectangular components. We also use shadows cast by buildings to confirm their presence and to estimate their height. Our techniques have been tested on images with density typical of suburban areas.
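
The shadow-based height estimate reduces to basic trigonometry under a flat-ground assumption; a one-line sketch with illustrative parameter names and values:

```python
import numpy as np

def building_height(shadow_len_px, metres_per_px, sun_elev_deg):
    """Height from shadow length on flat ground: h = L * tan(elevation)."""
    return shadow_len_px * metres_per_px * np.tan(np.radians(sun_elev_deg))

print(building_height(24, 0.5, 35.0))   # ~8.4 m
```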

Journal ArticleDOI
TL;DR: A study of some of the basic physical performance characteristics of an advanced commercial positron-emission tomograph (PET) is reported.
Abstract: A study of some of the basic physical performance characteristics of an advanced commercial positron-emission tomograph (PET) is reported. It consists of eight rings of BGO (bismuth germanate) detectors (512/ring) with removable inter-ring septa and retractable transmission ring sources for each ring. The (transaxial) detector width is 5.6 mm, and the lower limit of the spatial resolution in the reconstructed image is about 5 mm. Investigations were carried out of count-rate linearity, image count recovery with object size, the effect of attenuation correction on quantitation, and scatter fraction.

Journal ArticleDOI
TL;DR: Two locally sensitive transformation functions are proposed for image registration: the weighted least-squares method and the local weighted mean method.
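
The weighted least-squares flavor of registration named here can be sketched as a weighted affine fit to control points (the paper's local weighted mean method additionally varies the weights per output location):

```python
import numpy as np

def weighted_lsq_affine(src, dst, wts):
    """Affine fit minimizing sum_i w_i * ||[x_i y_i 1] M - d_i||^2."""
    X = np.hstack([src, np.ones((len(src), 1))])
    W = np.sqrt(np.asarray(wts, dtype=float))[:, None]
    M, *_ = np.linalg.lstsq(W * X, W * np.asarray(dst), rcond=None)
    return M                                  # 3x2: linear part + offset row

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ np.array([[1.1, 0.1], [-0.1, 0.9]]) + np.array([2.0, 3.0])
print(weighted_lsq_affine(src, dst, np.ones(4)))   # recovers the transform
```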

Journal ArticleDOI
TL;DR: In this paper, a solution to the correspondence problem for stereopsis is proposed using the differences in the complex phase of local spatial frequency components, which can discriminate disparities significantly smaller than the width of a pixel.
Abstract: A solution to the correspondence problem for stereopsis is proposed using the differences in the complex phase of local spatial frequency components. One-dimensional spatial Gabor filters (Gabor 1946; Marcelja 1980) at different positions and spatial frequencies are convolved with each member of a stereo pair. The difference between the complex phase at corresponding points in the two images is used to find the stereo disparity. Disparity values are combined across spatial frequencies for each image location. Three-dimensional depth maps have been computed from real images under standard lighting conditions, as well as from random-dot stereograms (Julesz 1971). The algorithm can discriminate disparities significantly smaller than the width of a pixel. It is possible that a similar mechanism might be used in the human visual system.
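
A single-scale sketch of the phase-difference principle (the paper combines estimates across several spatial frequencies) is straightforward:

```python
import numpy as np

def gabor_phase_disparity(left, right, freq=0.1, sigma=8.0):
    """Disparity = (phase_left - phase_right) / (2*pi*freq), one scale."""
    n = int(6 * sigma) | 1                       # odd filter length
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2.0 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    disparity = np.zeros(left.shape)
    for row in range(left.shape[0]):
        rl = np.convolve(left[row], g, mode="same")
        rr = np.convolve(right[row], g, mode="same")
        dphi = np.angle(rl * np.conj(rr))        # wrapped phase difference
        disparity[row] = dphi / (2.0 * np.pi * freq)
    return disparity

left = np.random.rand(8, 256)
right = np.roll(left, 3, axis=1)                 # constant 3-pixel shift
print(np.median(gabor_phase_disparity(left, right)))   # ~ 3, sub-pixel capable
```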

Journal ArticleDOI
TL;DR: Algorithms based on minimization of compactness and of fuzziness are developed whereby it is possible to obtain both fuzzy and nonfuzzy (thresholded) versions of an ill-defined image.
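
One half of the idea, choosing the threshold that minimizes an index of fuzziness, can be sketched as below; the membership ramp and level grid are illustrative, and the compactness criterion is omitted:

```python
import numpy as np

def min_fuzziness_threshold(img, n_levels=64):
    """Threshold minimizing the linear index of fuzziness."""
    lo, hi = float(img.min()), float(img.max())
    span = (hi - lo) / 4.0 + 1e-12               # membership ramp width
    best_t, best_f = lo, np.inf
    for t in np.linspace(lo, hi, n_levels)[1:-1]:
        mu = np.clip(0.5 + (img - t) / (2.0 * span), 0.0, 1.0)  # membership
        fuzz = np.minimum(mu, 1.0 - mu).mean()   # index of fuzziness
        if fuzz < best_f:
            best_t, best_f = t, fuzz
    return best_t

img = np.concatenate([np.random.normal(0.2, 0.05, 500),
                      np.random.normal(0.8, 0.05, 500)])
print(min_fuzziness_threshold(img))              # ~ 0.5, between the modes
```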

Patent
12 Feb 1988
TL;DR: A vehicle detection system for providing data characteristic of traffic conditions includes a camera overlooking a roadway section for providing video signals representative of the field (traffic scene), and a digitizer for digitizing these signals and providing successive arrays of pixels (picture elements) characteristic of the scene at successive points in space and time.
Abstract: A vehicle detection system for providing data characteristic of traffic conditions includes a camera overlooking a roadway section for providing video signals representative of the field (traffic scene), and a digitizer for digitizing these signals and providing successive arrays of pixels (picture elements) characteristic of the field at successive points in space and time. A video monitor coupled to the camera provides a visual image of the field of view. Through use of a terminal and in conjunction with the monitor, an operator controls a formatter so as to select a subarray of pixels corresponding to specific sections in the field of view. A microprocessor then processes the intensity values representative of the selected portion of the field of view in accordance with spatial and/or temporal processing methods to generate data characteristic of the presence and passage of vehicles. This data can be utilized for real-time traffic surveillance and control, or stored in memory for subsequent processing and evaluation of traffic flow conditions.
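
A toy version of the temporal processing stage, i.e. a running background model over one operator-selected pixel subarray that flags departures as vehicle presence, might look like this (illustrative thresholding, not the patent's exact processing method):

```python
import numpy as np

def zone_presence(frames, zone, alpha=0.05, k=3.0):
    """Flag frames where the zone's mean intensity departs from background."""
    bg, var, flags = None, 1.0, []
    for frame in frames:
        m = float(frame[zone].mean())
        if bg is None:
            bg = m
        occupied = abs(m - bg) > k * np.sqrt(var)
        flags.append(occupied)
        if not occupied:                 # learn background on empty road only
            var = (1 - alpha) * var + alpha * (m - bg) ** 2
            bg = (1 - alpha) * bg + alpha * m
    return flags

frames = [np.full((240, 320), 10.0 if 5 <= t <= 8 else 0.0) for t in range(12)]
zone = (slice(100, 140), slice(150, 200))    # operator-selected subarray
print(zone_presence(frames, zone))           # True for frames 5-8
```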

Journal ArticleDOI
TL;DR: Presents the results of initial studies of a method that uses computer vision to determine the deformations of subsets of an object; the most significant parameters are found to be the number of quantization levels in the digitization process and the form of the intensity interpolation function.
Abstract: The results of initial studies to determine the key parameters influencing the performance of a computer-based measurement system are presented. The system components were modeled, and a representative intensity pattern was chosen and deformed by known amounts. The effect of varying different parameters in the model was analyzed numerically. The most significant parameters were found to be: (1) the number of quantization levels in the digitization process, (2) the ratio of the frequency of the signal to the frequency of the sampling, and (3) the form of the intensity interpolation function.

Journal ArticleDOI
TL;DR: Simultaneous correction of nonuniform attenuation and detector response was implemented in single-photon-emission computed tomography (SPECT) image reconstruction and provides more-accurate quantitation and superior image quality.
Abstract: Simultaneous correction of nonuniform attenuation and detector response was implemented in single-photon-emission computed tomography (SPECT) image reconstruction. A ray-driven projector-backprojector that exactly models attenuation in the reconstructed image slice and the spatially variant detector response was developed and used in the iterative maximum-likelihood algorithm for the correction. A computer-generated heart-lung phantom was used in simulation studies to compare the simultaneous correction method with an intrinsic attenuation correction method using a smoothing filter, an intrinsic attenuation correction method using a deconvolution filter, and a modified Chang attenuation correction method using a nonuniform attenuation distribution. The results demonstrate that the present method provides more accurate quantitation and superior image quality.
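
The iterative maximum-likelihood step used for the correction is the MLEM multiplicative update; a generic sketch with an arbitrary nonnegative system matrix (the paper's projector additionally models attenuation and detector response exactly):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update: x <- x * A^T(y / Ax) / A^T 1 (Poisson likelihood)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 20))                         # toy system matrix
x_true = rng.random(20)
y = rng.poisson(A @ x_true * 50) / 50.0          # noisy projection data
print(np.corrcoef(mlem(A, y), x_true)[0, 1])     # typically close to 1
```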

Journal ArticleDOI
TL;DR: Experiments indicate that the procedure for estimating the unconstrained three-dimensional location and orientation of an object with a known shape, visible in a single image, is very reliable, although optimization can further improve the parameter estimates.
Abstract: A procedure is presented to estimate the unconstrained three-dimensional location and orientation of an object with a known shape when it is visible in a single image. Using a generalized Hough transform, all six parameters of the object position are estimated from the distribution of values determined by matching triples of points on the object to possibly corresponding triples in the image. The most likely candidates for location are found, and then the remaining rotational parameters are evaluated. Two solutions are generally admitted to the distribution by every match of triangles. The number of possible matches is reduced by checking a simple geometric relation among triples. Even with partial occlusion, experiments indicate that the procedure is very reliable, although optimization can further improve the parameter estimates.

Journal ArticleDOI
TL;DR: The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images and demonstrates the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy.
Abstract: The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images.
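
The deconvolution step can be sketched in the frequency domain with an assumed Gaussian PSF: model the detected image as a primary component plus a scattered component and divide by the transfer function (parameter values here are illustrative, not the measured ones from the paper):

```python
import numpy as np

def scatter_transfer(shape, sf, width):
    """Transfer fn of: detected = (1-sf)*primary + sf*(primary (*) PSF)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    G = np.exp(-2.0 * (np.pi * width) ** 2 * (fx**2 + fy**2))  # Gaussian FT
    return (1.0 - sf) + sf * G

def scatter_deconvolve(image, sf=0.4, width=20.0):
    H = scatter_transfer(image.shape, sf, width)
    return np.fft.ifft2(np.fft.fft2(image) / H).real

rng = np.random.default_rng(2)
primary = rng.random((128, 128))
H = scatter_transfer(primary.shape, 0.4, 20.0)
degraded = np.fft.ifft2(np.fft.fft2(primary) * H).real   # simulate scatter
print(np.abs(scatter_deconvolve(degraded) - primary).max())   # ~ 0
```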

Journal ArticleDOI
TL;DR: A novel two-dimensional subband coding technique is presented that can be applied to images as well as speech and has a performance that is comparable to that of more complex coding techniques.
Abstract: A novel two-dimensional subband coding technique is presented that can be applied to images as well as speech. A frequency-band decomposition of the image is carried out by means of 2D separable quadrature mirror filters, which split the image spectrum into 16 equal-rate subbands. These 16 parallel subband signals are regarded as a 16-dimensional vector source and coded as such using vector quantization. In the asymptotic case of high bit rates, a theoretical analysis yields a lower bound to the gain attainable by choosing this approach over scalar quantization of each subband with an optimal bit allocation. It is shown that vector quantization in this scheme has several advantages over coding the subbands separately. Experimental results are given, and it is shown that the scheme has performance comparable to that of more complex coding techniques.
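
One level of the separable subband analysis can be sketched with a short QMF-like pair (Haar, as a stand-in for the paper's longer QMFs); the paper applies two such levels to obtain 16 equal-rate bands and then vector-quantizes across bands:

```python
import numpy as np

def split_axis(x, h0, h1, axis):
    """Filter along `axis` with a low/high pair, then downsample by 2."""
    lo = np.apply_along_axis(lambda v: np.convolve(v, h0)[1::2], axis, x)
    hi = np.apply_along_axis(lambda v: np.convolve(v, h1)[1::2], axis, x)
    return lo, hi

def subband_split(img):
    """One 4-band level of separable 2-D subband analysis."""
    h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass
    h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high-pass mirror
    L, H = split_axis(img, h0, h1, axis=1)      # horizontal split
    LL, LH = split_axis(L, h0, h1, axis=0)      # vertical splits
    HL, HH = split_axis(H, h0, h1, axis=0)
    return LL, LH, HL, HH

img = np.random.rand(64, 64)
bands = subband_split(img)
print([b.shape for b in bands])                 # four 32x32 equal-rate bands
```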