
Showing papers on "Pixel published in 1981"


Journal ArticleDOI
TL;DR: In this paper, surface radiant temperature fields are measured at subpixel spatial resolution from satellites that carry more than one channel in the thermal infrared spectral region: the radiance of each pixel is expressed as contributions from two temperature fields, each occupying a portion of the pixel, where the portions are not necessarily contiguous.

654 citations


01 Jul 1981
TL;DR: A model for grey-tone image enhancement using the concept of fuzzy sets is suggested and the reduction of the "index of fuzziness" and "entropy" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated.
Abstract: A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator "contrast intensifier" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and the max-min rule over the neighbors of a pixel. The reduction of the "index of fuzziness" and "entropy" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by a histogram modification technique is also presented for comparison.
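The enhancement step above hinges on the fuzzy "contrast intensifier" (INT) operator. A minimal sketch, assuming grey levels are mapped to [0, 1] as membership in "bright" (the paper's fuzzifier parameters and smoothing stages are omitted):

```python
def intensify(mu):
    """Fuzzy 'contrast intensifier' (INT) on a membership value in [0, 1]."""
    if mu <= 0.5:
        return 2.0 * mu * mu
    return 1.0 - 2.0 * (1.0 - mu) ** 2

def enhance(image, max_level=255, passes=2):
    """Map grey levels to memberships, intensify repeatedly, map back."""
    out = []
    for row in image:
        new_row = []
        for g in row:
            mu = g / max_level          # property plane: membership in "bright"
            for _ in range(passes):     # successive applications sharpen contrast
                mu = intensify(mu)
            new_row.append(round(mu * max_level))
        out.append(new_row)
    return out
```

Each pass pushes memberships below 0.5 toward 0 and those above toward 1, which is exactly what reduces the "index of fuzziness" of the output.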

327 citations


Journal ArticleDOI
TL;DR: The segmentation algorithm proposed in this paper is a complex form of thresholding which utilizes multiple thresholds and not only works well for simple images but also produces reasonable segmentations for complex images.

287 citations


BookDOI
01 Jan 1981
TL;DR: This volume surveys three-dimensional motion estimation, including estimating the translation of moving objects in video images and modeling the temporal variations of image functions caused by moving objects.
Abstract: I Introduction and Survey.- 1. Image Sequence Analysis: Motion Estimation.- 1.1 Outline of Book.- 1.2 Estimation of Two-Dimensional Translation.- 1.2.1 The Fourier Method.- 1.2.2 Matching.- 1.2.3 The Method of Differentials.- 1.3 Estimation of General Two-Dimensional Motion.- 1.4 Estimation of Three-Dimensional Motion: A Two-Step Method.- 1.4.1 Estimating Image-Space Shifts.- 1.4.2 Determining Motion Parameters - The Case of Three-Dimensional Translation.- 1.4.3 Determining Motion Parameters - The General Three-Dimensional Case.- 1.5 Estimation of Three-Dimensional Motion: A Direct Method.- 1.6 Summary.- References.- 2. Image Sequence Analysis: What Can We Learn from Applications?.- 1. Introduction.- 1.1 Long-Range Implications of Image Sequence Analysis.- 1.2 Scope of this Contribution.- 2. Application-Oriented Review.- 2.1 Coding of Image Sequences.- 2.1.1 Coarse Attributes of Broadcast TV-Frame Sequences.- 2.1.2 Predefined Frame Segmentation.- 2.1.3 Towards Variable Spatial Segmentation.- 2.1.4 Spatial Segmentation Based on Temporal Characteristics.- 2.1.5 Reduction of Spatial Bandwidth in Moving Subimages.- 2.1.6 Interframe Coding Based on Movement Compensation.- 2.1.7 Coding of Color Video Sequences.- 2.1.8 Discussion.- 2.2 Image Sequences from Airborne and Satellite Sensors.- 2.2.1 Horizontal Wind Velocities Derived from Image Sequences in the Visual Channel.- 2.2.2 Image Sequences Including the Infrared Channel.- 2.2.3 Formation and Refinement of Meteorological and Geological Knowledge.- 2.2.4 Registration of Images and Production of Mosaics.- 2.2.5 Change Detection.- 2.2.6 Cover-Type Mapping Based on Time-Varying Imagery.- 2.2.7 Discussion.- 2.3 Medicine: Image Sequences of the Human Body.- 2.3.1 Preprocessing of Image Sequences.- 2.3.2 Blood Circulation Studies.- 2.3.3 Delineating Images of the Heart for the Study of Dynamic Shape Variations.- 2.3.4 Isolation of Organs Based on Spectral and Temporal Pixel Characteristics.- 2.3.5 Quantitative 
Description, Categorization, and Modeling of Organ Functions.- 2.3.6 Body Surface Potential Maps.- 2.3.7 Studying the Pupil of the Human Eye.- 2.4 Biomedical Applications.- 2.5 Behavioral Studies.- 2.6 Object Tracking in Outdoor Scenes.- 2.6.1 Traffic Monitoring.- 2.6.2 Target Tracking.- 2.7 Industrial Automation and Robotics.- 2.8 Spatial Image Sequences.- 2.8.1 No Explicit Models: Presentation of Images from Spatial Slices.- 2.8.2 Isolation, Tracking, and Representation of Linelike Features in 3-D Space.- 2.8.3 Object Surfaces Derived from Contour Measurements in a Series of Slices.- 2.8.4 Surface Detection in Samples on a 3-D Grid.- 2.8.5 Volume Growing.- 2.8.6 Deriving Descriptions Based on Volume Primitives.- 2.8.7 Estimating Parameters of Spatial Models by Statistical Evaluation of Planar Sections (Stereology).- 2.8.8 Discussion.- 3. Modeling Temporal Variations of Image Functions Caused by Moving Objects.- 3.1 Estimating the Translation for Video Images of Moving Objects.- 3.2 Including Image Plane Rotation and Scale Changes into the Displacement Characteristic.- 3.3 Discussion.- 4. Conclusions.- 5. Acknowledgements.- 6. References.- 7. Author Index.- II Image Sequence Coding, Enhancement, and Segmentation.- 3. 
Image Sequence Coding.- 3.1 Overview.- 3.2 The Television Signal.- 3.2.1 The Digital Television Signal.- a) Scanning.- b) Spectrum of Scanned Signal.- c) Sampling.- 3.2.2 Characterization of the Sampled Video Signal.- 3.3 Some Relevant Psychovisual Properties of the Viewer.- 3.3.1 Spatiotemporal Response of the Human Visual System.- 3.3.2 Perception in Moving Areas.- 3.3.3 Temporal Masking.- 3.3.4 Exchange of Spatial, Temporal, and Amplitude Resolution.- 3.4 Predictive Coding.- 3.4.1 Philosophy of Predictive Coding.- 3.4.2 Predictor Design.- a) Linear Predictors.- b) Nonlinear Predictors.- 3.4.3 Quantization.- 3.4.4 Code Assignment.- a) Variable-Word-Length Coding.- b) Run-Length Coding.- 3.5 Movement-Compensated Prediction.- 3.5.1 General.- 3.5.2 Block-Structured Movement-Compensated Coders.- a) Displacement Estimation.- b) Results.- 3.5.3 Pel-Recursive Movement-Compensated Coders.- a) Pel-Recursive Displacement Estimation.- b) Coder Operation.- 3.5.4 Code Assignment.- 3.6 Transform Coding.- 3.6.1 General.- 3.6.2 Coding of the Transform Coefficients.- 3.6.3 Types of Transforms.- 3.6.4 Adaptive Coding of Transform Coefficients.- 3.6.5 Hybrid Transform/DPCM Coding.- 3.7 Multimode Coders.- 3.7.1 Overview.- 3.7.2 Techniques Used in Multimode Coding.- a) Subsampling.- b) Temporal Filtering.- c) Change of Thresholds.- d) Switched Quantizers.- 3.7.3 Choice and Ordering of Modes of Operation.- 3.7.4 Multimode Coder Example.- 3.8 Color Coding.- 3.8.1 The NTSC Composite Video Signal.- 3.8.2 Three-Dimensional Spectrum of the NTSC Composite Signal.- 3.8.3 Predictive Coding.- 3.9 Concluding Remarks.- Appendix A: A Digital Television Sequence Store (DVS).- A.1 Capabilities.- A.2 The System.- A.3 Software.- References.- 4.
Image Sequence Enhancement.- 4.1 Temporal Filtering.- 4.1.1 Straight Temporal Filtering.- 4.1.2 Motion-Compensated Temporal Filtering.- 4.2 Temporal Filtering with Motion Compensation by Matching.- 4.2.1 Motion Estimation by Matching.- 4.2.2 Experimental Results of Filtering.- 4.2.3 Discussions.- 4.3 Temporal Filtering with Motion Compensation by the Method of Differentials.- 4.3.1 Motion Estimation by the Method of Differentials.- 4.3.2 Various Factors Influencing Motion Estimation.- 4.3.3 Experimental Results of Filtering.- 4.3.4 Discussions.- 4.4 Summary.- References.- 5. Image Region Extraction of Moving Objects.- 5.1 Overview.- 5.1.1 Symbolic Description.- 5.1.2 Sequences.- 5.1.3 Planning.- 5.2 Vector Field.- 5.2.1 Sampling.- 5.2.2 Noise.- 5.2.3 Motion Effects.- 5.2.4 Plane Equation.- 5.3 Region Extraction.- 5.3.1 Node Consistency.- 5.3.2 Arc Consistency.- 5.3.3 Region Attributes.- 5.3.4 Example.- 5.4 Sequences.- 5.4.1 Similarity.- 5.4.2 Identity.- 5.4.3 Simple Sequences.- 5.4.4 Compound Sequences.- 5.5 Planning.- 5.6 Resume.- 5.6.1 Hierarchy.- 5.6.2 Outlook.- References.- 6. Analyzing Dynamic Scenes Containing Multiple Moving Objects.- 6.1 Occlusion in General.- 6.1.1 Arbitrary Images.- 6.1.2 Scene Domain Imposed Constraints.- 6.1.3 Occlusion in Image Sequences.- 6.2 Dot Pattern Analysis.- 6.2.1 Combined Motion and Correspondence Processes.- 6.2.2 Separate Correspondence Determination.- 6.2.3 Motion Analysis Given Dot Correspondence.- 6.3 Edge and Boundary Analysis.- 6.3.1 Straight Edge Domain.- 6.3.2 Curvilinear Boundary Domain.- 6.4 Conclusion.- References.- III Medical Applications.- 7.
Processing of Medical Image Sequences.- 7.1 Extraction of Measurements from Image Time Sequences.- 7.1.1 Left Ventricular Shape-Versus-Time Determination.- a) Determination of Approximate Ventricular Boundaries by Motion Extraction.- b) Threshold.- c) Boundary Extraction.- 7.1.2 Determination of Precise Ventricle Boundaries Using Prediction Techniques.- a) Absolute Gradient Maximum.- b) Local Gradient Maximum.- c) Four-Feature Majority Voting.- d) Special Condition to Ignore Outer Heart Wall.- e) Postprocessing.- 7.1.3 Results.- 7.1.4 Videodensitometry.- 7.2 Functional Images.- 7.3 Image Enhancement.- 7.3.1 Motion Deblurring.- 7.3.2 Long-Term Change Detection.- 7.4 Spatial Sequence.- 7.4.1 Electron and Light Micrograph Series.- 7.4.2 Series of Ultrasonic Data.- 7.4.3 Stacks of Computerized Tomograms.- 7.5 Frequency Series.- 7.6 Summary.- References.- Additional References.

285 citations


Journal ArticleDOI
TL;DR: This paper reviews box-filtering techniques and also describes some useful extensions of the box filtering technique.
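The appeal of box filtering is that, with running (prefix) sums, the cost per pixel is independent of window size. A sketch of the separable version, with clamped borders as an illustrative edge policy (the paper discusses several):

```python
def box_filter(image, radius):
    """Separable box filter via prefix sums: each window mean costs one
    subtraction and one division, regardless of radius."""
    def smooth_1d(line):
        n = len(line)
        prefix = [0]                       # prefix[i] = sum of line[:i]
        for v in line:
            prefix.append(prefix[-1] + v)
        out = []
        for i in range(n):
            lo = max(0, i - radius)        # clamp the window at the borders
            hi = min(n - 1, i + radius)
            out.append((prefix[hi + 1] - prefix[lo]) / (hi - lo + 1))
        return out

    rows = [smooth_1d(r) for r in image]             # horizontal pass
    cols = [smooth_1d(list(c)) for c in zip(*rows)]  # vertical pass
    return [list(r) for r in zip(*cols)]
```

Two 1-D passes give the 2-D box average because the box kernel is separable.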

229 citations


Patent
Brian F. Walsh, David E. Halpert
12 Sep 1981
TL;DR: In this paper, the printer increases the density of the information elements and simultaneously provides rounding off of character edges and smoothing of diagonals by applying the outputs of the shift registers to a decoder and generating driving signals for the printer head.
Abstract: The present invention enhances the resolution and quality of characters of a system receiving the information initially in the form of video display pixels and providing hard copy output. This is accomplished by storing at least three successive lines of video data in successive, parallel connected shift registers, applying the outputs of the shift registers to a decoder, and generating driving signals for the printer head. The decoder compares the pixels on the same line as well as in preceding and succeeding lines that surround each specific input pixel to generate the printer head driving signals according to whether straight or curved line segments are to be formed. In effect, the printer increases the density of the information elements and simultaneously provides rounding off of character edges and smoothing of diagonals.
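The decoder idea, roughly: double the resolution and decide each output subpixel from the surrounding neighbourhood. The sketch below is a simplified stand-in for the patent's decoder (its actual driving-signal tables are not reproduced here): a background corner subpixel is filled when both orthogonal neighbours toward that corner are set, which rounds staircase diagonals.

```python
def smooth_double(bitmap):
    """Double resolution; fill corner subpixels to smooth diagonals."""
    h, w = len(bitmap), len(bitmap[0])

    def px(r, c):
        return bitmap[r][c] if 0 <= r < h and 0 <= c < w else 0

    out = [[0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            for dr in (0, 1):
                for dc in (0, 1):
                    vr, vc = dr * 2 - 1, dc * 2 - 1  # corner direction
                    if px(r, c):
                        v = 1                         # foreground stays on
                    else:
                        # fill the corner if both neighbours toward it are on
                        v = 1 if px(r + vr, c) and px(r, c + vc) else 0
                    out[2 * r + dr][2 * c + dc] = v
    return out
```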

188 citations


Patent
12 Jan 1981
TL;DR: In this paper, changes in the capacitance of an array of transparent capacitive pixels, induced by passing a conductive-tipped stylus over the surface of the display pad, are sensed by sense buffers disposed along the columns of the matrix as the rows are scanned at a prescribed scanning rate.
Abstract: A graphics input/output device contains a graphics input pad having an array of transparent capacitive pixels the capacitance characteristics of which are changed in response to the passing of a conductive-tipped stylus over the surface of the pad. This change in capacitance is sensed by sense buffers disposed along the columns of the matrix, as the rows are scanned at a prescribed scanning rate. The sensed data is read out of the sense buffer and loaded into a RAM. An array of display pixels formed of an LCD matrix is addressed by a scan sequence control unit, and the energization of the display pixels is multiplexed with the read-out scanning of the sensed data, so as to present to the user a real time generated image of the graphics created by the stylus. As a result, it appears to the user that the stylus is actually "writing" on the display pad.

171 citations


Journal ArticleDOI
Crow
TL;DR: Three antialiasing techniques were applied to a scene of moderate complexity and early results suggest that prefiltering is still the most computationally effective method.
Abstract: Three antialiasing techniques were applied to a scene of moderate complexity. Early results suggest that prefiltering is still the most computationally effective method.

134 citations


Journal ArticleDOI
TL;DR: In two-dimensional image reconstruction from line integrals using maximum likelihood, Bayesian, or minimum variance algorithms, the x-y plane on which the object estimate is defined is decomposed into nonoverlapping regions, or "pixels".
Abstract: In two-dimensional image reconstruction from line integrals using maximum likelihood, Bayesian, or minimum variance algorithms, the x-y plane on which the object estimate is defined is decomposed into nonoverlapping regions, or "pixels." This decomposition of an otherwise continuous structure results in significant errors, or model noise, which can exceed the effects of the fundamental measurement noise.

128 citations


Patent
20 Mar 1981
TL;DR: In this paper, an interactive image processing system (200,300) is presented which is capable of simultaneous processing of at least two different digitized composite color images to provide a displayable resultant composite color image.
Abstract: An improved interactive image processing system (200,300) is provided which is capable of simultaneous processing of at least two different digitized composite color images to provide a displayable resultant composite color image. Each of the digitized composite color images has separate digitized red, blue and green image components and has an associated image information content. The system (200,300) includes separate image storage planes (246,346,70',72',74',70",72",74",70"',72'", 74"',370,372,374,370',372',374',370",372",374) for retrievably storing each of the digitized red, blue and green image components or other image data as well as graphic planes (78',378) for storing graphic control data for processing of the images. The digital image processing of the image components is accomplished in a digital image processing portion (208,308) which includes an image processor (210,310) which contains the various storage planes in a refresh memory (246,346) which cooperates with a pipeline processor configuration (86'), image combine circuitry (270,272,274,270',272',274') and other control circuitry to enable the simultaneous processing between each of the corresponding image planes on a pixel by pixel basis under interactive control of a keyboard (50'), data tablet (54') or other interactive device. The system may be employed for interactive video processing (200) or as an interactive film printing system (300) in which the simultaneous processing of the two different images, which may be iterative, can be monitored in real time on a television monitor (44',315). In the video system (200), the combining format of the image planes may be interactively varied on a pixel-by-pixel basis by creating different digital control masks for each pixel which are stored in refresh memory (246,346). In either system (200,300), the interactive simultaneous digital processing of the images is accomplished in an RGB format.

113 citations


Patent
29 May 1981
TL;DR: In this paper, the pixels of the 3D images are represented by multivalued digital data signals which are analyzed in one or more programmable neighborhood transformation stages, each stage is programmed with selected contribution values associated with each pixel in the neighborhood.
Abstract: Image analyzing apparatus and methods are disclosed for analyzing 3-dimensional as well as 2-dimensional images. The pixels of the 3-D images may be represented by multivalued digital data signals which are analyzed in one or more programmable neighborhood transformation stages. In the preferred embodiment, each stage is programmed with selected contribution values associated with each pixel in the neighborhood. The values of the data signals for each pixel are modified by these contribution values and the maximum value thereof is selected as the transformation output of the stage. A series of dilation/erosion transformations may be used to transform the original image matrix in such a manner so as to locate the position and/or identify the shape of particular objects contained in the original image.
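A single neighbourhood transformation stage of this kind, where each neighbour's data value is modified by a programmed contribution value and the maximum is kept, is grey-scale dilation. A minimal sketch (the structuring element `se` is an illustrative parameter, not the patent's programming):

```python
def gray_dilate(image, se):
    """One neighbourhood transformation stage: add the contribution value
    to each neighbour and keep the maximum (grey-scale dilation).
    se maps neighbourhood offsets (di, dj) to contribution values."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[i + di][j + dj] + c
                    for (di, dj), c in se.items()
                    if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = max(vals)
    return out
```

Chaining such stages with different contribution tables yields the dilation/erosion sequences the patent uses to locate and identify object shapes.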

Proceedings ArticleDOI
01 Dec 1981
TL;DR: In this paper, bias-compensated least-squares and correlation-based procedures are used to identify the parameters of autoregressive image models, and the candidate algorithms are evaluated in terms of the mean-squared error between the true and filtered images.
Abstract: Estimation of image pixel density can be performed using a reduced update Kalman filter provided that a mathematical model for the image generating process is available. To this effect various algorithms suitable for identifying the parameters of autoregressive image models are discussed and evaluated in terms of the mean-squared error between the true and filtered images. Algorithms considered include general and bias-compensated least-square procedures, a correlation-based algorithm, and procedures involving the simultaneous estimation of both the image model coefficient vector and pixel estimates. Experiments using two real images and two random fields indicate that bias-compensated least squares and correlation-based procedures might be most useful for image identification and adaptive filtering.
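The least-squares identification step can be sketched for a two-coefficient causal AR model (a simplification; the paper evaluates richer model supports and bias-compensated variants for noisy data):

```python
def fit_ar2(image):
    """Least-squares fit of the causal AR image model
    x[i][j] ~ a * x[i][j-1] + b * x[i-1][j], solved via the 2x2
    normal equations."""
    s_ll = s_uu = s_lu = s_xl = s_xu = 0.0
    for i in range(1, len(image)):
        for j in range(1, len(image[0])):
            x, l, u = image[i][j], image[i][j - 1], image[i - 1][j]
            s_ll += l * l
            s_uu += u * u
            s_lu += l * u
            s_xl += x * l
            s_xu += x * u
    det = s_ll * s_uu - s_lu * s_lu           # assumed nonsingular
    a = (s_xl * s_uu - s_xu * s_lu) / det
    b = (s_xu * s_ll - s_xl * s_lu) / det
    return a, b
```

On noise-free data generated by the model itself, the fit recovers the coefficients exactly; the paper's bias-compensated procedures address the case where observation noise biases these plain least-squares estimates.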


Journal ArticleDOI
TL;DR: If a picture contains dark objects on a light background (or vice versa), the objects can be extracted by thresholding, i.e., by classifying the pixels into ``light'' and ``dark'' classes.
Abstract: If a picture contains dark objects on a light background (or vice versa), the objects can be extracted by thresholding, i.e., by classifying the pixels into ``light'' and ``dark'' classes. If the picture is noisy, so that the object and background gray level populations overlap, there will be errors in the thresholded output. A relaxation process can be used to reduce these errors; we classify the pixels probabilistically, and then adjust the probabilities for each pixel, based on its neighbors' probabilities, with light reinforcing light and dark dark. When this adjustment process is iterated, the dark probabilities become very high for pixels that belong to dark regions, and vice versa, so that thresholding becomes trivial.
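A minimal sketch of the relaxation idea, using a Bayesian-style multiplicative update as a stand-in for the paper's exact compatibility rule (the update and iteration count here are illustrative assumptions):

```python
def relax_threshold(image, iters=5):
    """Probabilistic relaxation for noisy thresholding: initialize P(dark)
    from each grey level, then repeatedly reinforce each pixel toward the
    consensus of its 4-neighbours; light reinforces light, dark dark."""
    h, w = len(image), len(image[0])
    lo = min(min(r) for r in image)
    hi = max(max(r) for r in image)
    # initial P(dark), clamped away from 0 and 1 so updates stay defined
    p = [[min(0.99, max(0.01, (hi - g) / (hi - lo))) for g in row]
         for row in image]
    for _ in range(iters):
        q = [row[:] for row in p]
        for i in range(h):
            for j in range(w):
                nbrs = [p[i + di][j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w]
                nbar = sum(nbrs) / len(nbrs)
                # multiplicative reinforcement sharpens the probabilities
                q[i][j] = p[i][j] * nbar / (
                    p[i][j] * nbar + (1 - p[i][j]) * (1 - nbar))
        p = q
    return [[1 if v > 0.5 else 0 for v in row] for row in p]
```

After a few iterations the probabilities saturate toward 0 or 1 and thresholding at 0.5 becomes trivial, as the abstract describes.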

Journal ArticleDOI
TL;DR: In this paper, the effect of the detection of scattered radiation on the difference image is discussed and it is shown that a conventional scatter reduction grid will improve image quality only if the ratio of the number of detected scattered photons to the number of detected primary photons is greater than 0.8 when no grid is used.
Abstract: Considerations for the optimum design and use of a computerized fluoroscopy apparatus for performing time dependent image subtraction are presented. The advantages of logarithmic processing are presented. Assuming such processing, the interrelationship of achievable signal to noise, dynamic range and the minimum number of grey levels needed to digitize each image is discussed, and a formula relating these three quantities is derived. Image quality limits imposed by noise sources not associated with the detected x-ray fluence are discussed and a criterion for choosing a maximum x-ray fluence which will not waste patient dose is presented. The limits to spatial resolution achievable with conventional image intensifiers are discussed and it is shown that the maximum one dimensional spatial resolution in the object plane is achieved when the magnification of the x-ray system is such that the image of the x-ray focal spot projected through a point in the object plane onto the detector plane just covers the width of two pixels. The effect of the detection of scattered radiation on the difference image is discussed and it is shown that a conventional scatter reduction grid will improve image quality only if the ratio of the number of detected scattered photons to the number of detected primary photons is greater than 0.8 when no grid is used.

Proceedings ArticleDOI
01 Aug 1981
TL;DR: A parallel processing architecture is described and simulated which consists of a serial chain of processors which produces as output a depth sorted list of those objects which are at least potentially visible at each pixel.
Abstract: The continuing evolution of microelectronics provides the tools for developing new methods of synthesizing digital images by utilizing parallel processing architectures which hold the promise of reliability, flexibility and low cost. Beginning with the earliest real-time flight simulators, parallel processing architectures for image synthesis have been built, but "anti-aliasing" remains a problem. A parallel processing architecture is described and simulated which consists of a serial chain of processors which produces as output a depth sorted list of those objects which are at least potentially visible at each pixel. The lists are then filtered to provide the final shading at each pixel.

Patent
26 Jan 1981
TL;DR: In this paper, a method and apparatus for reducing the gray scale resolution of a document is presented, which includes a scanning module for scanning a document along x and y coordinates to generate pixels representing gray scale values for discrete areas of the document, with each pixel having a predetermined number of bits.
Abstract: A method and apparatus for reducing the gray scale resolution of a document. The apparatus includes a scanning module for scanning a document along x and y coordinates with regard thereto for generating pixels representing gray scale values for discrete areas of the document along the x and y coordinates, with each pixel having a predetermined number of bits. A high pass filter module is also included for summing the associated pixels within a window to produce a window sum as the window is moved relatively along coordinates corresponding to the x and y coordinates, and the high pass filter module also includes means for comparing a selected pixel within a window with the associated window sum and predetermined criteria and for generating first and second output values in accordance therewith. The first and second output values have a fewer number of bits than the associated selected pixel.
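The high-pass decision, comparing a pixel against its window sum, can be sketched as follows; the 3x3 window, the use of the window mean, and the threshold are illustrative assumptions, since the patent's exact criteria are not reproduced here:

```python
def reduce_gray(image, threshold=0):
    """Reduce multi-bit pixels to one bit: output 1 where a pixel is
    darker than the mean of its 3x3 window by more than `threshold`
    (a high-pass decision), else 0."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = [image[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            window_mean = sum(nbrs) / len(nbrs)
            out[i][j] = 1 if window_mean - image[i][j] > threshold else 0
    return out
```

Because the decision is relative to the local surround, slow shading variations across the document are suppressed while fine dark detail is kept.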

Patent
29 Jan 1981
TL;DR: In this article, an electronic light valve is described of the kind having an imaging zone, an imagewise addressable light valve array, and means for directing illumination to the imaging zone via the array, together with means, operably associated with such apparatus, for reducing inter-pixel variation in the light transmitted to the imaging zone.
Abstract: Electronic light valve of the kind having an imaging zone, an imagewise addressable light valve array, and means for directing illumination to the imaging zone via the array, operably associated with such apparatus, for reducing inter-pixel variation in light transmitted to the imaging zone. One disclosed embodiment includes a photo-bleachable mask having pixel portions corresponding to pixels of the light valve array, which have been photo-bleached and fixed at density levels compensating for such nonuniformities. Another embodiment includes a pixel mask comprising negative working photographic emulsion which has been exposed and developed to different compensating density levels.

01 Jan 1981
TL;DR: Although the success of texture synthesis is highly dependent on the texture itself and the modeling method chosen, general conclusions regarding the performance of various techniques are given.
Abstract: Numerous computational methods for generating and simulating binary and grey-level natural digital-image textures are proposed using a variety of stochastic models. Pictorial results of each method are given and various aspects of each approach are discussed. The quality of the natural texture simulations depends on the computation time for data collection, computation time for generation, and storage used in each process. In most cases, as computation time and data storage increase, the visual match between the texture simulation and the parent texture improves. Many textures are adequately simulated using simple models, thus providing a potentially great information compression for many applications. In most of the texture synthesis methods presented in this thesis, pixel values are generated one at a time according to both the given model and the values of pixels previously generated in the synthesis until the image space is completely filled. Nth-order joint density functions estimated from a natural texture sample were used for this purpose in one method. The results are excellent but the storage required, even for binary textures, is large. Therefore, a much simpler first-order linear, autoregressive model was applied to the texture synthesis problem. Using this model on both binary and continuous-tone textures, each pixel is generated as a linear combination of previously generated pixels plus stationary noise. The results indicate that many textures are satisfactorily simulated using this approach. By adding cross-product terms, the first-order linear model is extended to a second-order linear model. The simulation results improve slightly but the number of computations required for the statistics-collection process increases drastically. Non-stationary noise was then used in the synthesis process and further improvements in the quality of the simulations are achieved at the cost of increased storage. Methods of texture simulation using more than one model are studied in this thesis. These multiple models are useful for many textures, especially those with macro-structure. They also improve the fit of the model when applied to the parent texture data and therefore may produce improved simulations. A final model, called the best-fit model, generates texture simulations directly from the parent texture itself. Each pixel in the synthesis image is generated based on the similarity of its previously generated, neighboring pixel values to pixel values in all similarly shaped neighborhoods in the parent texture. The measures of similarity at all points in the parent texture, along with a random variable, are used to generate the next pixel value in the synthesized image. The synthesis results using the model are excellent but the synthesis process is very computationally demanding. Although the success of texture synthesis is highly dependent on the texture itself and the modeling method chosen, general conclusions regarding the performance of various techniques are given. Methods of texture segmentation and identification based on texture synthesis results are also presented.
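The first-order linear autoregressive synthesis described above fits in a few lines; the coefficients and noise level below are illustrative, not values from the thesis:

```python
import random

def synthesize_texture(h, w, a, b, sigma, seed=0):
    """First-order linear AR texture synthesis: each pixel is a linear
    combination of the previously generated left and upper pixels plus
    stationary Gaussian noise, filling the image in raster order."""
    rng = random.Random(seed)
    img = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            left = img[i][j - 1] if j > 0 else 0.0
            up = img[i - 1][j] if i > 0 else 0.0
            img[i][j] = a * left + b * up + rng.gauss(0.0, sigma)
    return img
```

In practice the coefficients would be estimated from the parent texture (the statistics-collection step the abstract mentions); here they are free parameters so the raster-order generation itself is visible.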

Journal ArticleDOI
TL;DR: Two distinct approaches to image segmentation are described, both of which take the form of so-called region-growing algorithms based on a binary relation, the relative similarity relation, which reflects relative properties in an image.

Proceedings ArticleDOI
P. M. Narendra, N. A. Foss
29 Dec 1981
TL;DR: A real-time compensation technique has been developed which utilizes the infrared (IR) scene itself for calibration and continually updates the compensation coefficients without the use of a thermal reference source or shutter.
Abstract: Staring infrared imagers typically exhibit large d.c. offset level variations and responsivity variations from pixel to pixel. In order to extract the scene information from the focal plane output signal, this characteristic fixed pattern noise must be normalized prior to display. Conventional techniques for this compensation involve the use of a uniform thermal reference source which is periodically introduced into the sensor field of view to act as a calibration of the offset and responsivity variations for each pixel. This viewing of a thermal reference source generally involves use of electromechanical or electro-optical shutters which detracts from the mechanical simplicity of the staring imager. A real-time compensation technique has been developed which utilizes the infrared (IR) scene itself for calibration and continually updates the compensation coefficients without the use of a thermal reference source or shutter. This "shutterless" compensation technique makes use of scene dynamics, averaged over a period of time, as an effective uniform reference source. The results of real-time simulations of this technique have been demonstrated using both FLIR and visible imagery. Results of these simulations are presented along with a discussion of applicable areas for this technique and approaches for real-time hardware implementation.© (1981) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
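The core of the shutterless idea, using the long-term scene average as the effective uniform reference, can be sketched as a running per-pixel mean; the update rate `alpha` is an assumed parameter, and gain (responsivity) correction is omitted for brevity:

```python
def scene_based_offsets(frames, alpha=0.05):
    """Estimate per-pixel d.c. offsets from the scene itself: a recursive
    per-pixel average of a moving scene tends toward a uniform value, so
    each pixel's deviation from the array mean is its fixed-pattern offset."""
    h, w = len(frames[0]), len(frames[0][0])
    offset = [[0.0] * w for _ in range(h)]
    for frame in frames:
        for i in range(h):
            for j in range(w):
                # recursive average: continually updated, no shutter needed
                offset[i][j] += alpha * (frame[i][j] - offset[i][j])
    mean = sum(map(sum, offset)) / (h * w)
    return [[offset[i][j] - mean for j in range(w)] for i in range(h)]
```

Subtracting the returned fixed-pattern estimate from each live frame normalizes the offsets, provided the scene really does average out uniformly over the integration period, which is the technique's stated assumption.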

Journal Article
TL;DR: In this paper, an analysis of pixel labeling by probabilistic relaxation techniques is presented to demonstrate that these labeling procedures degenerate to weighted averages in the vicinity of fixed points, leading to a deterioration of labeling accuracy at a stage after an improvement has already been achieved.
Abstract: An analysis of pixel labeling by probabilistic relaxation techniques is presented to demonstrate that these labeling procedures degenerate to weighted averages in the vicinity of fixed points. A consequence of this is that undesired label conversions can occur, leading to a deterioration of labeling accuracy at a stage after an improvement has already been achieved. Means for overcoming the accuracy deterioration are suggested and used as the basis for a possible design strategy for using probabilistic relaxation procedures. The results obtained are illustrated using simple data sets in which labeling on individual pixels can be examined and also using Landsat imagery to show application to data typical of that encountered in remote sensing applications.

Journal ArticleDOI
TL;DR: An algorithm is described for converting region boundaries in an image array into chain-encoded line structures, each described by a set of chain links, which is used for preprocessing an image in scene analysis.

Patent
16 Mar 1981
TL;DR: In this article, a method and apparatus for compressing digital data derived from an image using non-adaptive predictive techniques is presented, where a prediction table, which is pre-generated based on a number of sample images, generates a predicted pixel and source state for each pixel of the original image, as the image is scanned.
Abstract: A method and apparatus for compressing digital data derived from an image using non-adaptive predictive techniques. A prediction table, which is pre-generated based on a number of sample images, generates a predicted pixel and source state for each pixel of the original image, as the original image is scanned. The predicted pixel is the expected value of the pixel when considering the values of a group of adjoining pixels, while the source state is indicative of the probability that the predicted pixel is in error. Prediction error pixels are then generated and grouped according to their respective source states to form a plurality of run length symbols with each symbol comprising a white portion and a black portion, which symbols are stored sequentially in order of formation in a memory device according to their respective source states. The symbols are used to provide address data for memory devices which generate variable length code words that are transmitted to a receiving station and decoded using the reverse of the aforementioned process to form a facsimile image.

01 Dec 1981
TL;DR: This thesis presents an algorithm for detecting man-made objects embedded in low resolution imagery using a modified Kirsch edge operator for initial image enhancing and a normal Kirsch operator for edge detection.
Abstract: : This thesis presents an algorithm for detecting man-made objects embedded in low resolution imagery. A modified Kirsch edge operator is used for initial image enhancing. A normal Kirsch operator is then used for edge detection. A two-dimensional threshold on edge strength and original intensity detects only the pixels on the edges of the objects. These pixels are then subjected to connectedness and size tests to detect the blobs which most probably represent man-made objects. The algorithm was tried on 325 pictures and a detection probability of 83.3% was achieved. False alarm probability was less than 10%. (Author)
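The standard Kirsch compass operator used here can be sketched as below (a minimal single-pixel version; the thesis's modified operator and thresholding stages are not reproduced):

```python
def kirsch(img, y, x):
    """Kirsch edge response at interior pixel (y, x): the maximum over the
    eight compass rotations of the 3x3 mask [5,5,5 / -3,0,-3 / -3,-3,-3]."""
    # 8-neighbourhood in clockwise order starting at the upper-left pixel
    n = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
         img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    best = 0
    for k in range(8):
        s = sum(5 * n[(k + i) % 8] for i in range(3))       # the three '5' weights
        t = sum(-3 * n[(k + i) % 8] for i in range(3, 8))   # the five '-3' weights
        best = max(best, s + t)
    return best
```

The response is zero on uniform regions and large across intensity steps, which is what the two-dimensional threshold then exploits.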

Patent
24 Apr 1981
TL;DR: In this paper, an approach for reducing the effect of x-ray statistical noise and electronic noise in a fluorographic system that displays an X-ray image on a television screen is presented.
Abstract: Apparatus for reducing the effect of x-ray statistical noise and electronic noise in a fluorographic system that displays an x-ray image on a television screen. Analog video signals based on the x-ray image are amplified logarithmically and digitized to yield live pixel signals. Processed pixel signals are averaged in a full image store or memory. Motion is detected by subtracting the stored pixels from the live pixels on a pixel-by-pixel basis in an ALU. The difference resulting from subtraction is used as part of an address to a look-up table (LUT) which contains values equivalent to the difference signals times a noise reduction multiplicative factor, K. The other part of the address is the live pixel value. There are several replications of the look-up table, each relating to a particular brightness level range. The one selected is determined by the live signal part of the address, which relates to brightness. The K times the difference signals in the ranges are chosen so the amount of noise reduction varies with brightness level as desired for logarithmic signals. The pixels processed as explained above are added in-phase with the stored and averaged pixels and returned to the corresponding full image memory locations.
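The recursive filtering behaviour described above can be sketched as a per-pixel update; here the hardware LUT of precomputed K-times-difference values is replaced by a direct computation with a brightness-dependent K (an illustrative simplification, not the patent's circuit):

```python
def temporal_filter(live, stored, k_for_level):
    """Recursive noise filter: each stored pixel moves toward the live pixel
    by a fraction K chosen from the live pixel's brightness level.
    Larger K means less smoothing but faster response to motion.
    live, stored: flat, equal-length lists of pixel values."""
    out = []
    for lv, st in zip(live, stored):
        k = k_for_level(lv)            # brightness-dependent noise-reduction factor
        out.append(st + k * (lv - st))
    return out
```

Choosing K larger where motion (a large live-minus-stored difference) is detected trades noise reduction for reduced motion blur, which is the design point of the multiple brightness-range tables.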

Patent
19 Oct 1981
TL;DR: In this paper, a display subroutine effects fetching of a byte of pattern data for the selected pattern from a dedicated memory, which includes pixel codes that indicate whether corresponding pixels of a selected pattern are "transparent" or darkened.
Abstract: A computer is connected to a display device having a display memory system that is updated by the computer. The computer selects a pattern to be displayed and a screen location at which the selected pattern is to be displayed. A display subroutine effects fetching of a byte of pattern data for the selected pattern from a dedicated memory. The fetched pattern data includes pixel codes that indicate whether corresponding pixels of the selected pattern are "transparent" or darkened. The display subroutine then transmits the pixel codes to the display memory system. The display memory system includes circuitry that detects whether each transmitted pixel code represents a transparent pixel. If it does, the circuitry inhibits writing of that pixel code into a storage portion of the display memory system. The circuitry also enables writing of non-transparent pixel codes into the storage portion of the display memory system.
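The write-inhibit logic for transparent pixels can be sketched in a few lines; the sentinel code for "transparent" and the in-memory screen layout are assumptions for illustration:

```python
TRANSPARENT = 0  # assumed pixel code meaning "transparent"

def blit(screen, pattern, row, col):
    """Write a pattern's pixel codes into the display memory, inhibiting the
    write for transparent pixels so the existing background shows through."""
    for dy, prow in enumerate(pattern):
        for dx, code in enumerate(prow):
            if code != TRANSPARENT:        # the inhibit: skip transparent codes
                screen[row + dy][col + dx] = code
    return screen
```

Doing the transparency test in the display memory circuitry, rather than in the subroutine, lets the computer transmit every pattern byte unconditionally.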

01 May 1981
TL;DR: In this paper, a strapped-down system for on-board, real-time spacecraft attitude determination is discussed, which is capable of sub-ten-arc-second precision with no moving parts.
Abstract: : A new strapped-down system for on-board, real-time spacecraft attitude determination is discussed. The electro-optical system is capable of sub-ten-arc-second precision with no moving parts. The light-sensitive element is an array-type Charge-Coupled Device (CCD) having about 2 x 10 to the 5th power silicon pixels. Parallel, high speed analog circuits scan the pixels (row by row) to locate and A/D convert only those pixel response values (about 100 to 200 per scan) above a preset analog threshold. Angular rate measurements from conventional rate gyros are used to estimate motion continuously. Three intermittently communicating microcomputers operate in parallel to perform the functions: (i) star image centroid determination, (ii) star pattern identification and discrete attitude estimation (subsets of measured stars are identified as specific cataloged stars), (iii) optimal Kalman attitude motion estimation/integration. The system is designed to be self-calibrating with provision for routine updating of interlock angles, gyro bias parameters, and other system calibration parameters. For redundancy and improved precision, two optical ports are employed. This interim report documents Phase I of a three phase effort to research, develop, and laboratory test the basic concepts of this new system. Included in Phase I is definition, formulation, and test of the basic algorithms, including preliminary implementations and results from a laboratory microcomputer system. (Author)
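The first microcomputer function, star image centroid determination, is commonly an intensity-weighted mean of the thresholded pixel responses; a minimal sketch under that assumption (the report's actual algorithm may differ):

```python
def centroid(samples):
    """Sub-pixel star image centroid: intensity-weighted mean position of the
    pixel responses that crossed the analog threshold.
    samples: list of (row, col, response) tuples."""
    total = sum(v for _, _, v in samples)
    r = sum(row * v for row, _, v in samples) / total
    c = sum(col * v for _, col, v in samples) / total
    return r, c
```

Because a defocused star image spans several pixels, the weighted mean localizes the star well below the pixel pitch, which is how arc-second-class precision is reached from a coarse array.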

Patent
03 Jun 1981
TL;DR: In this paper, a one dimensional electronic halftone generating system with a source of digital data representative of pixel greyscale, a counter to store the digital data, and pulse producing logic responsive to the counter to activate a laser modulator in accordance with the digital values representative of each pixel.
Abstract: A one dimensional electronic halftone generating system having a source of digital data representative of pixel greyscale, a counter to store the digital data, and pulse producing logic responsive to the counter to activate a laser modulator in accordance with the digital data representative of each pixel. In particular, a six bit data word represents one of 64 greyscale states for a particular pixel. The pulse producing logic responds to the particular data word to produce a pulse of a given duration or width to drive the laser for a given time period. The duration of the pulse, representing one to 64 states for a given pixel, will produce a given discrete greyscale value for each pixel.
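The word-to-pulse mapping can be sketched as a one-line scaling; mapping grey word g to on-time fraction (g + 1)/64, so the 64 states give 1 through 64 discrete on-times, is an assumption consistent with the abstract's "one to 64 states":

```python
def pulse_width(grey, pixel_time, levels=64):
    """Map a 6-bit greyscale word (0..levels-1) to a laser-on pulse duration
    that is the matching discrete fraction of the pixel period."""
    if not 0 <= grey < levels:
        raise ValueError("grey word out of range")
    return pixel_time * (grey + 1) / levels
```

The laser modulator is then driven for this duration within each pixel period, producing the halftone dot.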

Patent
15 Jun 1981
TL;DR: In this article, the edge codes are retrieved from the map in display order to form a pixel data stream which is sequentially decoded by a look-up table and advanced through a pipeline latch for providing color and intensity control voltages to a D/A converter.
Abstract: Edge data codes forming an image to be displayed are entered into a random access memory map at addresses corresponding to the scanline number and pixel number of the edge in the display of the image. The edge codes may be entered into the memory map in any sequence (i.e. the sequence of availability, the sequence of generation, or the sequence of display). The addresses in the map which do not receive edge codes are filled with zeros. The edge codes are retrieved from the map in display order to form a pixel data stream which is sequentially decoded by a look-up table and advanced through a pipeline latch for providing color and intensity control voltages to a D/A converter. Clocked pulses through a timing gate advance each new decoded edge code into the latch. The zeros between the edge codes are detected and disable the timing gate during the non-transition period between edge codes. Each edge code remains latched during the non-transition period between transitions, causing the continuous display thereof during the non-transition period. Predetermined non-zero codes may be separately detected to provide formatting control voltages which control other display features such as resolution and color scales. The subject matter of this application relates to the subject matter of U.S. patent application Ser. No. 148,964, entitled Composite Display Device for Combining Image Data and Method, filed May 12, 1980 by the present assignee.
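The latch-and-hold decoding behaviour can be sketched in software; the look-up table contents and the assumption that the latch starts at the table's background entry are illustrative, not taken from the patent:

```python
def decode_stream(pixel_stream, lut):
    """Decode a scanline stream of edge codes: a non-zero code is looked up
    and latched; zeros (no transition) repeat the latched value, mimicking
    the inhibited timing gate between edges."""
    latched = lut[0]          # assumed background value before the first edge
    out = []
    for code in pixel_stream:
        if code != 0:
            latched = lut[code]   # new edge: clock the decoded code into the latch
        out.append(latched)
    return out
```

Storing only the edges and letting zeros hold the latch is what lets large uniform spans of the display be represented by a single code in the map.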