Author

Irving S. Reed

Bio: Irving S. Reed is an academic researcher from the University of Southern California. The author has contributed to research in topics including Very-large-scale integration and the Berlekamp–Welch algorithm. The author has an h-index of 40 and has co-authored 186 publications receiving 6,326 citations.


Papers
Journal ArticleDOI
TL;DR: Both theoretical and computer-simulation results show that the SNR improvement factor of this algorithm, which uses multiple band scenes rather than the single scene of maximum SNR, can be substantial, and that the generalized SNR of the test using the full data array is always greater than that of a test using only a partial data array.
Abstract: A constant false alarm rate (CFAR) detection algorithm (see J.Y. Chen and I.S. Reed, IEEE Trans. Aerosp. Electron. Syst., vol. AES-23, no. 1, Jan. 1987) is generalized to a test that can detect the presence of a known optical signal pattern with nonnegligible, unknown relative intensities in several signal-plus-noise bands or channels. This test and its statistics are analytically evaluated, and the signal-to-noise ratio (SNR) performance improvement is analyzed. Both theoretical and computer-simulation results show that the SNR improvement factor of this algorithm, using multiple band scenes over the single scene of maximum SNR, can be substantial. The SNR gain of this detection algorithm is compared with that of the previously published one, illustrating that the generalized SNR of the test using the full data array is always greater than that of a test using only a partial data array. The database used to simulate this adaptive CFAR test is obtained from actual image scenes.
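The multiple-band test described above is often cited as the basis of the RX (Reed-Xiaoli) anomaly detector. The following is a minimal sketch of that style of statistic, not the paper's exact test: each pixel's band vector is scored by a Mahalanobis-type quadratic form against a sample mean and band-to-band covariance estimated from the scene, so a fixed threshold behaves in a CFAR-like way. The cube shape, the implanted target, and all variable names are illustrative assumptions.

```python
# Hedged sketch of a multi-band, RX-style CFAR statistic (illustrative only).
# A data cube of shape (bands, rows, cols) is reduced to a per-pixel
# Mahalanobis-type score against the scene mean and band-to-band covariance.
import numpy as np

def rx_style_statistic(cube):
    """cube: ndarray of shape (bands, rows, cols); returns (rows, cols) scores."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1)                # each column is one pixel's spectrum
    mu = x.mean(axis=1, keepdims=True)         # scene mean per band
    xc = x - mu
    cov = (xc @ xc.T) / (xc.shape[1] - 1)      # sample band-to-band covariance
    cov_inv = np.linalg.inv(cov)
    scores = np.einsum('ij,ik,kj->j', xc, cov_inv, xc)   # x_c^T C^-1 x_c per pixel
    return scores.reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(6, 64, 64))        # synthetic 6-band background
    cube[:, 32, 32] += 4.0                     # implant a multi-band target
    scores = rx_style_statistic(cube)
    print("peak score at:", np.unravel_index(scores.argmax(), scores.shape))
```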

1,658 citations

Journal ArticleDOI
TL;DR: The results of a study to evaluate the 3-D matched filtering processor demonstrate the capability and robustness of the processor, and show that the algorithms, although somewhat complicated, can be implemented readily.
Abstract: Three-dimensional (3-D) matched filtering has been suggested as a powerful processing technique for detecting weak, moving optical targets immersed in a background noise field. The procedure requires the processing of entire sequences of frames of optical scenes containing the moving targets. The 3-D processor must be properly matched to the target signature and its velocity vector, but will simultaneously detect all targets to which it is matched. The results of a study to evaluate the 3-D processor are presented. Simulation results are reported which show the ability of the processor to detect targets well below the background level. These results demonstrate the capability and robustness of the processor, and show that the algorithms, although somewhat complicated, can be implemented readily. The effects of the number of frames processed, target flight scenarios, and velocity and signature mismatch are also examined. The ability to detect multiple targets is demonstrated.
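As a rough illustration of the core operation described above, the sketch below correlates a frame sequence with a space-time template built from an assumed 2-D target signature translated frame to frame by an assumed velocity, using FFTs for the 3-D (circular) correlation. The frame count, signature, velocity, and noise level are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of 3-D matched filtering: correlate a frame sequence with
# a space-time template built from an assumed target signature moving at an
# assumed constant velocity. FFT-based, so the correlation is circular.
import numpy as np

def build_template(frames, rows, cols, signature, velocity):
    """Place the 2-D signature in each frame, shifted along the velocity vector."""
    t = np.zeros((frames, rows, cols))
    r0, c0 = rows // 4, cols // 4
    h, w = signature.shape
    for k in range(frames):
        r = int(round(r0 + velocity[0] * k)) % rows
        c = int(round(c0 + velocity[1] * k)) % cols
        t[k, r:r + h, c:c + w] += signature[:rows - r, :cols - c]
    return t

def matched_filter_3d(data, template):
    """Correlate data with the template in the 3-D Fourier domain."""
    D = np.fft.fftn(data)
    H = np.conj(np.fft.fftn(template))
    return np.real(np.fft.ifftn(D * H))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames, rows, cols = 16, 64, 64
    signature = np.ones((3, 3))
    truth = build_template(frames, rows, cols, 0.5 * signature, velocity=(1.0, 0.5))
    data = truth + rng.normal(size=truth.shape)      # per-pixel SNR below 1
    out = matched_filter_3d(data, build_template(frames, rows, cols, signature, (1.0, 0.5)))
    print("peak filter response at (frame, row, col):",
          np.unravel_index(out.argmax(), out.shape))
```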

257 citations

Journal ArticleDOI
TL;DR: In this paper, a new constant false alarm rate (CFAR) detector is developed as an application of the classical generalized maximum likelihood ratio test of Neyman and Pearson, which exhibits the desirable property that its probability of a false alarm is independent of the covariance matrix of the actual noise.
Abstract: There is active interest in the development of algorithms for detecting weak stationary optical and IR targets in a heavy optical clutter background. Often only poor detectability of low signal-to-noise ratio (SNR) targets is achieved when the direct correlation method is used. In many cases, this is partly obviated by using detection with correlated reference scenes [1, 2]. This paper uses the experimentally justified assumption that most optical clutter can be modeled as a whitened Gaussian random process with a rapidly space-varying mean and a more slowly varying covariance [2]. With this assumption, a new constant false alarm rate (CFAR) detector is developed as an application of the classical generalized maximum likelihood ratio test of Neyman and Pearson. The final CFAR test is a dimensionless ratio. This test exhibits the desirable property that its probability of a false alarm (PFA) is independent of the covariance matrix of the actual noise encountered. When the underlying noise processes are complex in time, similar considerations can yield a sidelobe canceler CFAR detection criterion for radar and communications. Performance analyses based on the probability of detection (PD) versus signal-to-noise ratio for several given fixed false alarm probabilities are presented. Finally, these performance curves are validated by computer simulations of the detection process which use real image data with artificially implanted signals.
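The CFAR property claimed above (a false-alarm probability that does not depend on the actual noise covariance) can be checked numerically. The sketch below does not use the paper's test statistic; it uses Hotelling's T-squared as a stand-in dimensionless ratio formed from an estimated mean and covariance, and compares empirical false-alarm rates under white and strongly colored Gaussian noise at the same threshold. The sample size, covariance matrices, and threshold are illustrative assumptions.

```python
# Numerical illustration of the CFAR property (covariance-independent false-alarm
# rate), with Hotelling's T^2 as a stand-in for the paper's dimensionless ratio.
import numpy as np

def hotelling_t2(samples):
    """samples: (N, p). Ratio whose null distribution does not depend on the covariance."""
    n, p = samples.shape
    xbar = samples.mean(axis=0)
    s = np.cov(samples, rowvar=False)
    return n * xbar @ np.linalg.solve(s, xbar)

def empirical_pfa(cov, threshold, trials=10000, n=30, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    p = cov.shape[0]
    hits = 0
    for _ in range(trials):
        x = rng.multivariate_normal(np.zeros(p), cov, size=n)
        hits += hotelling_t2(x) > threshold
    return hits / trials

if __name__ == "__main__":
    cov_white = np.eye(3)
    cov_colored = np.array([[4.0, 1.5, 0.0], [1.5, 2.0, 0.5], [0.0, 0.5, 9.0]])
    thr = 12.0
    print("PFA, white noise:  ", empirical_pfa(cov_white, thr))
    print("PFA, colored noise:", empirical_pfa(cov_colored, thr))
    # The two estimates should agree closely, illustrating the CFAR behaviour.
```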

256 citations

Journal ArticleDOI
TL;DR: It is shown that the problem of tracking a target having a fixed velocity can be cast into a general framework of three-dimensional filter theory and the design of these filters is presented, taking into account the target, clutter, and optical detection models.
Abstract: The standard approach to the detection of a stationary target immersed within an optically observed scene is to use integration to separate the target energy from the background clutter. When the target is nonstationary and moves with fixed velocity relative to the clutter, the procedure for integrating the target signal is no longer obvious. In this paper it is shown that the problem of tracking a target having a fixed velocity can be cast into a general framework of three-dimensional filter theory. From this point of view, the target detection problem reduces to the problem of finding optimal three-dimensional filters in the three-dimensional transform domain and processing the observed scene via this filtering. The design of these filters is presented, taking into account the target, clutter, and optical detection models. Performance is computed for a basic clutter model, showing the effective increase in detectability as a function of the target velocity. The three-dimensional transform approach is readily compatible with VLSI array processing technology.
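As a sketch of working directly in the 3-D transform domain, the code below assembles a velocity-matched filter without ever shifting images: for a signature moving at constant velocity, each frame's 2-D spectrum is the signature spectrum times a velocity-dependent phase ramp, and a transform over the frame axis gives the 3-D transfer function. The shapes, signature, and velocity are illustrative assumptions, not the paper's filter design.

```python
# Sketch of a velocity-matched 3-D filter built in the transform domain: a target
# moving with constant velocity contributes, in frame k, its 2-D signature
# spectrum multiplied by the phase ramp exp(-j*2*pi*(f . v)*k). Illustrative only.
import numpy as np

def velocity_matched_filter(frames, rows, cols, signature, velocity):
    """Return H(ft, fy, fx), the conjugate 3-D spectrum of the moving signature."""
    sig = np.zeros((rows, cols))
    sig[:signature.shape[0], :signature.shape[1]] = signature
    S = np.fft.fft2(sig)                          # 2-D signature spectrum
    fy = np.fft.fftfreq(rows)[:, None]            # spatial frequencies (cycles/pixel)
    fx = np.fft.fftfreq(cols)[None, :]
    ramp = np.exp(-2j * np.pi * (velocity[0] * fy + velocity[1] * fx))
    spectrum = np.stack([S * ramp**k for k in range(frames)])   # per-frame spectra
    return np.conj(np.fft.fft(spectrum, axis=0))  # transform over the frame axis

def apply_filter(data, H):
    """Filter a (frames, rows, cols) sequence with H in the 3-D Fourier domain."""
    D = np.fft.fft(np.fft.fft2(data), axis=0)
    return np.real(np.fft.ifft2(np.fft.ifft(D * H, axis=0)))

if __name__ == "__main__":
    H = velocity_matched_filter(16, 64, 64, np.ones((3, 3)), velocity=(1.0, 0.5))
    out = apply_filter(np.random.default_rng(2).normal(size=(16, 64, 64)), H)
    print("filtered sequence shape:", out.shape)
```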

198 citations

Journal ArticleDOI
TL;DR: This test exhibits the desirable property that its PFA is independent of the covariance matrix (level and structure) of the actual noise encountered; i.e., it is a CFAR (constant false alarm rate) test.

188 citations


Cited by
Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be seen, for instance, in the inability of existing descriptors to capture spatial relationships between the represented concepts, or in their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
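The co-occurrence idea mentioned above can be illustrated with a toy computation: given a map of discrete labels (for example, quantized descriptors or concept indices), count how often pairs of labels occur at a fixed spatial offset, which preserves some spatial-relationship information that a plain bag-of-features histogram discards. The label map, number of labels, and offset below are illustrative; this is not MUCKE's actual representation.

```python
# Minimal sketch of a label co-occurrence matrix: count how often label pairs
# occur at a fixed spatial offset in a 2-D label map. Illustrative assumptions only.
import numpy as np

def cooccurrence(labels, n_labels, offset=(0, 1)):
    """labels: 2-D int array; returns an (n_labels, n_labels) co-occurrence count matrix."""
    dr, dc = offset
    rows, cols = labels.shape
    a = labels[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    b = labels[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    m = np.zeros((n_labels, n_labels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # accumulate (label_here, label_at_offset)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    label_map = rng.integers(0, 4, size=(8, 8))   # toy 4-concept label map
    print(cooccurrence(label_map, 4, offset=(0, 1)))
```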

2,134 citations

01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science," and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Journal ArticleDOI
TL;DR: It is shown that a simple scaling of the projection of tentative weights, in the subspace orthogonal to the linear constraints, can be used to satisfy the quadratic inequality constraint.
Abstract: Adaptive beamforming algorithms can be extremely sensitive to slight errors in array characteristics. Errors which are uncorrelated from sensor to sensor pass through the beamformer like uncorrelated or spatially white noise. Hence, gain against white noise is a measure of robustness. A new algorithm is presented which includes a quadratic inequality constraint on the array gain against uncorrelated noise, while minimizing output power subject to multiple linear equality constraints. It is shown that a simple scaling of the projection of tentative weights, in the subspace orthogonal to the linear constraints, can be used to satisfy the quadratic inequality constraint. Moreover, this scaling is equivalent to a projection onto the quadratic constraint boundary so that the usual favorable properties of projection algorithms apply. This leads to a simple, effective, robust adaptive beamforming algorithm in which all constraints are satisfied exactly at each step and roundoff errors do not accumulate. The algorithm is then extended to the case of a more general quadratic constraint.
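A minimal sketch of the scaling step described above, under assumed shapes: the weight vector is split into a quiescent part that satisfies the linear constraints C^H w = f and a component in the subspace orthogonal to the constraints, and that component is scaled so a norm bound ||w||^2 <= delta^2 (used here as a proxy for the white-noise-gain constraint) holds exactly whenever it would otherwise be violated. The constraint matrix, response vector, and bound are illustrative, and this is not the exact published algorithm.

```python
# Hedged sketch of the scaled-projection idea: keep C^H w = f exact, then shrink
# the component of w orthogonal to the constraint subspace so that the quadratic
# inequality ||w||^2 <= delta2 is met. All shapes and values are illustrative.
import numpy as np

def scaled_projection(w_tentative, C, f, delta2):
    """C: (n, k) constraint matrix, f: (k,) response vector, delta2: norm bound."""
    CtC_inv = np.linalg.inv(C.conj().T @ C)
    w_q = C @ (CtC_inv @ f)                              # quiescent weights: C^H w_q = f
    P = np.eye(C.shape[0]) - C @ CtC_inv @ C.conj().T    # projector orthogonal to col(C)
    v = P @ w_tentative                                  # adaptive part, orthogonal to w_q
    norm_q, norm_v = np.vdot(w_q, w_q).real, np.vdot(v, v).real
    if norm_q + norm_v > delta2 and norm_v > 0:
        v *= np.sqrt(max(delta2 - norm_q, 0.0) / norm_v) # scale onto the constraint boundary
    return w_q + v

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, k = 8, 2
    C = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
    f = np.array([1.0, 0.0])                             # e.g. unit look-direction response
    w = scaled_projection(rng.normal(size=n) + 1j * rng.normal(size=n), C, f, delta2=1.5)
    print("constraint error:", np.linalg.norm(C.conj().T @ w - f))
    print("weight norm^2:", np.vdot(w, w).real)
```

Because the quiescent part lies in the column space of C and the scaled part lies in its orthogonal complement, the two norms add, so the scaling lands exactly on the quadratic constraint boundary, mirroring the projection property described in the abstract.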

1,851 citations

Journal ArticleDOI
TL;DR: Both theoretical and computer-simulation results show that the SNR improvement factor of this algorithm, which uses multiple band scenes rather than the single scene of maximum SNR, can be substantial, and that the generalized SNR of the test using the full data array is always greater than that of a test using only a partial data array.
Abstract: A constant false alarm rate (CFAR) detection algorithm (see J.Y. Chen and I.S. Reed, IEEE Trans. Aerosp. Electron. Syst., vol. AES-23, no. 1, Jan. 1987) is generalized to a test that can detect the presence of a known optical signal pattern with nonnegligible, unknown relative intensities in several signal-plus-noise bands or channels. This test and its statistics are analytically evaluated, and the signal-to-noise ratio (SNR) performance improvement is analyzed. Both theoretical and computer-simulation results show that the SNR improvement factor of this algorithm, using multiple band scenes over the single scene of maximum SNR, can be substantial. The SNR gain of this detection algorithm is compared with that of the previously published one, illustrating that the generalized SNR of the test using the full data array is always greater than that of a test using only a partial data array. The database used to simulate this adaptive CFAR test is obtained from actual image scenes.

1,658 citations

Journal ArticleDOI
TL;DR: A tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing.
Abstract: Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth's surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The main sources of difficulty are the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated with the measurement process, such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized into six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. For each topic, we describe the state of the art, provide illustrative examples, and point to future challenges and research directions.
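Spectral unmixing, one of the six topics listed above, reduces in its simplest linear form to estimating nonnegative abundance fractions of known endmember spectra from a pixel spectrum. The sketch below solves that with nonnegative least squares; the endmember library and the mixed pixel are synthetic placeholders, not data from the survey.

```python
# Minimal sketch of linear spectral unmixing: model a pixel spectrum as a
# nonnegative combination of endmember spectra and recover abundances with
# nonnegative least squares. Endmembers and the pixel are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))           # synthetic endmember spectra
true_abundances = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundances + 0.01 * rng.normal(size=n_bands)  # noisy mixed pixel

abundances, residual = nnls(E, pixel)                          # nonnegative least squares
print("estimated abundances:", np.round(abundances, 3))
```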

1,604 citations