Journal Article

# Mathematical Analysis of Random Noise - Conclusion

About: This article was published in the Bell System Technical Journal on 1945-01-01 and is open access. It has received 807 citations to date.


##### Citations



TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.

Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.

26,639 citations
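The approximate implementation described in the abstract above, marking edges at maxima in gradient magnitude of a Gaussian-smoothed image, can be sketched in a few lines. This is a minimal illustration assuming NumPy/SciPy; the simplified non-maximum suppression (horizontal neighbours only) and the threshold value are choices of this sketch, not part of Canny's detector:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_edges(image, sigma=1.0, threshold=0.1):
    """Mark edges at local maxima of the gradient magnitude of a
    Gaussian-smoothed image. Sketch only: suppression is done along
    the horizontal axis rather than the local gradient direction,
    and there is no hysteresis or multi-scale feature synthesis."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    left = np.roll(mag, 1, axis=1)    # horizontal neighbours
    right = np.roll(mag, -1, axis=1)
    return (mag > threshold) & (mag >= left) & (mag >= right)

# A vertical step edge: zeros on the left half, ones on the right.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = gradient_magnitude_edges(img)
```

A faithful implementation would suppress along the local gradient direction and add the hysteresis thresholding and feature synthesis the abstract mentions.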


TL;DR: This chapter discusses the maximum-likelihood heavy-atom parameter refinement for multiple isomorphous replacement (MIR) and multiwavelength anomalous diffraction (MAD) and its extension to probability distributions incorporating anomalous diffraction effects, as well as measurement error and nonisomorphism.

Abstract: This chapter discusses the maximum-likelihood heavy-atom parameter refinement for multiple isomorphous replacement (MIR) and multiwavelength anomalous diffraction (MAD). The chapter describes its extension to probability distributions incorporating anomalous diffraction effects, as well as measurement error and nonisomorphism. Integrating these distributions over the whole complex plane leads to likelihood functions that can be used for heavy-atom detection and refinement and for producing phase-probability distributions. The current implementation of this formalism in the computer program statistical heavy-atom refinement and phasing (SHARP) is also described in the chapter. Likelihood functions can be used for the final phasing and calculation of Hendrickson–Lattman coefficients. Numerical tests have been performed for three types of common refinements, namely single isomorphous replacement, multiple isomorphous replacement with anomalous scattering (MIRAS), and MAD, and the results are summarized in the chapter. A key feature of SHARP is its ability to refine lack-of-isomorphism parameters along with all the others.

1,948 citations


TL;DR: In this paper, an algorithm for solving the stereoscopic matching problem is proposed, which consists of five steps: (1) each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity.

Abstract: An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask’s central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2½-D sketch.

1,635 citations
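Step 3 of the algorithm above, matching same-sign zero-crossings within a disparity range tied to the mask width, can be sketched in one dimension along an epipolar line. This is a simplified illustration assuming NumPy/SciPy: a difference of Gaussians (the standard 1.6 width ratio) stands in for the oriented bar masks, and all function names and parameter values are choices of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossings(signal, sigma):
    """Zero-crossings of the band-pass filtered signal, with the sign
    of each crossing (difference of Gaussians, width ratio 1.6,
    standing in for the oriented bar masks)."""
    filtered = (gaussian_filter1d(signal, sigma)
                - gaussian_filter1d(signal, 1.6 * sigma))
    s = np.sign(filtered)
    d = np.diff(s)
    idx = np.where(d != 0)[0]
    return idx, d[idx]

def match_crossings(left, right, sigma, max_disp):
    """Match same-sign zero-crossings between the two images within a
    disparity range tied to the mask width, keeping only unambiguous
    (single-candidate) matches."""
    li, ls = zero_crossings(left, sigma)
    ri, rs = zero_crossings(right, sigma)
    matches = []
    for i, sgn in zip(li, ls):
        cands = [j for j, t in zip(ri, rs)
                 if t == sgn and abs(j - i) <= max_disp]
        if len(cands) == 1:
            matches.append((int(i), int(cands[0] - i)))  # (position, disparity)
    return matches

# A step edge shifted by 3 pixels between the two views.
left = np.zeros(64);  left[20:] = 1.0
right = np.zeros(64); right[23:] = 1.0
matches = match_crossings(left, right, sigma=2.0, max_disp=5)
```

The full algorithm adds the coarse-to-fine control of steps 4 and 5: wide masks drive vergence so that the small masks come into range, and accepted matches are stored in the 2½-D sketch.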


TL;DR: In this paper, the axisymmetric turbulent incompressible and isothermal jet was investigated by use of linearized constant-temperature hot-wire anemometers and the quantities measured include mean velocity, turbulence stresses, intermittency, skewness and flatness factors, correlations, scales, low-frequency spectra and convection velocity.

Abstract: The axisymmetric turbulent incompressible and isothermal jet was investigated by use of linearized constant-temperature hot-wire anemometers. It was established that the jet was truly self-preserving some 70 diameters downstream of the nozzle and most of the measurements were made in excess of this distance. The quantities measured include mean velocity, turbulence stresses, intermittency, skewness and flatness factors, correlations, scales, low-frequency spectra and convection velocity. The r.m.s. values of the various velocity fluctuations differ from those measured previously as a result of lack of self-preservation and insufficient frequency range in the instrumentation of the previous investigations. It appears that Taylor's hypothesis is not applicable to this flow, but the use of convection velocity of the appropriate scale for the transformation from temporal to spatial quantities appears appropriate. The energy balance was calculated from the various measured quantities and the result is quite different from the recent measurements of Sami (1967), which were obtained twenty diameters downstream from the nozzle. In light of these measurements some previous hypotheses about the turbulent structure and the transport phenomena are discussed. Some of the quantities were obtained by two or more different methods, and their relative merits and accuracy are assessed.

1,252 citations
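Taylor's hypothesis, whose applicability the abstract questions, converts temporal statistics to spatial ones by assuming the turbulence is frozen and advected at a convection velocity. A minimal sketch of that frequency-to-wavenumber mapping follows (NumPy; the model spectrum and convection velocity are illustrative values, not measured data):

```python
import numpy as np

def temporal_to_spatial(freqs, spectrum, u_c):
    """Taylor's frozen-turbulence transformation: map a temporal
    spectrum S(f) to a spatial one E(k) via k = 2*pi*f/u_c. The
    Jacobian factor u_c/(2*pi) preserves the integrated energy."""
    k = 2.0 * np.pi * freqs / u_c
    e_k = spectrum * u_c / (2.0 * np.pi)
    return k, e_k

# Illustrative inertial-range model spectrum, S(f) ~ f^(-5/3).
freqs = np.linspace(10.0, 1000.0, 200)                   # Hz
spectrum = freqs ** (-5.0 / 3.0)
k, e_k = temporal_to_spatial(freqs, spectrum, u_c=30.0)  # u_c in m/s
```

The abstract's point is that a single convection velocity is inadequate for this flow; the convection velocity of the appropriate scale should be used for each transformed quantity.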


TL;DR: A review of classical percolation theory is presented, with an emphasis on novel applications to statistical topography, turbulent diffusion, and heterogeneous media; a geometrical approach to studying transport in random media, which captures essential qualitative features of the described phenomena, is advocated.

Abstract: A review of classical percolation theory is presented, with an emphasis on novel applications to statistical topography, turbulent diffusion, and heterogeneous media. Statistical topography involves the geometrical properties of the isosets (contour lines or surfaces) of a random potential ψ(x). For rapidly decaying correlations of ψ, the isopotentials fall into the same universality class as the perimeters of percolation clusters. The topography of long-range correlated potentials involves many length scales and is associated either with the correlated percolation problem or with Mandelbrot's fractional Brownian reliefs. In all cases, the concept of fractal dimension is particularly fruitful in characterizing the geometry of random fields. The physical applications of statistical topography include diffusion in random velocity fields, heat and particle transport in turbulent plasmas, quantum Hall effect, magnetoresistance in inhomogeneous conductors with the classical Hall effect, and many others where random isopotentials are relevant. A geometrical approach to studying transport in random media, which captures essential qualitative features of the described phenomena, is advocated.

1,014 citations
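The correspondence drawn above between isosets of a random potential and percolation clusters can be illustrated with a toy excursion-set experiment: threshold an uncorrelated random potential ψ at a level h and test whether the set ψ > h spans the lattice. This is a minimal sketch assuming NumPy/SciPy; the lattice size, seed, and levels are illustrative:

```python
import numpy as np
from scipy.ndimage import label

def spans(psi, h):
    """Does the excursion set {psi > h} contain a cluster connecting
    the left and right edges of the lattice (4-connectivity)?"""
    labels, _ = label(psi > h)
    both = set(labels[:, 0]) & set(labels[:, -1])
    return bool(both - {0})          # label 0 is the background

rng = np.random.default_rng(0)
psi = rng.uniform(size=(64, 64))     # uncorrelated random potential
# Site percolation threshold on the square lattice is p_c ~ 0.593,
# i.e. a critical level h_c ~ 0.407 for a uniform potential.
spans_low = spans(psi, 0.1)          # dense phase: expect a spanning cluster
spans_high = spans(psi, 0.9)         # dilute phase: expect none
```

For a short-range-correlated ψ this excursion set behaves like ordinary percolation, as the abstract states; the long-range-correlated case requires correlated percolation or fractional Brownian reliefs instead.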