
Showing papers on "Kernel (image processing)" published in 1989


Journal ArticleDOI
01 Aug 1989
TL;DR: In this paper, the authors show how to convert the order-8 cosine transform into a family of integer cosine transforms (ICTs) using the theory of dyadic symmetry.
Abstract: The paper shows how to convert the order-8 cosine transform into a family of integer cosine transforms (ICTs) using the theory of dyadic symmetry. The new transforms can be implemented using simple integer arithmetic. It was found that performance close to that of the DCT can be achieved with an ICT that requires only 4 bits for representation of its kernel component magnitudes. Better performance can be achieved by some ICTs whose kernel components require longer bit lengths for representation. ICTs that require 3 bits or less for representation of their component magnitudes are available, but with degraded performance. The availability of many ICTs gives an engineer the freedom to trade off performance against implementation simplicity in designing a transform codec.
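
As a concrete illustration of the kernel structure (a sketch, not the paper's full derivation), one frequently cited member of this family uses integer magnitudes (a, b, c, d, e, f) = (10, 9, 6, 2, 3, 1); orthogonality of the order-8 kernel then reduces to the single condition ab = ac + bd + cd, which these values satisfy:

```python
import numpy as np

# Illustrative order-8 integer cosine transform (ICT) kernel with assumed
# component magnitudes (a, b, c, d, e, f) = (10, 9, 6, 2, 3, 1).
# Orthogonality of the odd-symmetric rows requires a*b = a*c + b*d + c*d.
a, b, c, d, e, f = 10, 9, 6, 2, 3, 1
assert a * b == a * c + b * d + c * d

T = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],   # rows follow the DCT's dyadic symmetries
    [a,  b,  c,  d, -d, -c, -b, -a],
    [e,  f, -f, -e, -e, -f,  f,  e],
    [b, -d, -a, -c,  c,  a,  d, -b],
    [1, -1, -1,  1,  1, -1, -1,  1],
    [c, -a,  d,  b, -b, -d,  a, -c],
    [f, -e,  e, -f, -f,  e, -e,  f],
    [d, -c,  b, -a,  a, -b,  c, -d],
], dtype=float)

# Rows are mutually orthogonal, so T @ T.T is diagonal ...
G = T @ T.T
assert np.allclose(G, np.diag(np.diag(G)))

# ... and row-normalising yields an orthonormal transform that closely
# tracks the order-8 DCT-II basis.
Tn = T / np.sqrt(np.diag(G))[:, None]
k, n = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
dct = np.cos((2 * n + 1) * k * np.pi / 16)
dct /= np.linalg.norm(dct, axis=1, keepdims=True)
print(np.abs((Tn * dct).sum(axis=1)))   # per-row cosine similarity, all near 1
```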

163 citations


Journal ArticleDOI
TL;DR: An LoG of space constant σ can be decomposed into the product of a Gaussian and an LoG mask, and the resulting LoG has space constant σ1.
Abstract: An LoG of space constant σ can be decomposed into the product of a Gaussian and an LoG mask (Chen, Huertas, and Medioni, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 1987, 584–590). The resulting LoG has space constant σ1

125 citations


Patent
18 May 1989
TL;DR: In this article, an apparatus and method for removing background noise and high frequency noise from an image by comparing each pixel in the image with neighboring pixels defining a variably shaped and sized kernel is presented.
Abstract: An apparatus and method for removing background noise and high frequency noise from an image by comparing each pixel in the image with neighboring pixels defining a variably shaped and sized kernel. The size and shape of the kernel are optimized for the particular characteristics of the data to be analyzed.
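
The patent text gives no algorithmic detail, but the core idea can be sketched as follows (a hypothetical stand-in, not the patented method): compare each pixel against a statistic of a neighborhood whose shape and size are tunable, and replace pixels that deviate strongly.

```python
import numpy as np
from scipy.ndimage import median_filter

def knockout_noise(img, ry=2, rx=3, thresh=25.0):
    """Illustrative sketch: compare each pixel with its neighbours inside
    an elliptical kernel of configurable size/shape; pixels that deviate
    strongly from the local median are treated as noise and replaced."""
    y, x = np.ogrid[-ry:ry + 1, -rx:rx + 1]
    footprint = (y / ry) ** 2 + (x / rx) ** 2 <= 1.0   # kernel shape/size knobs
    med = median_filter(img.astype(float), footprint=footprint)
    out = img.astype(float)
    noisy = np.abs(out - med) > thresh
    out[noisy] = med[noisy]
    return out
```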

46 citations


Patent
14 Jul 1989
TL;DR: Adaptive anisotropic digital filtering is automatically applied to the gray-scale values of substantially all the pixels in at least a portion of a digital image: for each such pixel, the matrix of coefficients of an anisotropic filter kernel is selected and angularly oriented as a weighting function of the gray-scale values of pixels in eight 45-degree sectors under the kernel, and a convolution of the gray-scale values under the kernel with the rotated kernel then generates a filtered value for the center pixel.
Abstract: Adaptive anisotropic digital filtering is automatically applied to the gray-scale values of substantially all the pixels in at least a portion of a digital image by, for each such pixel, selecting and angularly orienting the matrix of coefficients of an anisotropic filter kernel as a weighting function of the gray-scale values of pixels in eight 45-degree sectors under the kernel, and then performing a convolution of the gray-scale values of pixels under the kernel with the rotated kernel to generate a filtered value for the center pixel. Local brightness of the image is adjusted, either in conjunction with anisotropic filtering or separately, by comparing for each pixel the average gray-scale value of the pixels under the kernel with a reference value and applying a bias function to the center pixel's gray-scale value to brighten or darken the image depending on the result of the comparison.
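
A much-simplified stand-in for this kind of sector-driven filtering (a Nagao/Kuwahara-style sketch, not the patented procedure) picks, at each pixel, the 45-degree orientation whose directional neighborhood is most homogeneous and outputs that direction's mean:

```python
import numpy as np
from scipy.ndimage import correlate

def sector_smooth(img, length=5):
    """Simplified stand-in for sector-oriented anisotropic filtering:
    average along the 45-degree direction whose neighbourhood is most
    homogeneous (lowest local variance), which tends to preserve edges."""
    img = img.astype(float)
    half = length // 2
    means, variances = [], []
    for dy, dx in [(0, 1), (1, 1), (1, 0), (1, -1)]:   # 4 axes = 8 sectors
        k = np.zeros((length, length))
        for t in range(-half, half + 1):
            k[half + t * dy, half + t * dx] = 1.0 / length
        m = correlate(img, k)                # directional mean
        v = correlate(img ** 2, k) - m ** 2  # directional variance
        means.append(m)
        variances.append(v)
    means, variances = np.stack(means), np.stack(variances)
    best = variances.argmin(axis=0)          # most homogeneous orientation
    return np.take_along_axis(means, best[None], axis=0)[0]
```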

41 citations


Proceedings ArticleDOI
23 Jun 1989
TL;DR: A direct form of both the FFT (fast Fourier transform) and the FWT (fast Walsh transform) as applied to a hexagonal lattice of data points is shown; hexagonal sampling of the image at a lower resolution retained the resolution required by the rest of the image software.
Abstract: The author describes the development of the underlying theory of processing hexagonally sampled digital images. He shows a direct form of both the FFT (fast Fourier transform) and the FWT (fast Walsh transform) as applied to a hexagonal lattice of data points. Advantages spring from a reduction in the number of data locations and a reduction in computational load per data point. The complete signal flow graph for a minimal hexagonal kernel for both the FFT and the FWT is shown. The derived transforms were implemented in software and compared to the standard 2-D FFT on standard images in the image processing laboratory. It was found that the hexagonal sampling of the image at a lower resolution retained the resolution required by the rest of the image software.

24 citations


Journal ArticleDOI
TL;DR: In this article, a nonlinear surface acoustic wave analysis on an anisotropic elastic half-space is adapted to include nonlinear electroelastic interaction, where only quadratically nonlinear terms are retained.

23 citations


Proceedings ArticleDOI
01 Nov 1989
TL;DR: An analysis of a recently proposed two-parameter piecewise-cubic convolution algorithm for image reconstruction indicates that the additional parameter does not improve the reconstruction fidelity: the optimal two-parameter convolution kernel is identical to the optimal kernel for the traditional one-parameter algorithm.
Abstract: This paper presents an analysis of a recently proposed two-parameter piecewise-cubic convolution algorithm for image reconstruction. The traditional cubic convolution algorithm is a one-parameter, interpolating function. With the second parameter, the algorithm can also be approximating. The analysis leads to a Taylor series expansion for the average square error due to sampling and reconstruction as a function of the two parameters. This analysis indicates that the additional parameter does not improve the reconstruction fidelity: the optimal two-parameter convolution kernel is identical to the optimal kernel for the traditional one-parameter algorithm. Two methods for constructing the optimal cubic kernel are also reviewed.
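
For reference, the one-parameter cubic convolution kernel with its optimal value a = -1/2 is easy to state and test; a minimal 1-D sketch (the variable names are ours):

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """One-parameter piecewise-cubic convolution kernel; a = -1/2 is the
    optimal choice referred to in the abstract."""
    x = np.abs(x)
    return np.where(
        x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
        np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0),
    )

def interp1d(samples, t, a=-0.5):
    """Reconstruct f(t) from unit-spaced samples by cubic convolution
    (four nearest samples contribute)."""
    base = int(np.floor(t))
    support = np.arange(base - 1, base + 3)
    idx = np.clip(support, 0, len(samples) - 1)
    return float(np.sum(samples[idx] * cubic_kernel(t - support, a)))

xs = np.arange(10, dtype=float)
f = np.sin(xs)                        # sample a smooth test function
print(interp1d(f, 4.3), np.sin(4.3))  # reconstruction is close to the truth
```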

20 citations


Book
03 Jan 1989
TL;DR: In this paper, the authors describe their experiences with a real-time multiprocessor operating system, called GEM (Generalized Executive for Multiprocessor applications), and with an extension to GEM, called CHAOS (Concurrent Hierarchical Adaptable Object System).
Abstract: We describe our experiences with a real-time multiprocessor operating system, called GEM (Generalized Executive for Multiprocessor applications) and with an extension to GEM, called CHAOS (Concurrent Hierarchical Adaptable Object System). CHAOS offers kernel-level primitives that allow high-performance, large-scale, real-time software to be programmed as a system of interacting objects. This significantly improves modularity, reconfigurability, and maintainability.

17 citations


Proceedings ArticleDOI
04 Jun 1989
TL;DR: This paper presents the dual form of the gray-scale representation of G. Matheron's filter-representation theorem for black-and-white images, which states that any morphological filter can be represented as a union of erosions by elements in the filter's kernel.
Abstract: One of the classic results of mathematical morphology is the filter-representation theorem of G. Matheron (1975) for black-and-white images. The theorem states that any morphological filter can be represented as a union of erosions by elements in the filter's kernel. In its dual form, it states that the erosion representation can be replaced by an intersection of dilations by elements of the dual filter's kernel. Here, the dual form of the gray-scale representation is derived in terms of a minimum of dilations by elements in the dual filter's kernel.
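
The gray-scale duality mirrors the elementary identity erosion(f, b) = -dilation(-f, b_reflected), where b_reflected is the reflected structuring element; a quick numerical check for flat structuring elements (using SciPy's gray-scale morphology as an illustration, not the paper's formalism):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

rng = np.random.default_rng(0)
f = rng.integers(0, 256, (64, 64)).astype(float)

b = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=bool)     # asymmetric flat structuring element
b_reflected = b[::-1, ::-1]

# Gray-scale duality: erosion by b equals the negated dilation of the
# negated image by the reflected element.
lhs = grey_erosion(f, footprint=b)
rhs = -grey_dilation(-f, footprint=b_reflected)
print(np.max(np.abs(lhs - rhs)))          # expected 0
```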

14 citations


Journal Article
TL;DR: A method for three-dimensional calculation of dose using FFTs is presented, and the FFT technique is shown to be significantly faster than standard convolution for medium to large TERMA and dose spread array sizes.
Abstract: Currently used radiotherapy treatment planning algorithms based on effective path length or scatter function methods do not model electron ranging from photon interaction sites. The superposition (or convolution) technique does model this effect, which is especially important at higher (linear accelerator) energies since the electron range is significant. Another advantage of this method is that it is conceptually simple and models the physical processes directly, rather than using empirically derived methods. A major disadvantage of superposition lies in the large amount of computer time required to generate a plan, especially in three dimensions. To help solve this problem, superposition using an invariant dose spread array (kernel) can be achieved by performing a convolution in Fourier space using fast Fourier transforms (FFTs). A method for three-dimensional calculation of dose using FFTs is presented. Dose spread arrays are calculated using the EGS Monte Carlo code and convolved with the TERMA (total energy released per unit mass). In both cases a 10 MV nominal beam energy is modelled by a 10-component spectrum, which is compared to the result obtained using monochromatic energy only (3.0 MeV at the surface). The FFT technique is shown to be significantly faster than standard convolution for medium to large TERMA and dose spread array sizes. The method is shown to be highly accurate for small fields in homogeneous media. For larger fields the central axis depth dose is accurate, but the profile shape in the penumbral region becomes slightly distorted. This is because photons incident near the beam edges are not parallel to the Cartesian coordinate system used as the convolution framework. However, this effect is sufficiently small to indicate that the convolution method is suitable for use in routine treatment planning.
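
A toy version of the speed trick (hypothetical array shapes and kernel, not the paper's beam data): zero-pad the TERMA and the invariant dose-spread array to the full linear-convolution size, multiply in Fourier space, and compare against direct convolution.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
terma = rng.random((32, 32, 32))                 # stand-in TERMA grid
z, y, x = np.mgrid[-4:5, -4:5, -4:5]
kernel = np.exp(-np.sqrt(x**2 + y**2 + z**2))    # stand-in dose-spread array
kernel /= kernel.sum()

# FFT superposition: zero-pad to the full linear-convolution size so the
# circular convolution implied by the DFT does not wrap dose around edges.
shape = [t + k - 1 for t, k in zip(terma.shape, kernel.shape)]
dose_fft = np.fft.irfftn(
    np.fft.rfftn(terma, shape) * np.fft.rfftn(kernel, shape), shape)
dose_fft = dose_fft[4:36, 4:36, 4:36]            # crop back to 'same' size

dose_direct = convolve(terma, kernel, mode='constant')
print(np.max(np.abs(dose_fft - dose_direct)))    # agrees to rounding error
```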

13 citations


Journal ArticleDOI
TL;DR: In this article, the numerical solution by product integration of weakly singular Fredholm integral equations of the second kind with symmetric difference kernels was discussed, where the product integration method uses a piecewise polynomial, in general, at most, continuous at the knots.
Abstract: This paper discusses the numerical solution by product integration of weakly singular Fredholm integral equations of the second kind with symmetric difference kernels. The product integration method uses a piecewise polynomial that is, in general, at most continuous at the knots. A main result of the paper is to show that, owing to the difference kernel, a highly patterned linear system of equations arises if the knots are equally spaced. Specifically, the order-N coefficient matrix is block-Toeplitz (or a generalization) and centrosymmetric. An algorithm to solve this linear system in O(N^2) operations is presented.
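
The structured solve can be illustrated with SciPy's Levinson-recursion Toeplitz solver, which also runs in O(N^2); the weakly singular difference kernel below is an assumed stand-in, and the paper's own algorithm additionally exploits the block and centrosymmetric structure:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

N = 400
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

# Assumed weakly singular symmetric difference kernel k(|s - t|) ~ -log|s - t|,
# regularised at the diagonal and scaled so the second-kind operator (I - K)
# is well conditioned; 'col' is the first column of the Toeplitz matrix K.
col = -0.2 * np.log(np.abs(t) + 0.5 * h) * h

c = -col.copy()
c[0] += 1.0                              # first column of (I - K), still Toeplitz
f = np.sin(np.pi * t)

u_fast = solve_toeplitz(c, f)            # Levinson recursion, O(N^2)
u_ref = np.linalg.solve(np.eye(N) - toeplitz(col), f)   # dense O(N^3) reference
print(np.max(np.abs(u_fast - u_ref)))    # agrees to rounding error
```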

Journal ArticleDOI
TL;DR: In this article, an original Lagrangian method to compute "Shape Hessians" or "Shape directional second derivatives" is presented, based on the extension of earlier results to semiconvex cost functions.

Proceedings ArticleDOI
30 Oct 1989
TL;DR: The authors give convolution theorems that relate the spherical transform to convolution, sampling theorem that allow the exact computation of the transform for band-limited functions, and algorithms with asymptotically improved running time for the exact computations of the harmonic expansion.
Abstract: The problem of computing the convolution of two functions on the sphere by means of a spherical transform is considered. Such convolutions are applicable to surface recognition and the location of both rotated and translated patterns in an image. The authors give convolution theorems that relate the spherical transform to convolution, sampling theorems that allow the exact computation of the transform for band-limited functions, and algorithms with asymptotically improved running time for the exact computation of the harmonic expansion. The net result is an O(n^1.5 (log n)^2) algorithm for the exact computation of the convolution of two band-limited functions sampled at n points in accordance with the sampling theorem. The techniques developed are applicable to computing other transforms, such as the Laguerre, Hermite, and Hankel transforms.

Patent
26 Jul 1989
TL;DR: In this article, an electrostatic separating machine is presented for separating the kernels and peel of pine seeds, hazelnuts, apricot kernels, walnuts, and other particles after the shell is broken.
Abstract: The utility model provides an electrostatic machine for separating kernel from peel, belonging to the field of applied electrostatics. The machine is used to separate the kernels and peel of pine seeds, hazelnuts, apricot kernels, walnuts, and other particles after the shell is broken. The two sides of the separating box are provided with positive and negative electric fields of different intensity, forming a vertical channel in the middle. When the material passes through the channel, the differing dielectric properties of the husk and the kernel produce different polarizations, so each is deflected differently toward the stronger field and kernel and peel can be separated with high precision and efficiency. The separation is clean-cut because it is unaffected by mechanical force or by the size, shape, and specific gravity of the particles, and the kernels do not break further during the process. The utility model is especially suitable for pine seeds and other kernels that are easily broken by pneumatic separation, screening, centrifugation, and similar methods.

01 Jan 1989
TL;DR: In this article, a multiresolution technique is proposed to detect roads in Landsat images, extracting information about roads at several levels of detail; its major advantages include filling road gaps, deleting short road-like segments, and enhancing long, low-contrast roads.
Abstract: Automatic interpretation of Landsat images is valuable for various applications. Despite the fact that it has attracted attention for decades from researchers in image processing, artificial intelligence, and pattern recognition, limited progress has been achieved and numerous problems have yet to be solved before the entire image interpretation process can be automated. Image registration of Landsat Thematic Mapper (TM) scenes with translational and rotational differences is studied. These images could also differ in their acquisition dates. We concentrate on two major steps of image registration: control point selection and control point matching. In control point selection, we first define the properties that a good control point should satisfy, and then suggest several methods for extracting them from the input image. In control point matching, we improve a relaxation algorithm previously proposed in the literature by reducing its time complexity from O(n^4) to O(n^3), where n is the number of control points. Experimental results on Landsat 4 and 5 images show that the proposed method produces results comparable to those obtained by an experienced photointerpreter. We propose a multiresolution technique to detect roads in Landsat images. Such a technique permits us to extract information about roads at several levels of detail. Major advantages of this method are: filling road gaps, deleting short road-like segments, and enhancing long, low-contrast roads. Several Landsat TM images were used in the experiments to evaluate the road detection algorithm developed here. Landsat images are quite complex and require a large amount of auxiliary knowledge in their interpretation. We attack this problem by first constructing a hierarchical structure for the major land cover types in the study area. Then we develop an algorithm to automate the process of spectral rule generation for interpreting the image. Finally, a spatial clustering segmentation technique, kernel image information, and the spectral and spatial rules are combined to provide a more detailed interpretation of the image. Despite the progress that has been achieved in this thesis, it is unlikely that fully automatic Landsat image interpretation can be realized in the near future. Continuing research in knowledge-based image segmentation is needed to make Landsat image interpretation more successful.

Journal ArticleDOI
TL;DR: In this article, an identification procedure based on Volterra-Wiener functional expansion representations of a class of nonlinear systems often encountered in practice is presented, which can be used for modelling and validation studies of this class of systems from experimental input-output data.

01 Jan 1989
TL;DR: This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error, and describes an original method for accurately characterizing the image acquisition device.
Abstract: The goal of image restoration is to remove degradations that are introduced during image acquisition and display. Although image restoration is a difficult task that requires considerable computation, in many applications the processing must be performed significantly faster than is possible with traditional algorithms implemented on conventional serial architectures. As demonstrated in this dissertation, digital image restoration can be efficiently implemented by convolving an image with a small kernel. Small-kernel convolution is a local operation that requires relatively little processing and can be easily implemented in parallel. A small-kernel technique must compromise effectiveness for efficiency, but if the kernel values are well-chosen, small-kernel restoration can be very effective. This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error. The derivation of the mean-square-optimal small kernel parallels that of the Wiener filter, but accounts for explicit spatial constraints on the kernel. This development is thorough and rigorous, but conceptually straightforward: the mean-square-optimal kernel is conditioned only on a comprehensive end-to-end model of the imaging process and spatial constraints on the kernel. The end-to-end digital imaging system model accounts for the scene, acquisition blur, sampling, noise, and display reconstruction. The determination of kernel values is directly conditioned on the specific size and shape of the kernel. Experiments presented in this dissertation demonstrate that small-kernel image restoration requires significantly less computation than a state-of-the-art implementation of the Wiener filter yet the optimal small-kernel yields comparable restored images. The mean-square-optimal small-kernel algorithm and most other image restoration algorithms require a characterization of the image acquisition device (i.e., an estimate of the device's point spread function or optical transfer function). This dissertation describes an original method for accurately determining this characterization. The method extends the traditional knife-edge technique to explicitly deal with fundamental sampled system considerations of aliasing and sample/scene phase. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
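
A sketch of the normal-equations route under an assumed end-to-end frequency-domain model (the blur OTF, scene spectrum, and noise level below are illustrative assumptions; the dissertation's model is richer): the mean-square-optimal kernel on a 5x5 support solves a small linear system built from the observation autocorrelation and the scene-observation cross-correlation.

```python
import numpy as np

N = 64                                     # working grid / frequency resolution
fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]

# --- hypothetical end-to-end model (all three items are assumptions) ---
H  = np.exp(-40.0 * (fx**2 + fy**2))       # acquisition blur (Gaussian OTF)
Sf = 1.0 / (0.01 + fx**2 + fy**2)          # scene power spectrum
Sn = 0.05 * np.ones((N, N))                # white noise power spectrum

# Correlations needed by the normal equations (inverse DFT of the spectra).
Rgg = np.fft.ifft2(np.abs(H)**2 * Sf + Sn).real   # observed-image autocorrelation
Rfg = np.fft.ifft2(np.conj(H) * Sf).real          # scene/observation cross-corr.

# Spatial constraint: kernel supported on a 5x5 window.
half = 2
offsets = [(dy, dx) for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1)]

# Solve sum_t k(t) Rgg(s - t) = Rfg(s) for every s in the support.
A = np.array([[Rgg[(sy - ty) % N, (sx - tx) % N] for (ty, tx) in offsets]
              for (sy, sx) in offsets])
b = np.array([Rfg[sy % N, sx % N] for (sy, sx) in offsets])
k = np.linalg.solve(A, b).reshape(2 * half + 1, 2 * half + 1)
print(k.round(3))     # mean-square-optimal 5x5 restoration kernel
```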

Journal ArticleDOI
G.E. Sotak, Kim L. Boyer, J.S. Chen, A. Huertas, G. Medioni
TL;DR: Although the authors presented a means of decomposing the Laplacian-of-Gaussian (LoG) kernel into the product of a Gaussian and a (smaller) LoG mask, it is contended that the exposition suffers from some inconsistencies and minor errors.
Abstract: In a recent paper by J.S. Chen et al. (ibid., vol. PAMI-9, p. 584-90, July 1987) the authors presented a means of decomposing the Laplacian-of-Gaussian (LoG) kernel into the product of a Gaussian and a (smaller) LoG mask. They then proceeded to develop a fast algorithm for convolution which exploits the spatial frequency properties of these operators to allow the image to be decimated (subsampled). Although this approach is both novel and interesting, it is contended that the exposition suffers from some inconsistencies and minor errors. The commenters clarify matters for those who wish to implement this technique. The original authors acknowledge two of the three points raised, and provide further clarification of the other one, namely the claim that the masks (Gaussian and LoG) are too small.
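
The decomposition rests on the fact that Gaussians convolve to a Gaussian with summed variances, so a LoG of space constant σ equals a Gaussian of constant σ_g convolved with a smaller LoG of constant σ_1 whenever σ² = σ_g² + σ_1². This is easy to check numerically (interior pixels agree; boundaries differ slightly because of the filters' edge handling):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

rng = np.random.default_rng(2)
img = rng.random((128, 128))

# LoG(sigma) = Gaussian(sigma_g) * LoG(sigma_1), with
# sigma^2 = sigma_g^2 + sigma_1^2 (variances add under convolution).
sigma_g, sigma_1 = 2.0, 1.5
sigma = np.hypot(sigma_g, sigma_1)

a = gaussian_laplace(gaussian_filter(img, sigma_g), sigma_1)
b = gaussian_laplace(img, sigma)
print(np.max(np.abs(a - b)))   # small; residual comes from filter truncation
```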

Journal ArticleDOI
TL;DR: In this paper, the numerical solvability of nonlinear integral equations of Hammerstein type on the half line is considered, and the convergence analysis of the approximate solutions is studied in two parts, first with a non-convolution kernel and then with a convolution kernel.
Abstract: In this paper, we consider the numerical solvability of nonlinear integral equations of Hammerstein type on the half line. We study the convergence analysis of the approximate solutions of the above equations in two parts, first with a non-convolution kernel and then with a convolution kernel. We make use of recently developed linear theory due to Anselone and Sloan [2,3,4] and Chandler and Graham [9] for this purpose.
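
A minimal Nyström/successive-approximation sketch on a finite interval conveys the setup (the paper works on the half line, and the kernel and nonlinearity below are assumptions chosen to make the iteration a contraction):

```python
import numpy as np

# Nystrom / Picard-iteration sketch for a Hammerstein equation
#   u(x) = f(x) + integral_0^1 k(x - t) * psi(t, u(t)) dt
# with an assumed convolution (difference) kernel and nonlinearity.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])
w[0] = w[-1] = 0.5 * (x[1] - x[0])           # trapezoid quadrature weights

k = lambda s: 0.25 * np.exp(-np.abs(s))      # convolution kernel (||K|| < 1)
psi = lambda t, u: np.sin(u)                 # Hammerstein nonlinearity
f = np.cos(2 * np.pi * x)

K = k(x[:, None] - x[None, :]) * w           # quadrature of the integral term
u = f.copy()
for _ in range(50):                          # fixed-point iteration converges
    u_new = f + K @ psi(x, u)                # because the map is a contraction
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print(u[:5])
```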

Proceedings ArticleDOI
20 Sep 1989
TL;DR: A discussion is presented of the solutions proposed by the Nomade project: a language-independent kernel and a specialized tool, mainly dedicated to interface control, which allows tools to work safely in a monoversioned context.
Abstract: A discussion is presented of the solutions proposed by the Nomade project: a language-independent kernel and a specialized tool. The kernel manages coarse-grained objects (from files to abstractions such as modules or subsystems), while the tool manages fine-grained objects (basically the external objects of modules: procedures, types, etc.). Concepts and properties of variants and revisions are defined. The kernel, enforcing these properties, allows tools to work safely in a monoversioned context. One of these tools, mainly dedicated to interface control, is then presented.

Book ChapterDOI
K. Oda, N. Shimizu, N. Inoue, Y. Iba
01 Mar 1989
TL;DR: This OS, called OS/CT, is based on the interface specification of the CTRON basic OS, which covers the kernel and input/output control.
Abstract: We have been developing a CTRON basic OS on a lap-top workstation. This OS (called OS/CT) is based on the interface specification of the CTRON basic OS (kernel and input/output control).

01 Nov 1989
TL;DR: In this article, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system.
Abstract: Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.

Journal ArticleDOI
TL;DR: In this paper, an optimised algorithm for real-time implementation of the discrete Wigner distribution (DWD) is presented, which makes use of the symmetry properties of the Wigner kernel and computes the analytic signal recursively in the time domain.
Abstract: In the letter an optimised algorithm for real-time implementation of the discrete Wigner distribution (DWD) is presented. The algorithm makes use of the symmetry properties of the Wigner kernel and computes the analytic signal recursively in the time domain. Its computational complexity is found to be very small compared with the direct implementation of the DWD when real-time processing is needed.
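
The symmetry being exploited is that the instantaneous autocorrelation r_n[m] = z[n+m] z*[n-m] is Hermitian in m, so its DFT is real-valued; a direct (non-recursive) sketch of a DWD built on that property:

```python
import numpy as np
from scipy.signal import hilbert

def dwd(x, L=64):
    """Discrete Wigner distribution of a real signal (illustrative sketch,
    not the letter's optimised recursive algorithm). Uses the analytic
    signal and the Hermitian symmetry of r_n[m] = z[n+m] z*[n-m]."""
    z = hilbert(x)                          # analytic signal
    N = len(z)
    W = np.zeros((N, 2 * L))
    for n in range(N):
        r = np.zeros(2 * L, dtype=complex)
        for m in range(-L + 1, L):
            if 0 <= n + m < N and 0 <= n - m < N:
                r[m % (2 * L)] = z[n + m] * np.conj(z[n - m])
        W[n] = 2 * np.fft.fft(r).real       # imaginary part vanishes by symmetry
    return W

n = np.arange(256)
x = np.cos(2 * np.pi * (0.05 + 0.15 * n / 256) * n)   # linear chirp
W = dwd(x)     # energy concentrates along the instantaneous frequency
```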

Book ChapterDOI
01 Mar 1989
TL;DR: The TRON project has created a subset of the ITRON specification, designated as the μITRON specification, to accommodate single-chip microcomputer devices incorporating 8- and 16-bit microprocessors and supporting modules on a single chip with an application-specific architecture.
Abstract: The ITRON specification was originally designed for general-purpose 16-bit microprocessors. Recent years, however, have seen the appearance of ASICs incorporating 8- and 16-bit microprocessors and supporting modules on a single chip with an application-specific architecture. To accommodate these single-chip microcomputer devices, the TRON project has created a subset of the ITRON specification, designated as the μITRON specification.

Proceedings ArticleDOI
01 Nov 1989
TL;DR: In this paper, the authors present an architecture for two VLSI ICs, an 8-bit and 12-bit version, which execute real-time 3x3 kernel image convolutions at rates exceeding 10 ms per 512x512 pixel frame (at a 30 MHz external clock rate).
Abstract: This paper presents a novel architecture for two VLSI ICs, an 8-bit and a 12-bit version, which execute real-time 3x3 kernel image convolutions in under 10 ms per 512x512 pixel frame (at a 30 MHz external clock rate). The ICs are capable of performing "on-the-fly" convolutions of images without any need for external input image buffers. Both symmetric and asymmetric coefficient kernels are supported, with coefficient precision up to 12 bits. Nine on-chip multiplier-accumulators maintain double-precision accuracy for maximum precision of the results and minimum roundoff noise. In addition, an on-chip ALU can be switched into the pixel datapath to perform simultaneous pixel-point operations on the incoming data. Thus, operations such as thresholding, inversion, shifts, and double frame arithmetic can be performed on the pixels with no extra speed penalty. Flexible internal datapaths of the processors provide easy means for cascading several devices if larger image arrays need to be processed. Moreover, larger convolution kernels, such as 6x6, can easily be supported with no speed penalty by employing two or more convolvers. On-chip delay buffers can be programmed to any desired raster line width up to 1024 pixels. The delay buffers may also be bypassed when direct "sum-of-products" operation of the multipliers is required, such as when external frame buffer address sequencing is desired. These features make the convolvers suitable for applications such as affine and bilinear interpolation, one-dimensional convolution (FIR filtering), and matrix operations. Several examples of applications illustrating stand-alone and cascade-mode operation of the ICs are discussed.
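
In software terms, the "on-the-fly" operation amounts to a raster-order convolver that holds only two delayed lines rather than a full frame buffer; a small behavioral sketch of that idea (ours, not the chip's architecture):

```python
import numpy as np

def stream_convolve3x3(pixels, width, kernel):
    """Raster-order 3x3 convolution using two line buffers, mimicking an
    'on-the-fly' hardware convolver: no frame buffer, only two delayed
    lines plus the line being received. Computes a correlation-style
    sum of products (flip the kernel for true convolution)."""
    line0 = np.zeros(width)   # row r-2
    line1 = np.zeros(width)   # row r-1
    cur = np.zeros(width)     # row r, filling as pixels arrive
    out, row = [], 0
    for i, p in enumerate(pixels):
        c = i % width
        cur[c] = p
        # once pixel (r, c) arrives, the window centred at (r-1, c-1) is complete
        if row >= 2 and c >= 2:
            win = np.array([line0[c-2:c+1], line1[c-2:c+1], cur[c-2:c+1]])
            out.append(float((win * kernel).sum()))
        if c == width - 1:                   # end of line: rotate the buffers
            line0, line1, cur = line1, cur, line0
            row += 1
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
kern = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
print(stream_convolve3x3(img.ravel(), 5, kern))   # interior outputs, raster order
```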

Proceedings ArticleDOI
20 Sep 1989
TL;DR: It is proved that encoding a decimal code takes constant time, that the worst-case time complexity of compressing the decimal codes is O(n + m^2), and that the size of the data structure is proportional to m.
Abstract: A decimal notation satisfies many simple mathematical properties and is a useful tool in the analysis of trees. A practical method is presented that compresses the decimal codes while maintaining fast determination of relations (e.g. ancestor, descendant, brother, etc.). A special node, called a kernel node, including many common subcodes of the other codes, is defined, and a compact data structure is presented using the kernel nodes. For the case where n (m) is the number of total (kernel) nodes, it is proved that encoding a decimal code takes constant time, that the worst-case time complexity of compressing the decimal codes is O(n + m^2), and that the size of the data structure is proportional to m. From experimental results on some hierarchical semantic primitives for natural language processing, it is shown that the ratio m/n is extremely small, ranging from 0.047 to 0.13.
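
For background, decimal (Dewey-style) codes make tree relations cheap to test, which is the property the compression must preserve; a tiny illustration of the relation tests (the compression scheme itself is not reproduced here):

```python
# Decimal (Dewey-style) tree codes: relations reduce to prefix and
# length comparisons on the code sequences.
def is_ancestor(a, b):
    """a and b are tuples such as (1, 3, 2) for node 1.3.2."""
    return len(a) < len(b) and b[:len(a)] == a

def are_brothers(a, b):
    """Brothers share the same parent code and differ in the last digit."""
    return len(a) == len(b) and a[:-1] == b[:-1] and a != b

assert is_ancestor((1, 3), (1, 3, 2, 4))
assert are_brothers((1, 3, 2), (1, 3, 5))
```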

Proceedings ArticleDOI
23 May 1989
TL;DR: The proposed interpolation formula differs from standard techniques such as cubic spline convolution in that the image samples are modified by a discrete convolution operator prior to the reconstruction summation.
Abstract: A novel method for the interpolation of sampled images is presented. It makes use of a recently discovered formula for the least-squares projection of an arbitrary function onto a repetitive basis. The proposed interpolation formula differs from standard techniques such as cubic spline convolution in that the image samples are modified by a discrete convolution operator prior to the reconstruction summation. The visual performance of the method is shown to be superior to that of cubic spline convolution, which is the best current algorithm. The main attraction of the method is that the algorithm is automatically tailored to the spatial resolution of the image sensor. The exact computational cost of the method, in terms of reconstruction sum size, depends on the sensor PSF (point spread function) but is likely to be only slightly greater than that for spline convolution. All the results given hold in a general N-dimensional space.
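
The same prefilter-then-reconstruct structure appears in modern B-spline interpolation, which can stand in as an illustration (this is not the paper's formula): the samples are first transformed by a discrete filter, after which the reconstruction summation interpolates them.

```python
import numpy as np
from scipy.ndimage import map_coordinates, spline_filter

rng = np.random.default_rng(3)
img = rng.random((64, 64))
coords = np.mgrid[10:20:0.5, 10:20:0.5]     # sub-pixel sample positions

# Two-step form: a discrete prefilter modifies the samples, then the
# reconstruction summation (cubic B-spline basis) interpolates them.
coeffs = spline_filter(img, order=3)
a = map_coordinates(coeffs, coords, order=3, prefilter=False, mode='mirror')

# Equivalent one-call form; prefilter=True applies the same discrete operator.
b = map_coordinates(img, coords, order=3, prefilter=True, mode='mirror')
print(np.max(np.abs(a - b)))                # ~0
```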

Proceedings ArticleDOI
08 May 1989
TL;DR: Using the recursive algorithm for a GQRNS CNTT and an overlap of one data point, an extremely efficient algorithm is obtained that is far superior to the normal FFT-based convolution algorithms.
Abstract: Generalized quadratic residue number systems (GQRNS) offer the possibility of real-time convolution using highly parallel real integer arithmetic. Using carefully chosen moduli, an overlap-and-save or overlap-and-add algorithm can be implemented that is far superior to the normal FFT-based convolution algorithms. Using the recursive algorithm for a GQRNS CNTT and an overlap of one data point, an extremely efficient algorithm is obtained. A 216-point GQRNS convolution can be implemented with less hardware and higher speed than a 128-point FFT-based convolution. Similarly, a 1296-point GQRNS convolution can be implemented with less hardware and higher speed than a 1024-point FFT-based convolution.
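
The underlying idea, exact integer convolution through a number-theoretic transform, can be shown with a plain NTT over GF(257) (an illustration only; the paper's generalized quadratic residue system and choice of moduli are not reproduced here):

```python
# Number-theoretic transform (NTT) sketch: exact integer cyclic convolution
# with no floating point, the same idea GQRNS exploits.
P = 257                       # Fermat prime 2^8 + 1
N = 8
g = pow(3, (P - 1) // N, P)   # element of order N (3 is primitive mod 257)

def ntt(a, root):
    """Naive O(N^2) transform; enough to demonstrate exactness."""
    return [sum(a[n] * pow(root, k * n, P) for n in range(N)) % P
            for k in range(N)]

def cyclic_convolve(a, b):
    A, B = ntt(a, g), ntt(b, g)
    C = [(x * y) % P for x, y in zip(A, B)]
    c = ntt(C, pow(g, P - 2, P))         # inverse transform uses root^-1
    inv_n = pow(N, P - 2, P)             # modular inverse via Fermat's theorem
    return [(v * inv_n) % P for v in c]

a = [1, 2, 3, 4, 0, 0, 0, 0]             # zero-padded so cyclic = linear
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(cyclic_convolve(a, b))             # [5, 16, 34, 52, 45, 28, 0, 0], exact
```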

Proceedings ArticleDOI
03 Jan 1989
TL;DR: Parallel algorithms for programming low-level vision mechanisms on the JPL-Caltech hypercube are reported, and work in progress includes the application of a Hopfield neural net approach to region finding.
Abstract: Parallel algorithms for programming low-level vision mechanisms on the JPL-Caltech hypercube are reported. These concern principally edge and region finding; 256x256 8-bit images were used. We discuss the problem of programming a hypercube computer, and the Caltech approach to load balancing. We then discuss the distribution of images over the hypercube and the I/O problem for images. In edge finding, we programmed convolution using a separable-kernel computational approach. This was tested with 5x5 and 32x32 masks. In region finding, we developed two different parallel histogram techniques. The first finds a global histogram for the image by a completely parallel technique. This method, which was developed from the Fox-Furmanski scalar product method, allows each histogram bucket to be computed by a separate processor, each processor regarding the hypercube as a different tree, and all buckets being computed in parallel by a complete interleaving of all communications required. Similarly, the global histogram can then be distributed over the hypercube, so that all processors have the entire global histogram, by a completely parallel technique. The second histogramming method finds a spatially local histogram within each processor and then connects locally found regions together. Work in progress includes the application of a Hopfield neural net approach to region finding.
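
The separable-kernel approach mentioned for edge finding factors a 2-D convolution into a row pass and a column pass, which is also what makes the work easy to distribute; a small single-node check (our example kernel, not the paper's masks):

```python
import numpy as np
from scipy.ndimage import correlate, correlate1d

rng = np.random.default_rng(4)
img = rng.random((256, 256))

# Separable kernel: a 5x5 mask that is the outer product of two 1-D masks,
# so the 2-D filtering factors into a row pass and a column pass
# (25 multiplies per pixel become 10).
v = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
k2d = np.outer(v, v)

two_pass = correlate1d(correlate1d(img, v, axis=0), v, axis=1)
direct = correlate(img, k2d)
print(np.max(np.abs(two_pass - direct)))    # agrees to rounding error
```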