
Showing papers on "Kernel (image processing) published in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors generalize the theory to the case of space-varying kernels and show that the CPU cost required for this new extension of the method is almost the same as for fitting a constant kernel solution.
Abstract: Image subtraction is a method by which one image is matched against another by using a convolution kernel, so that they can be differenced to detect and measure variable objects. It has been demonstrated that constant optimal-kernel solutions can be derived over small sub-areas of dense stellar fields. Here we generalize the theory to the case of space-varying kernels. In particular, it is shown that the CPU cost required for this new extension of the method is almost the same as for fitting a constant kernel solution. It is also shown that constant flux scaling between the images (constant kernel integral) can be imposed in a simple way. The method is demonstrated with a series of Monte Carlo images. Differential PSF variations and differential rotation between the images are simulated. It is shown that the new method is able to achieve optimal results even in these difficult cases, thereby automatically correcting for these common instrumental problems. It is also demonstrated that the method does not suffer from problems associated with under-sampling of the images. Finally, the method is applied to images taken by the OGLE II collaboration. It is shown that, in comparison to the constant-kernel method, much larger sub-areas of the images can be used for the fit, while still maintaining the same accuracy in the subtracted image. This result is especially important in the case of variables located in low-density fields, like the Huchra lens. Many other useful applications of the method are possible for major astrophysical problems: supernova searches and Cepheid surveys in other galaxies, to mention but two. Many more applications will certainly show up, since variability searches are a major issue in astronomy.

947 citations
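As a concrete illustration of the matching step, here is a minimal constant-kernel sketch in Python (numpy and scipy assumed; array names are hypothetical, and this is not the authors' code): the kernel that best maps the reference image onto the target is found by linear least squares, and the difference image is the target minus the kernel-matched reference. The paper's contribution is the generalization in which the kernel coefficients vary across the image at essentially the same CPU cost.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.ndimage import correlate

def fit_constant_kernel(ref, img, k=5):
    """Least-squares fit of a k x k matching kernel so that ref, filtered with it, resembles img."""
    # Each interior pixel of img is modelled as a weighted sum of the k*k
    # reference pixels under the kernel footprint (a linear system in the weights).
    patches = sliding_window_view(ref, (k, k)).reshape(-1, k * k)
    c = k // 2
    target = img[c:-c, c:-c].ravel()
    weights, *_ = np.linalg.lstsq(patches, target, rcond=None)
    return weights.reshape(k, k)

def difference_image(ref, img, kernel):
    """Match the reference to the target with the fitted kernel, then subtract."""
    matched = correlate(ref, kernel, mode="nearest")   # correlation, matching the fit above
    return img - matched

Imposing a constant kernel integral (constant flux scaling between the images) would amount to adding a single linear constraint on the sum of the fitted weights to this least-squares problem.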


Journal ArticleDOI
TL;DR: In this article, the authors used body wave ray theory in conjunction with the Born approximation to compute 3D Frechet kernels for finite-frequency seismic traveltimes, measured by cross-correlation of a broad-band waveform with a spherical earth synthetic seismogram.
Abstract: We use body wave ray theory in conjunction with the Born approximation to compute 3-D Frechet kernels for finite-frequency seismic traveltimes, measured by cross-correlation of a broad-band waveform with a spherical earth synthetic seismogram. Destructive interference among adjacent frequencies in the broad-band pulse renders a cross-correlation traveltime measurement sensitive only to the wave speed in a hollow banana-shaped region surrounding the unperturbed geometrical ray. The Frechet kernel expressing this sensitivity is written as a double sum over all forward-propagating body waves from the source and backward-propagating body waves from the receiver to every single scatterer in the vicinity of this central ray. The kernel for a differential traveltime, measured by cross-correlation of two phases at the same receiver, is simply the difference of the respective single-phase kernels. In the paraxial approximation, an absolute or differential traveltime kernel can be computed extremely economically by implementing a single kinematic and dynamic ray trace along each source-to-receiver ray.

656 citations
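In the notation of standard finite-frequency tomography (the symbols below are mine, not taken from the paper), the statements in the abstract amount to a traveltime anomaly written as a volume integral of a 3-D Frechet kernel against the fractional wave-speed perturbation, with the differential-time kernel obtained by simple subtraction:

\delta T \;=\; \iiint K(\mathbf{x})\,\frac{\delta c(\mathbf{x})}{c(\mathbf{x})}\,\mathrm{d}^{3}\mathbf{x},
\qquad
K_{A-B}(\mathbf{x}) \;=\; K_{A}(\mathbf{x}) - K_{B}(\mathbf{x}),

where delta T is the cross-correlation traveltime shift and the kernel K describes the hollow banana-shaped sensitivity region around the geometrical ray mentioned above.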


Proceedings ArticleDOI
10 Sep 2000
TL;DR: This work investigates a generalization of PCA, kernel principal component analysis (kernel PCA), for learning low dimensional representations in the context of face recognition and shows that kernel PCA outperforms the eigenface method in face recognition.
Abstract: Eigenface or principal component analysis (PCA) methods have demonstrated their success in face recognition, detection, and tracking. The representation in PCA is based on the second order statistics of the image set, and does not address higher order statistical dependencies such as the relationships among three or more pixels. Higher order statistics (HOS) have been used as a more informative low dimensional representation than PCA for face and vehicle detection. We investigate a generalization of PCA, kernel principal component analysis (kernel PCA), for learning low dimensional representations in the context of face recognition. In contrast to HOS, kernel PCA computes the higher order statistics without the combinatorial explosion of time and memory complexity. While PCA aims to find a second order correlation of patterns, kernel PCA provides a replacement which takes into account higher order correlations. We compare the recognition results using kernel methods with eigenface methods on two benchmarks. Empirical results show that kernel PCA outperforms the eigenface method in face recognition.

269 citations
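A compact sketch of the kind of comparison reported, assuming scikit-learn in Python (this is not the authors' code; the array names, subspace dimension, and kernel choice are placeholders): project faces with either ordinary PCA (eigenfaces) or kernel PCA and classify by nearest neighbour in the projected space.

import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X_train / X_test: face images flattened to vectors; y_train / y_test: identities.
# (Hypothetical arrays standing in for the benchmark sets used in the paper.)

def recognition_accuracy(projector, X_train, y_train, X_test, y_test):
    """Project faces into the (kernel) principal subspace, classify by nearest neighbour."""
    model = make_pipeline(projector, KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

eigenface = PCA(n_components=40)                            # second-order statistics only
kpca = KernelPCA(n_components=40, kernel="poly", degree=3)  # implicit higher-order statistics

# acc_pca  = recognition_accuracy(eigenface, X_train, y_train, X_test, y_test)
# acc_kpca = recognition_accuracy(kpca,      X_train, y_train, X_test, y_test)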


Journal ArticleDOI
TL;DR: This work focuses chiefly on some aspects of practical implementation and on numerical examples, for which the approximation time was found to grow almost linearly in the matrix size.

Abstract: The mosaic-skeleton method grew out of a simple observation that rather large blocks in very large matrices coming from integral formulations can be approximated accurately by a sum of just a few rank-one matrices (skeletons). These blocks typically correspond to regions where the kernel is smooth enough, or at least where it can be approximated by a short sum of separable functions (functional skeletons). Since the effect of these approximations is like that of having small-rank matrices, it is natural to speak of the mosaic ranks of a matrix, which turn out to be pretty small for many nonsingular matrices. In the first stage, the method builds up an appropriate mosaic partitioning using the concept of a tree of clusters and some extra information related to the mesh rather than the matrix entries. In the second stage, it approximates every allowed block by skeletons using the entries of a rather small cross chosen by an adaptive procedure. We focus chiefly on some aspects of practical implementation and on numerical examples, for which the approximation time was found to grow almost linearly in the matrix size.

247 citations
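A minimal numerical illustration of the observation the method is built on (Python with numpy assumed; the rank-revealing step here is a truncated SVD rather than the paper's adaptive cross procedure): a block generated by a smooth kernel between two well-separated point clusters is reproduced to high accuracy by a handful of skeletons.

import numpy as np

# Two well-separated clusters of points; the block A[i, j] = 1 / |x_i - y_j|
# comes from a smooth (asymptotically separable) kernel, so its numerical rank is tiny.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 3))        # "source" points
y = rng.uniform(9.0, 10.0, size=(200, 3))       # "target" points, far from the sources
A = 1.0 / np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)

# Rank-r skeleton approximation (here via truncated SVD for simplicity; the paper
# builds it from a small, adaptively chosen cross of rows and columns instead).
r = 5
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(f"rank-{r} relative error: {rel_err:.2e}")  # many orders of magnitude below 1 for this geometry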


Journal ArticleDOI
TL;DR: Application of the new procedure to practical diffraction-related phenomena, like in-line holography, improves the processing efficiency without creating any associated artifacts on the reconstructed-object pattern.
Abstract: When optical signals, like diffraction patterns, are processed by digital means, the choice of sampling density and geometry is important during analog-to-digital conversion. Continuous band-limited signals can be sampled and recovered from their samples in accord with the Nyquist sampling criteria. The specific form of the convolution kernel that describes the Fresnel diffraction allows another, alternative, full-reconstruction procedure of an object from the samples of its diffraction pattern when the object is space limited. This alternative procedure is applicable and yields full reconstruction even when the diffraction pattern is undersampled and the Nyquist criteria are severely violated. Application of the new procedure to practical diffraction-related phenomena, like in-line holography, improves the processing efficiency without creating any associated artifacts on the reconstructed-object pattern.

114 citations
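For reference, the convolution kernel in question is the familiar 1-D Fresnel (quadratic-phase) kernel; the notation below is the standard textbook form rather than something taken from the paper, with wavelength lambda and propagation distance z:

u_z(x) \;=\; (u_0 * h_z)(x),
\qquad
h_z(x) \;=\; \frac{e^{\,i k z}}{\sqrt{i\lambda z}}\,\exp\!\left(\frac{i\pi x^{2}}{\lambda z}\right),
\qquad k = \frac{2\pi}{\lambda}.

It is this chirp-like structure of h_z, together with the object being space limited, that the paper exploits to obtain full reconstruction from samples taken below the Nyquist rate.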


Journal ArticleDOI
TL;DR: The implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented and it is shown that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively.
Abstract: In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same 'TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.

114 citations
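A bare-bones illustration of the FFT convolution step in Python (numpy assumed; arrays are hypothetical, and the clinical algorithm described above additionally applies polyenergetic and parallel-kernel corrections, models electron contamination, and separates primary and scatter dose):

import numpy as np

def fft_convolve_dose(terma, kernel):
    """Dose as TERMA convolved with a spatially invariant dose-deposition kernel, via zero-padded FFTs."""
    shape = [t + k - 1 for t, k in zip(terma.shape, kernel.shape)]   # linear, not circular, convolution
    spec = np.fft.rfftn(terma, shape) * np.fft.rfftn(kernel, shape)
    full = np.fft.irfftn(spec, shape)
    # Crop the full convolution back to the TERMA grid, centring the kernel origin.
    start = [k // 2 for k in kernel.shape]
    window = tuple(slice(s, s + t) for s, t in zip(start, terma.shape))
    return full[window]

Kernel invariance is exactly the approximation that makes the FFT route fast; the MGS model avoids it in the superposition calculation, which is why it is slower but more accurate in lung.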



Journal ArticleDOI
TL;DR: VARIANT is described, an extensible system for processing and visualizing terrains represented through Triangulated Irregular Networks (TINs), featuring the accuracy of the representation, possibly variable over the terrain domain, as a further parameter in computation.
Abstract: We describe VARIANT (VAriable Resolution Interactive ANalysis of Terrain), an extensible system for processing and visualizing terrains represented through Triangulated Irregular Networks (TINs), featuring the accuracy of the representation, possibly variable over the terrain domain, as a further parameter in computation. VARIANT is based on a multiresolution terrain model, which we developed in our earlier research. Its architecture consists of a kernel, which provides primitive operations for building and querying the multiresolution model, and of application programs, which access a terrain model through the primitives in the kernel. VARIANT directly supports basic queries (e.g., windowing, buffering, computation of elevation at a given point or along a given line) as well as high-level operations (e.g., fly-over visualization, contour map extraction, viewshed analysis). However, the true power of VARIANT lies in the possibility of extending it with new applications that can exploit its multiresolution features in a transparent way.

55 citations


Proceedings ArticleDOI
11 Sep 2000
TL;DR: The Specware formal development methodology that was used in the development of MASK is illustrated and the results of the MASK development process are described, project successes are discussed, and related MASK research is highlighted.
Abstract: Describes the formal specification and development of a separation kernel. The Mathematically Analyzed Separation Kernel (MASK) has been used by Motorola on a smartcard project and as part of a hardware cryptographic platform called the Advanced INFOSEC (INFOrmation SECurity) Machine (AIM). Both MASK and AIM were jointly developed by Motorola and the National Security Agency (NSA). This paper first describes the separation kernel concept and its importance to information security. Next, it illustrates the Specware formal development methodology that was used in the development of MASK. Experiences and lessons learned from this formal development process are discussed. Finally, the results of the MASK development process are described, project successes are discussed, and related MASK research is highlighted.

52 citations


Journal ArticleDOI
TL;DR: It can be proved, under some general assumptions of spatial linearity, that the disturbance induced in the measurement by the effect of the finite size of the detector is equal to the convolution of the real profile with a representative kernel of the detector.
Abstract: One of the most important aspects in the metrology of radiation fields is the problem of the measurement of dose profiles in regions where the dose gradient is large. In such zones, the 'detector size effect' may produce experimental measurements that do not correspond to reality. Mathematically, it can be proved, under some general assumptions of spatial linearity, that the disturbance induced in the measurement by the effect of the finite size of the detector is equal to the convolution of the real profile with a representative kernel of the detector. In this work the exact relation between the measured profile and the real profile is derived, through the analytical solution of the integral equation, for a general type of profile fitting function using Gaussian convolution kernels.

51 citations
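Written out (in my notation, not the paper's), the relation stated above is a 1-D convolution of the real profile p with a detector kernel k, taken to be Gaussian in the paper's analytical treatment:

m(x) \;=\; (p * k)(x) \;=\; \int_{-\infty}^{\infty} p(x')\,k(x - x')\,\mathrm{d}x',
\qquad
k(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^{2}/(2\sigma^{2})},

with sigma set by the finite detector size; recovering p from the measured profile m then reduces to solving this integral equation, which the paper does analytically for a general class of profile fitting functions.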


01 Jan 2000
TL;DR: This paper presents experimental comparisons of various image representations for object detection using kernel classifiers, and presents a feature selection method using SVMs, and shows experimental results.
Abstract: This paper presents experimental comparisons of various image representations for object detection using kernel classifiers. In particular, it discusses the use of support vector machines (SVM) for object detection using as image representations raw pixel values, projections onto principal components, and Haar wavelets. General linear transformations of the images through the choice of the kernel of the SVM are considered. Experiments showing the effects of histogram equalization, a non-linear transformation, are presented. Image representations derived from probabilistic models of the class of images considered, through the choice of the kernel of the SVM, are also evaluated. Finally, we present a feature selection method using SVMs, and show experimental results.
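A schematic version of such a comparison, assuming scikit-learn in Python (not the authors' setup; the Haar-wavelet and probabilistic-model representations are omitted, and the kernel choice and array names are placeholders):

import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: flattened image windows, y: 1 for object, 0 for background (hypothetical arrays).

detectors = {
    "raw pixels, poly kernel": SVC(kernel="poly", degree=2),
    "PCA projections, poly kernel": make_pipeline(PCA(n_components=30), SVC(kernel="poly", degree=2)),
}

# for name, clf in detectors.items():
#     score = cross_val_score(clf, X, y, cv=5).mean()
#     print(f"{name}: {score:.3f}")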

Journal ArticleDOI
TL;DR: In this article, two techniques integrating texture and spatial context properties for the classification of fine spatial resolution imagery from the city of Athens (Hellas) have been tested in terms of accuracy and class specificity.
Abstract: Two techniques integrating texture and spatial context properties for the classification of fine spatial resolution imagery from the city of Athens (Hellas) have been tested in terms of accuracy and class specificity. Both techniques were kernel based, using an artificial neural network and the kernel reclassification algorithm. The study demonstrated the high potential of the kernel classifiers to discriminate residential categories on 5 m spatial resolution imagery. The overall accuracy percentages achieved were 73.44% and 74.22% respectively, considering a seven-class classification scheme. The adopted scheme was a subset of the nomenclature referred to as 'Classification for Land Use Statistics: Eurostat's Remote Sensing programme' (CLUSTERS), used by the Statistical Office of the European Communities (EUROSTAT) to map urban and rural environments.

Proceedings ArticleDOI
H. Sakano1, N. Mukawa
30 Aug 2000
TL;DR: KMS is proposed, theoretical aspects of the proposed method are described, and the results of facial image recognition experiments are presented.
Abstract: A multiple observation-based scheme (MObS) is described for robust facial recognition, and a novel object recognition method called the kernel mutual subspace method (KMS) is proposed. The mutual subspace method (MSM) proposed by Maeda et al. (1999) is a powerful method for recognizing facial images. However, its recognition accuracy is degraded when the data distribution has a nonlinear structure. To overcome this shortcoming we apply kernel principal component analysis (KPCA) to MSM. This paper describes theoretical aspects of the proposed method and presents the results of facial image recognition experiments.

Journal ArticleDOI
TL;DR: The generalization of discrete cyclic convolution to convolution over wreath product cyclic groups is described, and it is shown how it reduces to multiplication in the spectral domain.

Abstract: For Pt. I see ibid., vol. 48, no. 1, p. 102-32 (2000). This paper continues the investigation of the use of spectral analysis on certain noncommutative finite groups (wreath product groups) in digital signal processing. We describe the generalization of discrete cyclic convolution to convolution over these groups and show how it reduces to multiplication in the spectral domain. Finite group-based convolution is defined in both the spatial and spectral domains and its properties established. We pay particular attention to wreath product cyclic groups and further describe convolution properties from a geometric viewpoint in terms of operations with specific signals and filters. Group-based correlation is defined in a natural way, and its properties follow from those of convolution; applications of correlation to the detection of similarity of perceptually similar signals and of group-transformed signals are discussed. Several examples using images are included to demonstrate the ideas pictorially.
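The classical fact being generalized here, that cyclic convolution becomes pointwise multiplication in the spectral (DFT) domain, can be checked in a few lines of Python with numpy; the wreath-product case replaces the DFT with the group's spectral transform.

import numpy as np

rng = np.random.default_rng(1)
f, h = rng.standard_normal(64), rng.standard_normal(64)

# Direct circular (cyclic) convolution: (f * h)[n] = sum_m f[m] h[(n - m) mod N]
direct = np.array([sum(f[m] * h[(n - m) % 64] for m in range(64)) for n in range(64)])

# Spectral-domain route: transform, multiply pointwise, invert.
spectral = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real

assert np.allclose(direct, spectral)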

Patent
Ching-Yu Hung1
20 Dec 2000
TL;DR: A two-step digital image resizing method is described: an entire image is filtered and then selected rows and columns are deleted; the filtering may use a kernel generated as three samples from a continuous kernel.
Abstract: A two-step digital image resizing method: an entire image is filtered and then selected rows and columns are deleted. The filtering may use a kernel generated as three samples from a continuous kernel.
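One hedged reading of this claim, as a Python sketch (the 3-tap kernel values and the deletion pattern are hypothetical, for illustration only; the patent generates its three samples from a continuous kernel):

import numpy as np
from scipy.ndimage import convolve1d

def two_step_resize(image, drop_every=3):
    """Step 1: low-pass filter the whole image. Step 2: delete selected rows and columns."""
    taps = np.array([0.25, 0.5, 0.25])                          # hypothetical 3-sample kernel
    smoothed = convolve1d(convolve1d(image, taps, axis=0), taps, axis=1)
    thinned = np.delete(smoothed, np.s_[::drop_every], axis=0)  # drop every drop_every-th row...
    return np.delete(thinned, np.s_[::drop_every], axis=1)      # ...and column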

Proceedings Article
01 Jan 2000
TL;DR: An algorithm based on density estimation techniques that applies an energy preserving adaptive kernel filter to individual samples during image rendering, ensuring a reasonable noise versus bias trade-off at any time.
Abstract: Image filtering is often applied as a post-process to Monte Carlo generated pictures, in order to reduce noise. In this paper we present an algorithm based on density estimation techniques that applies an energy-preserving adaptive kernel filter to individual samples during image rendering. The kernel widths used diminish as the number of samples goes up, ensuring a reasonable noise versus bias trade-off at any time. This results in a progressive algorithm that still converges asymptotically to a correct solution. Results show that general noise as well as spike noise can effectively be reduced. Many interesting extensions are possible, making this a very promising technique for Monte Carlo image synthesis.

Book ChapterDOI
13 Sep 2000
TL;DR: Experimental results prove that improvements in both power and performance can be acquired, when the right combination of data memory architecture model and data-reuse transformation is selected.
Abstract: Exploitation of data reuse, in combination with a custom memory hierarchy that exploits the temporal locality of data accesses, may introduce significant power savings, especially for data-intensive applications. The effect of the data-reuse decisions not only on the power dissipation but also on the area and performance of multimedia applications realized on multiple embedded cores is explored. The interaction between the data-reuse decisions and the selection of a certain data-memory architecture model is also studied. As a demonstrator, a widely used video processing algorithmic kernel, namely the full-search motion estimation kernel, is used. Experimental results prove that improvements in both power and performance can be acquired when the right combination of data memory architecture model and data-reuse transformation is selected.

Proceedings Article
01 Sep 2000
TL;DR: An evaluation of convolution-based interpolation methods and rigid transformations for applying geometrical transformations to medical images shows that for all modalities, spline interpolation constitutes the best trade-off between accuracy and computational cost, and therefore is to be preferred over all other methods.
Abstract: Interpolation is required in a variety of medical image processing applications. Although many interpolation techniques are known from the literature, evaluations of these techniques for the specific task of applying geometrical transformations to medical images are still lacking. In this paper we present such an evaluation. We consider convolution-based interpolation methods and rigid transformations. The evaluation involves a large number of sinc-approximating kernels, including piecewise polynomial and windowed sinc kernels, and images from a wide variety of medical image modalities. The results show that for all modalities, spline interpolation constitutes the best trade-off between accuracy and computational cost, and therefore is to be preferred over all other methods.
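Applying the paper's recommendation in practice might look like the following Python sketch (scipy assumed; not code from the paper), where a rigid rotation-plus-translation is resampled with cubic B-spline interpolation:

import numpy as np
from scipy.ndimage import affine_transform

def rigid_transform(image, angle_deg, shift=(0.0, 0.0), order=3):
    """Rigid (rotation + translation) resampling with spline interpolation of the given order.

    order=3 (cubic B-spline) reflects the accuracy/cost trade-off favoured in the evaluation;
    order=1 and order=0 would give linear and nearest-neighbour interpolation instead.
    """
    theta = np.deg2rad(angle_deg)
    # Forward map: rotate by angle_deg about the image centre, then translate by shift.
    # affine_transform needs the inverse map (output coords -> input coords).
    forward = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
    inverse = forward.T
    centre = (np.array(image.shape) - 1) / 2.0
    offset = centre - inverse @ (centre + np.asarray(shift))
    return affine_transform(image, inverse, offset=offset, order=order)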

Patent
29 Aug 2000
TL;DR: In this paper, an image analyzer manipulates the filter kernel as a function of the image parameters so that the system produces a filtered image, adaptable in real time, as a result of the unfiltered image, external rules, predetermined constraints, or combinations thereof.
Abstract: A system for adaptively filtering an image so as to reduce a noise component associated with the image includes an image analyzer for determining image parameters related to the image. The system also includes a spatial filter, having an adjustable kernel responsive to the image parameters, for filtering the image sequence. The image analyzer manipulates the filter kernel as a function of the image parameters so that the system produces a filtered image, adaptable in real time, as a function of the unfiltered image, external rules, predetermined constraints, or combinations thereof. The spatial filter includes a time-invariant section and an adaptable section. The time-invariant section preferably applies a plurality of filters to the image, each of the filters having a distinct frequency response, so as to produce a plurality of distinct filtered outputs. The adaptable section scales each of the plurality of distinct filtered outputs with a corresponding distinct weighting value to produce a plurality of scaled filtered outputs, and combines the plurality of scaled filtered outputs to produce a composite filtered output.
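A schematic reading of this claim in Python (numpy/scipy assumed; the specific kernels and weights are hypothetical): a fixed bank of filters with distinct frequency responses produces several outputs, and an adaptable stage scales and sums them.

import numpy as np
from scipy.ndimage import convolve

def adaptive_spatial_filter(image, weights):
    """Time-invariant filter bank followed by an adaptable weighted combination (schematic)."""
    lowpass = np.full((3, 3), 1.0 / 9.0)                  # smooths; one distinct frequency response
    identity = np.zeros((3, 3)); identity[1, 1] = 1.0     # passes the image through unchanged
    highpass = identity - lowpass                         # keeps detail and noise
    bank = [lowpass, identity, highpass]

    outputs = [convolve(image, k, mode="nearest") for k in bank]   # time-invariant section
    scaled = [w * out for w, out in zip(weights, outputs)]         # adaptable section
    return sum(scaled)                                             # composite filtered output

# The image analyzer would choose `weights` from the estimated image parameters; for example,
# a noisy, low-detail region might be filtered with weights = (0.8, 0.1, 0.1).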

Book
27 Sep 2000
TL;DR: Systems Development: from Geodatabase Kernel Systems to Component-Based 3D/4D Geoinformation Systems and Data and Methods Integration and Systems Integration.
Abstract: Fundamental Principles- Examples of Today's Geoinformation Systems- Data Modelling and Management for 3D/4D Geoinformation Systems- Systems Development: from Geodatabase Kernel Systems to Component-Based 3D/4D Geoinformation Systems- Data and Methods Integration- Systems Integration- Outlook

Journal ArticleDOI
TL;DR: This paper proposes a new switching algorithm with the advantage that the search is over a smaller set than in other algorithms; the degree of relaxation serves as an input parameter, so that computation time can be bounded for large windows while the algorithm can run to full optimality for small windows.

Journal ArticleDOI
01 Dec 2000
TL;DR: The LiTransit kernel, discussed in this article, is a geometrical kernel for semi-empirical kernel-driven BRDF models that improves the physical parameterization of geometric-optical effects while keeping the ability to fit the data well and to calculate accurate albedos.
Abstract: Kernel‐driven bidirectional reflectance distribution function (BRDF) models are widely used in the description of BRDF properties of land types. To further improve the ability of kernel‐driven semiempirical models to capture the BRDF of the surface, a geometrical kernel, the LiTransit kernel, has recently been derived. The LiTransit kernel strives to improve the physical parameterization of geometric‐optical effects while keeping the ability to fit the data well and to calculate accurate albedos. It is part of our continuing work on the enhancement of semiempirical kernel‐driven BRDF models. We tested the new kernel's performance using ground, airborne and spaceborne bidirectional measurements obtained from various sources. The results show that the LiTransit kernel retains the ability to fit BRDF shapes in sparsely vegetated regions as well as the LiSparse‐Reciprocal kernel while providing continuity in the transition from sparse to dense vegetation covers. We have also tested the new kernel combination ...
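For context, kernel-driven semiempirical BRDF models have the generic linear form below (standard notation, not reproduced from this article); the LiTransit kernel is a candidate for the geometric kernel K_geo, in place of LiSparse-Reciprocal:

R(\theta_s,\theta_v,\phi) \;=\; f_{\mathrm{iso}}
  \;+\; f_{\mathrm{vol}}\,K_{\mathrm{vol}}(\theta_s,\theta_v,\phi)
  \;+\; f_{\mathrm{geo}}\,K_{\mathrm{geo}}(\theta_s,\theta_v,\phi),

where theta_s, theta_v and phi are the solar zenith, view zenith and relative azimuth angles, and the weights f_iso, f_vol and f_geo are fitted to the bidirectional measurements.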

Journal ArticleDOI
TL;DR: The reproducing kernel particle method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems and provides a framework for performing hierarchical computations making it an ideal candidate for simulating multi-scale problems.
Abstract: The reproducing kernel particle method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a ‘mesh-full’ or a ‘mesh-free’ manner and provides the ability to tune the method, via the selection of a window function and its associated dilation parameter, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations making it an ideal candidate for simulating multi-scale problems. Although the method has many appealing attributes, it is quite new and its numerical performance is still being quantified with respect to more traditional discretization techniques. In order to assess the numerical performance of RKPM, detailed studies of the method on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, row-sum lumping the mass matrix is demonstrated to introduce severe lagging phase errors. A ‘higher-order’ mass matrix improves the dispersion characteristics relative to the lumped mass matrix but also yields significant lagging phase errors relative to the fully integrated, consistent mass matrix. Published in 2000 by John Wiley & Sons, Ltd.


Journal ArticleDOI
TL;DR: High performance in two-dimensional convolution and other algorithms on the MAP1000 clearly demonstrates the feasibility of software-based solutions in demanding imaging and video applications.

Abstract: Programmable media processors have been emerging to meet the continuously increasing computational demand in complex digital media applications, such as HDTV and MPEG-4, at an affordable cost. These media processors provide the flexibility to implement various image computing algorithms along with high performance, unlike the hardwired approach that has provided high performance for a particular algorithm, but lacks flexibility. However, to achieve high performance on these media processors, a careful and sometimes innovative design of algorithms is essential. In addition, programming techniques, e.g., software pipelining and loop unrolling, are needed to speed up the computations while the data flow can be optimized using a programmable DMA controller. In this paper, we describe an algorithm for two-dimensional convolution, which can be implemented efficiently on many media processors. Implemented on a new media processor called the MAP1000, it takes 7.9 ms to convolve a 512×512 image with a 7×7 kernel, which is much faster than the previously reported software-based convolution and is comparable with the hardwired implementations. High performance in two-dimensional convolution and other algorithms on the MAP1000 clearly demonstrates the feasibility of software-based solutions in demanding imaging and video applications. © 2000 SPIE and IS&T. (S1017-9909(00)00203-8)
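For reference, the arithmetic being accelerated is ordinary 2-D convolution; a naive Python version is below (numpy assumed; the MAP1000 implementation reorganizes exactly these loops with software pipelining, loop unrolling and DMA-managed data flow rather than changing the arithmetic):

import numpy as np

def convolve2d_reference(image, kernel):
    """Plain 2-D convolution: the computation a media-processor implementation schedules efficiently.

    For a 512 x 512 image and a 7 x 7 kernel this is 512*512*49 multiply-accumulates.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    flipped = kernel[::-1, ::-1]                    # true convolution flips the kernel
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out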

Proceedings ArticleDOI
01 May 2000
TL;DR: A new kernel design is described which uses floating-point filter techniques and a lazy evaluation scheme with the exact number types provided by LEDA, allowing for efficient and exact computation with rational and algebraic geometric objects.
Abstract: In this paper we describe and discuss a new kernel design for geometric computation in the plane. It combines different kinds of floating-point filter techniques and a lazy evaluation scheme with the exact number types provided by LEDA, allowing for efficient and exact computation with rational and algebraic geometric objects. It is the first kernel design which uses floating-point filter techniques on the level of geometric constructions. The experiments we present, partly using the CGAL framework, show a great improvement in speed and, maybe even more important for practical applications, in memory consumption when dealing with more complex geometric computations.
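The core idea of a floating-point filter can be sketched in a few lines of Python (this is an illustration of the technique, not the LEDA/CGAL kernel, and the error bound below is a crude stand-in for the certified bounds such kernels use):

from fractions import Fraction

def orientation(ax, ay, bx, by, cx, cy):
    """Sign of the orientation determinant, filtered: fast floats first, exact rationals on demand."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Crude bound on the floating-point rounding error (a real filter uses a certified constant).
    bound = 1e-12 * (abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax)))
    if abs(det) > bound:
        return (det > 0) - (det < 0)               # the floating-point sign is trusted
    # Fall back to exact arithmetic only in the rare near-degenerate cases.
    ax, ay, bx, by, cx, cy = (Fraction(v) for v in (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

Roughly speaking, the lazy evaluation scheme extends this idea to constructed geometric objects, falling back to the exact LEDA number types only when a filtered computation cannot certify its result.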

Proceedings ArticleDOI
13 Jul 2000
TL;DR: A new logic circuit design methodology for kernel-based pattern recognition hardware using a genetic algorithm that demonstrates higher recognition accuracy and much faster processing speed than the conventional approaches.
Abstract: We propose a new logic circuit design methodology for kernel-based pattern recognition hardware using a genetic algorithm. In the proposed design methodology, pattern data are transformed into truth tables and the truth tables are evolved to represent kernels in the discrimination functions for pattern recognition. The evolved truth tables are then synthesized to logic circuits. Because of this direct data implementation approach, no floating-point numerical circuits are required and the intrinsic parallelism in the pattern data set is embedded into the circuits. Consequently, high-speed recognition systems can be realized with acceptably small circuit size. We have applied this methodology to image recognition and sonar spectrum recognition tasks, and implemented them onto the newly developed FPGA-based reconfigurable pattern recognition board. The developed system demonstrates higher recognition accuracy and much faster processing speed than the conventional approaches.

Proceedings ArticleDOI
03 Sep 2000
TL;DR: A new method to detect rotational symmetries, which describes complex curvature such as corners, circles, star, and spiral patterns, which can serve as feature points at a high abstraction level for use in hierarchical matching structures for 3D estimation, object recognition, image database retrieval, etc.
Abstract: Perceptual experiments indicate that corners and curvature are very important features in the process of recognition. This paper presents a new method to detect rotational symmetries, which describes complex curvature such as corners, circles, star, and spiral patterns. It works in two steps: 1) it extracts local orientation from a gray-scale or color image; and 2) it applies normalized convolution on the orientation image with rotational symmetry filters as basis functions. These symmetries can serve as feature points at a high abstraction level for use in hierarchical matching structures for 3D estimation, object recognition, image database retrieval, etc.

Journal ArticleDOI
TL;DR: In this article, a wavelet base based method for capacitance extraction is presented, which takes full advantage of multiresolution analysis and gives accurate total charge on a conductor without obtaining an accurate solution for the charge density per se.
Abstract: A new approach is presented for efficient capacitance extraction. This technique utilizes wavelet bases and is kernel independent. The main benefits of the proposed technique are as follows: (1) it takes full advantage of the multiresolution analysis and gives accurate total charge on a conductor without obtaining an accurate solution for the charge density per se; (2) the method employs an extremely aggressive thresholding algorithm and compresses the stiffness matrix to an almost diagonal sparse matrix; and (3) construction of the stiffness matrix is performed iteratively, which facilitates easy and simple control of convergence and provides a means of trading accuracy for speed. The proposed method has a computational cost of O(N), versus O(N^3) for conventional methods. The proposed algorithm has a major impact on the speed and accuracy of physical interconnect parameter extraction, with speedups reaching 10^3 for even moderately sized problems.

Journal ArticleDOI
TL;DR: The intention is to use the recently developed elliptic systems method, which has been successfully applied by these authors to the problem of imaging biological tissues using lasers, to apply inverse problem techniques to image land mines using an electromagnetic signal originating from ground-penetrating radar.