
Showing papers on "Wavelet published in 1981"


Journal ArticleDOI
S. Levy, Peter K. Fullagar
TL;DR: In this article, an algorithm is proposed for the reconstruction of a sparse spike train from an incomplete set of its Fourier components, which employs linear programming to minimize the L1-norm of the output, because minimization of this norm favors solutions with isolated spikes.
Abstract: An algorithm is proposed for the reconstruction of a sparse spike train from an incomplete set of its Fourier components. It is shown that as little as 20–25 percent of the Fourier spectrum is sufficient in practice for a high‐quality reconstruction. The method employs linear programming to minimize the L1‐norm of the output, because minimization of this norm favors solutions with isolated spikes. Given a wavelet, this technique can be used to perform deconvolution of noisy seismograms when the desired output is a sparse spike series. Relative reliability of the data is assessed in the frequency domain, and only the reliable spectral data are included in the calculation of the spike series. Equations for the unknown spike amplitudes are solved to an accuracy compatible with the uncertainties in the reliable data. In examples with 10 percent random noise, the output is superior to that obtained using conventional least‐squares techniques.
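The linear-programming formulation can be sketched in a few lines (a minimal illustration with assumed sizes and variable names, not the authors' code): splitting the spike train into nonnegative parts u and v makes the L1-norm a linear objective, constrained by the retained Fourier components.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 64
true = np.zeros(n)
true[[7, 20, 41]] = [1.0, -0.7, 0.5]            # sparse spike train

# Keep only ~25 percent of the Fourier components (random subset).
freqs = rng.choice(n, size=16, replace=False)
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)
b = F @ true                                     # known spectral data

# Split x = u - v with u, v >= 0 so that sum(u + v) = ||x||_1 is linear.
A = np.vstack([np.hstack([F.real, -F.real]),
               np.hstack([F.imag, -F.imag])])
beq = np.concatenate([b.real, b.imag])
c = np.ones(2 * n)                               # objective: ||x||_1
res = linprog(c, A_eq=A, b_eq=beq, bounds=(0, None), method="highs")
x = res.x[:n] - res.x[n:]                        # reconstructed spike train
```

The recovered `x` reproduces the retained spectral data while having the smallest possible L1-norm, which is what drives the solution toward isolated spikes.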

257 citations


Journal ArticleDOI
TL;DR: In this article, the authors used the method of Backus and Gilbert to generate localized averages of the model which are unique except for a statistical uncertainty caused by errors in the data.
Abstract: Linear inverse theory is used to deconvolve a data set when the blurring function or source wavelet is (approximately) known. Rather than attempting to find one of infinitely many models which fit the data, this paper uses the methods of Backus and Gilbert to generate localized averages of the model which are unique except for a statistical uncertainty caused by errors in the data. The averages, their statistical error, and the associated averaging window completely codify our knowledge about the model. Averages with lower standard deviations can be had by sacrificing resolution, and the investigator is free to choose those results which are most meaningful. Moreover, this method is optimum in the sense that no other averaging window can be constructed which has greater resolving power and yet produces averages with the same statistical accuracy. Our deconvolution method in the time domain is shown to be very similar to finding the inverse filter of the source wavelet, and indeed the averaging window is simply the convolution of these two functions. However, by investigating the trade-off between resolution and accuracy we have shown that the data errors can be much more important than the parameters of the Wiener optimum inverse filter, such as the length of that filter and the desired location of the output spike. In the frequency domain the equations for trading off accuracy and resolution have been developed, and the computations are particularly simple because no matrix inversion is required. Sufficient examples are presented to show the importance of incorporating the observational errors into the deconvolution procedure.
Additionally, we have shown how to reduce the sidelobes of the averaging windows by shaping them into Gaussian functions of a predetermined width, have looked at the effects of using a zero-area source function characteristic of seismic problems, and have attempted a deconvolution when the wavelet was only approximately known. The frequency domain deconvolution filter derived here is also compared quantitatively with those derived intuitively by Helmberger and Wiggins, and by Deregowski. Lastly, we show how information in the averages and averaging windows can be used to construct a parametric model, composed of a series of delta functions, which fits the data. Such a model is of importance in seismological and spectroscopic studies.
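The frequency-domain trade-off can be illustrated with a damped spectral division (a sketch of the general idea only; `theta` is a hypothetical trade-off knob and the function names are assumptions, not the paper's notation):

```python
import numpy as np

def tradeoff_deconv(trace, wavelet, sigma2, theta):
    """Damped spectral division: one illustrative way to trade resolution
    against statistical accuracy. sigma2 is the data-error variance and
    theta weights error against resolution (both assumed parameters)."""
    n = len(trace)
    S = np.fft.fft(wavelet, n)
    # Filter spectrum: conj(S)/(|S|^2 + theta*sigma2). theta -> 0 gives
    # pure spectral division (best resolution, worst error amplification).
    Wf = np.conj(S) / (np.abs(S) ** 2 + theta * sigma2)
    model_avg = np.real(np.fft.ifft(np.fft.fft(trace) * Wf))
    # Averaging window = wavelet convolved with the deconvolution filter,
    # as in the paper's time-domain interpretation.
    window = np.real(np.fft.ifft(S * Wf))
    return model_avg, window
```

With `theta = 0` the averaging window collapses to a delta (no matrix inversion needed at any point); increasing `theta` broadens the window but suppresses noise amplification.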

91 citations


Journal ArticleDOI
TL;DR: In this article, three methods have been presented for constructing a smooth wavelet from either a (possibly poor) estimate of the reflectivity sequence or an approximate inverse filter for the source wavelet.
Abstract: Three methods have been presented for constructing a smooth wavelet from either a (possibly poor) estimate of the reflectivity sequence or an approximate inverse filter for the source wavelet. An approximate reflectivity sequence might be derived from a velocity log at, or near, the site where the normal incidence seismogram was recorded, or it might be equated to the averages obtained from minimum entropy deconvolution (MED). The approximate inverse filter for the source wavelet is provided by MED. All methods performed well when tested on data generated from wavelets of different character, and this provides optimism that these methods will work satisfactorily in a variety of geophysical problems where the data are the convolution of a smooth wavelet and a “spikey” model. The deconvolution problem discussed here is nonunique, and satisfactory wavelet constructions require that some subjectivity be introduced by the investigator. Even so, we present one example where the computed wavelet and reflectivity...
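One generic way to construct a wavelet from a trace and an approximate reflectivity is frequency-domain least squares (a sketch under assumed names, with a stabilization constant `eps`; the paper's three specific constructions differ in detail):

```python
import numpy as np

def estimate_wavelet(trace, reflectivity, nw=16, eps=1e-6):
    """Frequency-domain least-squares wavelet estimate (a generic sketch,
    not the paper's methods): W = D conj(R) / (|R|^2 + eps), truncated to
    nw samples to enforce a short, smooth wavelet."""
    n = len(trace)
    D = np.fft.fft(trace, n)
    R = np.fft.fft(reflectivity, n)
    W = D * np.conj(R) / (np.abs(R) ** 2 + eps)
    return np.real(np.fft.ifft(W))[:nw]

# Demo: a "spikey" reflectivity convolved (circularly) with a short wavelet.
n = 128
r = np.zeros(n)
r[[0, 17, 53]] = [1.0, 0.4, -0.3]
w_true = np.array([1.0, -0.5, 0.2])
trace = np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(w_true, n)))
w_est = estimate_wavelet(trace, r)
```

As the abstract notes, the problem is nonunique: the truncation length `nw` and the damping `eps` are exactly the kind of subjective choices the investigator must introduce.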

64 citations


Journal ArticleDOI
TL;DR: In this article, the results obtained using state-variable models and techniques on problems for which solutions either cannot be or are not easily obtained via more conventional input-output techniques are presented.
Abstract: This paper demonstrates some results obtained using state‐variable models and techniques on problems for which solutions either cannot be or are not easily obtained via more conventional input‐output techniques. After a brief introduction to state‐variable notions, the following seven problem areas are discussed: modeling seismic source wavelets, simultaneous deconvolution and correction for spherical divergence, simultaneous wavelet estimation and deconvolution, well log processing, design of recursive Wiener filters, Bremmer series decomposition of a seismogram (including suppression of multiples and vertical seismic profiling), and estimating reflection coefficients and traveltimes.

35 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The phase information can be preserved with a new technique that directly searches for an independently distributed innovation (as opposed to the usual search for an uncorrelated, or white, innovation); details of the method, including the case of unevenly sampled data, are presented.
Abstract: The practical problems in astronomical time series analysis are sometimes different from those in geophysics and related areas. One such is the determination of the wavelet shape, including phase characteristics, in a moving average process. This problem is of great importance because there are usually competing physical theories which make differing predictions for the wavelet. Conventional techniques discard the phase information, for example by introducing the autocorrelation function. They are therefore unable to determine the correct wavelet shape; an assumption such as minimum delay must be imposed in order for a unique solution to be determined. The phase information can be preserved with a new technique that directly searches for an independently distributed innovation (as opposed to the usual search for an uncorrelated, or white innovation). Details of the method, including the case of unevenly sampled data, are presented, along with results for a synthetic test case.
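The idea of searching for an independently distributed innovation can be caricatured by maximizing the spikiness (kurtosis) of the deconvolved output over an FIR inverse filter; kurtosis, unlike the autocorrelation, is sensitive to phase. This is in the spirit of minimum-entropy methods rather than the chapter's exact algorithm, and all names below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def kurtosis(x):
    """Normalized fourth moment: large for spiky (non-Gaussian) series."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def phase_sensitive_deconv(data, nf=8):
    """Search an inverse filter that makes the output as non-Gaussian as
    possible (a stand-in for 'independently distributed innovation')."""
    def neg_kurt(f):
        y = np.convolve(data, f, mode="valid")
        return -kurtosis(y)
    f0 = np.zeros(nf)
    f0[0] = 1.0                                   # start from identity
    res = minimize(neg_kurt, f0, method="Nelder-Mead",
                   options={"maxiter": 4000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x

# Demo: sparse innovation convolved with a mixed-phase wavelet.
rng = np.random.default_rng(3)
innov = rng.standard_normal(400) * (rng.random(400) < 0.1)
data = np.convolve(innov, [0.5, 1.0, -0.3])
f = phase_sensitive_deconv(data)
y = np.convolve(data, f, mode="valid")
```

Because kurtosis is unchanged by whitening alone, the filter found this way can undo the wavelet's phase as well as its amplitude spectrum, which an autocorrelation-based method cannot.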

18 citations


Journal ArticleDOI
D. Bilgeri, A. Carlini
TL;DR: In this paper, a method to estimate seismic reflection coefficients is derived by searching for their amplitudes and time positions without any other limiting assumption; several advantages are then obtainable from these reflection coefficients, such as conversion to interval velocities with an optimum calibration either to the well logs or to the velocity analysis curves.
Abstract: For years, reflection coefficients have been the main aim of traditional deconvolution methods for their significant informational content. A method to estimate seismic reflection coefficients has been derived by searching for their amplitudes and their time positions without any other limiting assumption. The input data have to satisfy certain quality constraints, such as amplitude fidelity and almost zero-phase noise: ghosts, reverberations, long-period multiples, and diffracted waves should be rejected by traditional processing. The proposed algorithm minimizes a functional of the difference between the spectra of trace and reflectivity in the frequency domain. The estimation of reflection coefficients together with the consistent "wavelet" is reached iteratively with a multidimensional Newton-Raphson technique. The residual error trace shows the behavior of the process. Several advantages are then obtainable from these reflection coefficients, such as conversion to interval velocities with an optimum calibration either to the well logs or to the velocity analysis curves. The procedure can be applied for detailed stratigraphic interpretations or to improve the resolution of a conventional velocity analysis.
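The iterative Newton-type fitting of amplitudes and times can be illustrated on a toy single-reflector problem (using scipy's Gauss-Newton-style `least_squares` in place of the authors' multidimensional Newton-Raphson; the Ricker wavelet and all parameters below are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

t = np.arange(0.0, 1.0, 0.004)                  # 4 ms sampling (assumed)

def ricker(t, t0, fm=10.0):
    """Zero-phase Ricker wavelet centred at t0 (an assumed wavelet shape)."""
    u = np.pi * fm * (t - t0)
    return (1.0 - 2.0 * u ** 2) * np.exp(-u ** 2)

# Synthetic trace: one reflector, amplitude 0.4 at 0.35 s, plus noise.
rng = np.random.default_rng(1)
trace = 0.4 * ricker(t, 0.35) + 0.01 * rng.standard_normal(t.size)

def residual(p):                                 # p = [amplitude, time]
    return p[0] * ricker(t, p[1]) - trace

fit = least_squares(residual, x0=[1.0, 0.34])    # Newton-type iteration
```

The residual vector returned by `residual(fit.x)` plays the role of the paper's residual error trace; in the real method many reflectors and the wavelet itself are solved for simultaneously.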

12 citations


Journal ArticleDOI
TL;DR: A forward solution for the reflection response of a parallel stratified lossless medium characterized by discrete reflection coefficients and unequal layer delays, for a normally incident pressure source signal, is presented in this article.
Abstract: A forward solution for the reflection response of a parallel stratified lossless medium characterized by discrete reflection coefficients and unequal layer delays, for a normally incident pressure source signal, is presented. The notation, which details the reflection history of each wavelet in a response record, facilitates systematic enumeration of all terms in the reflection impulse response model, the determination of compact closed form expressions for amplitudes and delays of multiply reflected wavelets, and the aggregation of dynamic analog groups. An equal delay time constraint on layer thicknesses leads then to the reflection sequence or synthetic seismogram structure as an infinite sum of wavelets by their order of reflection.
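A minimal time-stepping version of such a forward model can be sketched as follows, with the equal-delay constraint imposed from the start (the scattering conventions here are assumptions: downgoing reflection c, transmission 1 + c; upgoing reflection -c, transmission 1 - c; transparent surface):

```python
def reflection_response(c, nsteps):
    """Impulse reflection response of a lossless stack of layers with
    equal one-way delays (one sample each) and interface reflection
    coefficients c[i]; waves transmitted below the last interface are
    lost to the half-space."""
    n = len(c)
    D = [0.0] * n          # downgoing wave at the top of layer i
    U = [0.0] * n          # upgoing wave at the bottom of layer i
    D[0] = 1.0             # unit pressure impulse injected at the surface
    out = [0.0] * nsteps
    for t in range(1, nsteps):
        out[t] = U[0]      # upgoing wave reaching the surface at time t
        newD = [0.0] * n
        newU = [0.0] * n
        for i in range(n):
            down = D[i]                          # arrives at interface i
            up = U[i + 1] if i + 1 < n else 0.0  # arrives from below
            newU[i] = c[i] * down + (1.0 - c[i]) * up
            if i + 1 < n:
                newD[i + 1] = (1.0 + c[i]) * down - c[i] * up
        D, U = newD, newU
    return out
```

For `c = [0.5, 0.4]` this produces the primaries at two-way times 2 and 4 with amplitudes 0.5 and (1 - 0.5^2) * 0.4 = 0.3, and the first interbed multiple at time 6 with amplitude -(1 - 0.5^2) * 0.5 * 0.4^2 = -0.06, the kind of closed-form amplitude-by-reflection-order terms the paper's notation is designed to enumerate.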

9 citations


Journal ArticleDOI
M. Aftab Alam, Charles Sicking
TL;DR: In this article, deconvolution using the orthonormal lattice filter is compared with deconvolution using the maximum entropy and Levinson algorithms; the results show that the direct methods have fewer windowing problems, higher resolving power, and are more suited for use in a time-varying manner than the indirect method.
Abstract: The Gram‐Schmidt orthogonalization procedure is simplified under the assumption of stationarity and implemented to perform recursive predictive deconvolution. This process is called the orthonormal lattice filter. The results of deconvolution by this method are compared to the results of deconvolution using the maximum entropy and Levinson algorithms. The orthonormal lattice and maximum entropy algorithms are direct methods and estimate the filter from the data, while the Levinson algorithm is an indirect method and estimates the filter from the autocorrelation function. Results from synthetic and real data show that the direct methods have fewer windowing problems, higher resolving power, and are more suited for use in a time‐varying manner than the indirect method. Results from real data show that optimally weighted space averaging and zero‐phase band‐pass filtering after time‐varying direct deconvolution produce highly resolved and spatially coherent seismic sections.
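The direct/indirect distinction can be made concrete: the indirect route runs the Levinson recursion on an estimated autocorrelation, while a direct route fits the prediction filter to the data itself by least squares (a sketch only; the paper's lattice and maximum-entropy implementations differ):

```python
import numpy as np

def levinson(r, order):
    """Levinson recursion: prediction-error filter from the
    autocorrelation r (the 'indirect' route)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + a[1:m] @ r[m - 1:0:-1]
        k = -acc / err                      # reflection coefficient
        a_new = a.copy()
        for j in range(1, m):
            a_new[j] = a[j] + k * a[m - j]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)
    return a

def direct_pef(x, order):
    """'Direct' route: fit the prediction filter to the data by least
    squares, with no intermediate autocorrelation estimate."""
    X = np.column_stack([x[order - j:len(x) - j] for j in range(1, order + 1)])
    p, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return np.concatenate([[1.0], -p])

# Demo: AR(2) series whose true prediction-error filter is [1, -0.5, 0.3].
rng = np.random.default_rng(2)
w = rng.standard_normal(20000)
x = np.zeros(20000)
for t in range(2, 20000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + w[t]
r = np.array([x @ x, x[:-1] @ x[1:], x[:-2] @ x[2:]]) / len(x)
a_lev = levinson(r, 2)
a_dir = direct_pef(x, 2)
```

On long stationary data the two routes nearly coincide; the paper's point is that on short, nonstationary windows the direct estimates degrade more gracefully.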

9 citations


Journal ArticleDOI
TL;DR: In this paper, the amplitude spectrum of signal and noise at a given time-depth is shown to provide a quantitative estimate of the best achievable resolution of the actual data, and the absorption curve in the amplitude spectrum is estimated using the recorded source spectrum.
Abstract: Besides geometrical and velocity effects, two global parameters may disturb the comparison between acoustic logs recorded in a borehole and those derived from seismic data: (a) the amplitude spectrum of the basic seismic wavelet, which controls the power of resolution; and (b) its phase characteristics, which govern the resemblance. The latter is of major importance in stratigraphic interpretation of seismic impedance logs (recognition of geologic transitions). The most important causes of distortion can sometimes be determined and compensated. The seismic source signature needs to be recorded, analyzed, and processed to determine the seismic wavelet. Phase distortions due to recording equipment (geophones and laboratory) are very important and need to be corrected. Travel into the earth introduces multiples and absorption. Multiples can be dealt with in the classic way with a few precautions. Then an estimation of the amplitude spectrum of signal and noise at a given time-depth provides a quantitative estimate of the best achievable resolution of the actual data, and the absorption curve in the amplitude spectrum is estimated using the recorded source spectrum. A good match between experiment and theory leads to the estimated impulse response of absorption at a given time and, conversely, to an optimized (possibly time-varying) correction operator. Further conventional deconvolution processes should not be used. Nevertheless, when distortions cannot be corrected, and if well data are available, a deconvolution program controlling both amplitude and phase can be used interactively to achieve an equivalent result.
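The absorption part can be sketched with a standard constant-Q amplitude model (an assumption for illustration; the paper estimates its operator from data, and the minimum-phase companion of the amplitude spectrum is omitted here):

```python
import numpy as np

def absorption_operator(n, dt, t_travel, Q):
    """Amplitude spectrum of a constant-Q absorption impulse response,
    |A(f)| = exp(-pi * |f| * t / Q): high frequencies decay faster, which
    is what erodes resolution with traveltime."""
    f = np.fft.fftfreq(n, dt)
    return np.exp(-np.pi * np.abs(f) * t_travel / Q)

def apply_absorption(x, dt, t_travel, Q):
    """Attenuate a trace segment as if it had travelled t_travel seconds
    through a medium of quality factor Q (amplitude only, zero phase)."""
    A = absorption_operator(len(x), dt, t_travel, Q)
    return np.real(np.fft.ifft(np.fft.fft(x) * A))
```

Comparing `absorption_operator` for two traveltimes against the recorded source spectrum is one way to carry out the amplitude-matching step the abstract describes; inverting it (with care at attenuated frequencies) gives the time-varying correction operator.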

2 citations