
Showing papers on "Wavelet published in 1986"


Book
30 Apr 1986
TL;DR: In this paper, the authors present the flow of events of the general signal processing system: after the signal has been acquired, manipulations such as simple, optimal, and adaptive filtering are applied to enhance the relevant information present in the signal.
Abstract: First published in 1986: The presentation of the material in the book follows the flow of events of the general signal processing system. After the signal has been acquired, some manipulations are applied in order to enhance the relevant information present in the signal. Simple, optimal, and adaptive filtering are examples of such manipulations. The detection of wavelets is of importance in biomedical signals; they can be detected from the enhanced signal by several methods. The signal very often contains redundancies. When effective storage, transmission, or automatic classification is required, these redundancies have to be extracted.
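The wavelet-detection step mentioned above is often carried out by correlating the enhanced signal with a known template. A minimal matched-filter sketch (the function name and threshold are ours, not the book's):

```python
import numpy as np

def matched_filter_detect(x, template, threshold=0.9):
    """Flag start indices where a known wavelet occurs in x, using
    normalized cross-correlation against a zero-mean template."""
    n = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits = []
    for i in range(len(x) - n + 1):
        seg = x[i:i + n] - x[i:i + n].mean()
        norm = np.linalg.norm(seg)
        if norm > 0 and np.dot(t, seg / norm) > threshold:
            hits.append(i)
    return hits
```

Detections are reported wherever the local normalized correlation exceeds the threshold; a QRS detector in an ECG, for instance, would use a template beat.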

279 citations


Journal ArticleDOI
TL;DR: The Nth-root stack as discussed by the authors is used in the processing of seismic refraction and teleseismic array data: the average of the Nth root of each observation is raised to the Nth power, with the signs of the observations and average maintained.
Abstract: Multichannel geophysical data are usually stacked by calculating the average of the observations on all channels. In the Nth-root stack, the average of the Nth root of each observation is raised to the Nth power, with the signs of the observations and average maintained. When N = 1, the process is identical to conventional linear stacking or averaging. Nth-root stacking has been applied in the processing of seismic refraction and teleseismic array data. In some experiments and certain applications it is inferior to linear stacking, but in others it is superior. Although the variance for an Nth-root stack is typically less than for a linear stack, the mean square error is larger, because of signal attenuation. The fractional amount by which the signal is attenuated depends in a complicated way on the number of data channels, the order (N) of the stack, the signal-to-noise ratio, and the noise distribution. Because the signal-to-noise ratio varies across a wavelet, peaking where the signal is greatest and approaching zero at the zero-crossing points, the attenuation of the signal varies across a wavelet, thereby producing signal distortion. The main visual effect of the distortion is a sharpening of the legs of the wavelet. However, the attenuation of the signal is accompanied by a much greater attenuation of the background noise, leading to a significant contrast enhancement. It is this sharpening of the signal, accompanied by the contrast enhancement, that makes the technique powerful in beam-steering applications of array data. For large values of N, the attenuation of the signal with low signal-to-noise ratios ultimately leads to its destruction. Nth-root stacking is therefore particularly powerful in applications where signal sharpening and contrast enhancement are important but signal distortion is not.
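The sign-preserving root-and-power recipe described above translates directly into a few lines of NumPy (a sketch; the function name is ours):

```python
import numpy as np

def nth_root_stack(traces, N=4):
    """Nth-root stack of multichannel data (channels x samples).

    Each sample's Nth root is taken with its sign preserved, the roots
    are averaged across channels, and the average is raised back to the
    Nth power, again preserving sign.  N = 1 reduces to a linear stack."""
    roots = np.sign(traces) * np.abs(traces) ** (1.0 / N)
    avg = roots.mean(axis=0)
    return np.sign(avg) * np.abs(avg) ** N
```

Because incoherent noise changes sign from channel to channel, its Nth roots average toward zero before the power is restored, which produces the contrast enhancement the abstract describes.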

88 citations


Journal ArticleDOI
TL;DR: In this article, a heuristic model which relates the statistical characteristics of the measured signal to the mean ultrasonic wavelet and attenuation coefficient in different regions of the sample is investigated, and the losses in the backscattered signal are examined using temporal averaging, correlation, and probability distribution functions of the segmented data.
Abstract: Grain size characterization using ultrasonic backscattered signals is an important problem in nondestructive testing of materials. In this paper, a heuristic model which relates the statistical characteristics of the measured signal to the mean ultrasonic wavelet and attenuation coefficient in different regions of the sample is investigated. The losses in the backscattered signal are examined using temporal averaging, correlation, and probability distribution functions of the segmented data. Furthermore, homomorphic processing is used in a novel application to estimate the mean ultrasonic wavelet (as it propagates through the sample) and the frequency‐dependent attenuation. In the work presented, heat‐treated stainless steel samples with various grain sizes are examined. The processed experimental results support the feasibility of the grain size evaluation techniques presented here using the backscattered grain signal.

71 citations


Journal ArticleDOI
TL;DR: The two nonlinear effects of two-tone suppression and of (2f1-f2)-difference tone creation are measured in a hardware model which consists of 90 sections containing nonlinear feedback loops indicating that the model describes cochlear nonlinear preprocessing to a useful approximation.
Abstract: The two nonlinear effects of two-tone suppression and of (2f1-f2)-difference tone creation are measured in a hardware model which consists of 90 sections containing nonlinear feedback loops. The basic data are the level and phase distributions along the 90 sections produced by single tones in the linear passive system which are almost identical to those produced in the nonlinear active system at high levels. Enhancement is created at medium and low input levels resulting in more strongly peaked level-place patterns. Two-tone suppression is, therefore, described as a "de-enhancement" which is produced by the gain reduction in the saturating nonlinearity of the feedback loop in consequence of increasing input levels (that of the suppressor as well!). Characteristics of suppression are given in normalized form. The creation of (2f1-f2)-difference tones is based on the same nonlinear effects. In each section, difference-tone wavelets are created which travel--changing level and phase thereby--to their characteristic place, where they add up to a vector sum corresponding to the audible difference tone. In case of cancellation, the vector sum has to be compensated by an additional tone of the same frequency and amount but opposite phase. Based on this strategy of (2f1-f2)-difference tone development, the relevant relations are measured on the model and averaged either in normalized graphs or in equations in order to offer the possibility to simulate the hardware model on the computer. Psychoacoustically measured cancellation data are compared with data measured using the model. The two data sets agree not only in general but also in many details indicating that the model describes cochlear nonlinear preprocessing to a useful approximation.

58 citations


Journal ArticleDOI
TL;DR: The implementation of element-to-element propagation into the Miller-Geselowitz heart model, so as to automatically generate activation isochrones, is described, utilizing ellipsoidal propagation wavelets to reflect anisotropic propagation in the myocardium.
Abstract: The implementation of element-to-element propagation into the Miller-Geselowitz heart model, so as to automatically generate activation isochrones, is described. This implementation was achieved from initiation sites on the endocardial surface of the model via a Huygens' construction, utilizing ellipsoidal propagation wavelets to reflect anisotropic propagation in the myocardium. Isochrones similar to those specified for normal activation of the original Miller-Geselowitz model were obtained, using propagation velocities derived from published propagation velocities measured in isolated tissue. Further validation of the new model was sought by simulating the Wolff-Parkinson-White syndrome, in which preexcitation of the ventricles of the heart occurs due to an accessory pathway connecting atria and ventricles, resulting in an initial delta wave in the QRS complex of the electrocardiogram. The approximate site of the accessory pathway may be deduced from the subject's body surface potential map pattern during the delta wave, or from the polarities of the delta wave in the 12-lead electrocardiogram, or again from the orientation of the spatial vectorcardiogram during the delta wave. By specifying eight separate accessory pathway initiation sites, followed 40 ms later by normal activation, the isochrones corresponding to preexcitation were simulated. The body surface potential maps, electrocardiograms, and vectorcardiograms were calculated using an inhomogeneous torso model.

50 citations


Journal ArticleDOI
TL;DR: A large number of deconvolution procedures have appeared in the literature during the last three decades, including a number of maximum-likelihood deconvolution (MLD) procedures.
Abstract: A large number of deconvolution procedures have appeared in the literature during the last three decades, including a number of maximum‐likelihood deconvolution (MLD) procedures. The major advantages of the MLD procedures are (1) no assumption is required about the phase of the wavelet (most of the classical deconvolution techniques assume a minimum‐phase wavelet, an assumption that may not be appropriate for many data sets); (2) MLD procedures can resolve closely spaced events (i.e., they are high‐resolution techniques); and (3) they can efficiently handle modeling and measurement errors, as well as backscatter effects (i.e., reflections from small features). A comparative study of six different MLD procedures for estimating the input of a linear, time‐invariant system from measurements, which have been corrupted by additive noise, was made by using a common framework developed from fundamental optimization theory arguments. To date, only the Kormylo and the Chi‐t algorithms can be recommended.

30 citations


Journal ArticleDOI
01 Mar 1986
TL;DR: In this paper, a one-dimensional normal-incidence inversion procedure for reflection seismic data is presented, where a priori knowledge for the unknown parameters, in the form of statistics, is incorporated into a nonuniform layered system, and a maximum a posteriori estimation procedure is used for the estimation of the system's unknown parameters from noisy and band-limited data.
Abstract: In this paper we present a one-dimensional normal-incidence inversion procedure for reflection seismic data. A lossless layered system is considered which is characterized by reflection coefficients and traveltimes. A priori knowledge for the unknown parameters, in the form of statistics, is incorporated into a nonuniform layered system, and a maximum a posteriori estimation procedure is used for the estimation of the system's unknown parameters (i.e., we assume a random reflector model) from noisy and band-limited data. Our solution to the inverse problem includes a downward continuation procedure for estimation of the states of the system. The state sequences are composed of overlapping wavelets. We show that estimation of the unknown parameters of a layer is equivalent to estimation of the amplitude and detection of the time delay of the first wavelet in the upgoing state sequence of the layer. A suboptimal maximum-likelihood deconvolution procedure is employed to perform estimation and detection. The most desirable features of the proposed algorithm are its layer-recursive structure and its ability to process noisy and band-limited data.

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors exploited the synergism of 3D seismic data, wavelet processing, colour display and interactive interpretation in the study of a Gulf of Mexico gas reservoir.
Abstract: Synergism of 3-D seismic data, wavelet processing, colour display and interactive interpretation has been exploited in the study of a Gulf of Mexico gas reservoir. Seismic amplitude has been used as a measure of the proportion of a sand/shale reservoir capable of producing gas. This has led to the mapping of net producible thickness of gas sand. The tuning phenomena resulting from geometric effects alone were studied in detail, and tuning curves of various levels of sophistication were used as the basis for amplitude editing. Statistical tuning curves were derived by interactive cross-plotting and deterministic curves by wavelet extraction. Multiple wavelet side lobes cause multiple maxima in the tuning curve. Depositional effects and intrareservoir communication have also been studied by interactive cross-plotting.
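The geometric tuning effect described above can be reproduced with a simple wedge model: a thin bed bounded by equal-magnitude, opposite-polarity reflections convolved with a wavelet. Here a Ricker wavelet stands in for the extracted wavelet, and all parameters (peak frequency, velocity, sampling) are illustrative assumptions, not values from the study:

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def tuning_curve(thicknesses, v, f=30.0, dt=0.001, n=257):
    """Peak absolute amplitude of the composite reflection from a thin
    bed bounded by equal-magnitude, opposite-polarity reflection
    coefficients, as a function of bed thickness (metres)."""
    w = ricker(f, dt, n)
    amps = []
    for h in thicknesses:
        lag = int(round(2.0 * h / v / dt))   # two-way time in samples
        r = np.zeros(n + lag)
        r[0], r[lag] = 1.0, -1.0
        amps.append(np.abs(np.convolve(r, w)).max())
    return np.array(amps)
```

The curve dims toward zero thickness, peaks near a quarter-wavelength (the tuning thickness), and settles to the isolated-reflection amplitude for thick beds; a wavelet with side lobes adds the secondary maxima the abstract mentions.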

14 citations


Journal ArticleDOI
TL;DR: In this article, the stability of one-dimensional exact inverse methods is considered and it is shown that the results of the inversion procedure are very sensitive to the characteristics of the source wavelet chosen for inversion.
Abstract: It is well known that, despite the fact that one-dimensional inverse methods are well developed, there are still difficulties when these methods are applied to real seismic data. There are several reasons why these methods usually fail. Real seismic data are corrupted by noise. The data typically do not contain the low and high frequencies. This can dramatically degrade the reconstruction of the acoustic impedance as well as affect the resolution. The stability of one-dimensional exact methods is considered. Despite the fact that all these methods require the source time function to be a Dirac delta-function, the author examines the possibility of applying these methods to an arbitrary source function. It is shown that the results of the inversion procedure are very sensitive to the characteristics of the source wavelet chosen for inversion. In practice (in exploration geophysics, for example) it is almost impossible to measure the source wavelet precisely. Therefore, it appears very important to know in which cases small errors in the source wavelet will not cause large errors in a solution. (In other words, in which cases the problem will be well-posed even though the source time function is not impulsive.) Based on the analysis presented, the author draws the conclusion that small uncertainties in the low-frequency portion of the source spectrum can lead to severe instabilities, whereas small uncertainties in the high-frequency portion hardly affect the recoverability of the acoustic impedance.

14 citations


Journal ArticleDOI
TL;DR: In this paper, a surface-consistent wavelet solution (common source, receiver, and offset) is used in preference to a single-channel operation to improve the robustness of deconvolution.
Abstract: The presence of random additive noise is the most important degrading factor in the deconvolution of seismic data. Noise-induced distortion of signal phase and amplitude produces severe stack attenuation, makes poststack recovery difficult with spectral enhancement techniques, and leaves the stratigraphic imprint unclear. The random noise component in the data is estimated from trace segments before the first arrivals and at the bottom of the record beyond seismic basement. An autocorrelation of this noise is used to adjust the signal autocorrelation prior to Wiener-Levinson deconvolution filter design. To improve the robustness of the technique, an iterative surface-consistent wavelet solution (common source, receiver, and offset) is used in preference to a single-channel operation. Use of this deconvolution technique is shown by synthetic and case examples to result in correct phase alignment, enhanced stacking fidelity, and extended signal bandwidth even on very noisy data. The improvement, coupled with sensible handling of coherent noise energy, is crucial for the interpretation of subtle stratigraphic plays in many areas.
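A single-channel sketch of the noise-autocorrelation adjustment described above (the paper's actual solution is surface-consistent and iterative; the function names, prewhitening value, and direct Toeplitz solve are ours):

```python
import numpy as np

def noise_adjusted_spiking_decon(trace, noise_segment, nfilt=30, prewhiten=0.001):
    """Wiener-Levinson spiking deconvolution in which the trace
    autocorrelation is corrected by a noise autocorrelation estimated
    from a signal-free segment before the filter is designed."""
    def acf(x, nlags):
        full = np.correlate(x, x, mode="full")
        return full[len(x) - 1:len(x) - 1 + nlags]
    # scale the noise ACF to the trace length before subtracting
    r = acf(trace, nfilt) - acf(noise_segment, nfilt) * (len(trace) / len(noise_segment))
    r[0] *= 1.0 + prewhiten                  # prewhitening stabilises the solve
    R = np.array([[r[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    rhs = np.zeros(nfilt)
    rhs[0] = 1.0                             # desired spike at zero lag
    f = np.linalg.solve(R, rhs)              # Levinson recursion in practice
    return np.convolve(trace, f)[:len(trace)]
```

The noise segments would come from before the first arrivals or below seismic basement, as the abstract specifies; subtracting their autocorrelation keeps the filter from trying to whiten the noise along with the signal.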

10 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of two-sided Wiener spiking and shaping filters is compared with that of the zero-lag (one-sided) operators which can be evaluated from the reflected arrival sequence alone by assuming a minimum phase source wavelet.
Abstract: Wiener 'spiking' deconvolution of seismic traces in the absence of a known source wavelet relies upon the use of digital filters, which are optimum in a least-squares error sense only if the wavelet to be deconvolved is minimum phase. In the marine environment in particular this condition is frequently violated, since bubble pulse oscillations result in source signatures which deviate significantly from minimum phase. The degree to which the deconvolution is impaired by such violation is generally difficult to assess, since without a measured source signature there is no optimally deconvolved trace with which the spiked trace may be compared. A recently developed near-bottom seismic profiler used in conjunction with a surface air gun source produces traces which contain the far-field source signature as the first arrival. Knowledge of this characteristic wavelet permits the design of two-sided Wiener spiking and shaping filters which can be used to accurately deconvolve the remainder of the trace. In this paper the performance of such optimum-lag filters is compared with that of the zero-lag (one-sided) operators which can be evaluated from the reflected arrival sequence alone by assuming a minimum phase source wavelet. Results indicate that the use of zero-lag operators on traces containing non-minimum phase wavelets introduces significant quantities of noise energy into the seismic record. Signal-to-noise ratios may however be preserved or even increased during deconvolution by the use of optimum-lag spiking or shaping filters. A debubbling technique involving matched filtering of the trace with the source wavelet followed by optimum-lag Wiener deconvolution did not give a higher quality result than can be obtained simply by the application of a suitably chosen Wiener shaping filter. However, cross-correlation of an optimum-lag spike filtered trace with the known 'actual output' of the filter when presented with the source signature is found to enhance signal-to-noise ratio whilst maintaining improved resolution.
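The two-sided Wiener shaping filter discussed above can be sketched as a least-squares problem: find the filter whose convolution with the known wavelet best matches a desired output. With a delayed spike as the desired output this becomes an optimum-lag spiking filter (names and sizes here are illustrative):

```python
import numpy as np

def wiener_shaping_filter(wavelet, desired, nfilt):
    """Least-squares (Wiener) filter f of length nfilt such that
    f * wavelet approximates `desired`.  With `desired` a delayed
    spike, this is an optimum-lag spiking filter."""
    nout = len(wavelet) + nfilt - 1
    C = np.zeros((nout, nfilt))          # convolution matrix: C @ f = f * wavelet
    for j in range(nfilt):
        C[j:j + len(wavelet), j] = wavelet
    d = np.zeros(nout)
    d[:min(len(desired), nout)] = desired[:nout]
    f, *_ = np.linalg.lstsq(C, d, rcond=None)
    return f
```

Allowing a nonzero spike lag is what rescues performance on non-minimum-phase wavelets: the least-squares error at the best lag can be far smaller than at zero lag.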

Journal ArticleDOI
TL;DR: In this article, the accuracy of the phase spectra from the signatures, impulse responses and other wavelets observed in seismic data is questioned with special reference to Texas Instruments DFS IV and DFS V recording filters.
Abstract: Analysis of the phase spectra from the signatures, impulse responses and other wavelets observed in seismic data leads to the construction of equivalent minimum-phase functions. The accuracy of such computations using digitally sampled data is questioned with special reference to Texas Instruments DFS IV and DFS V recording filters. Results vary with the lengths and sample rates of the time functions, and further errors may be introduced when implementing the Hilbert transform. Such problems are related to poor resolution in the low amplitude areas of the spectrum. Techniques for correction are described. With appropriate shaping a reasonably accurate phase spectrum may be computed for the minimum-phase function. The generation of minimum-phase wavelets within the processing sequence is briefly discussed.
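The minimum-phase construction discussed above is commonly implemented through the real cepstrum, which is equivalent to taking a Hilbert transform of the log amplitude spectrum. A sketch follows, with the long zero-padded FFT and the spectral floor standing in for the "appropriate shaping" the abstract mentions (parameter values are ours):

```python
import numpy as np

def minimum_phase_equivalent(w, nfft=4096, eps=1e-8):
    """Minimum-phase wavelet with (approximately) the amplitude
    spectrum of w, constructed via the real cepstrum.  A long nfft and
    the floor eps mitigate the poor resolution in low-amplitude parts
    of the spectrum that the abstract warns about."""
    W = np.abs(np.fft.fft(w, nfft))
    ceps = np.fft.ifft(np.log(np.maximum(W, eps))).real
    # fold the cepstrum: keep zero/Nyquist terms, double positive quefrencies
    fold = np.zeros(nfft)
    fold[0] = ceps[0]
    fold[1:nfft // 2] = 2.0 * ceps[1:nfft // 2]
    fold[nfft // 2] = ceps[nfft // 2]
    wmin = np.fft.ifft(np.exp(np.fft.fft(fold))).real
    return wmin[:len(w)]
```

Applied to a wavelet that is already minimum phase, the construction should return the wavelet itself, which makes a convenient sanity check.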

Proceedings ArticleDOI
01 Apr 1986
TL;DR: The performance of the complex cepstrum as used to recover wavelets in the presence of distorted echoes and noise is studied using simulated signals.
Abstract: The performance of the complex cepstrum as used to recover wavelets in the presence of distorted echoes and noise is studied using simulated signals. The distortions and noise are chosen to be representative of practical applications encountered in acoustic source location. On the whole the complex cepstrum is seen to perform well for the situations considered.
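For reference, the complex cepstrum of a wavelet plus a scaled, delayed echo shows the echo as an impulse train at multiples of the delay, which is what makes it usable for wavelet recovery and source location. A minimal sketch (not the paper's simulation setup; phase unwrapping is done naively and works here because the test signal is minimum phase):

```python
import numpy as np

def complex_cepstrum(x, nfft=1024):
    """Complex cepstrum via FFT with phase unwrapping.  Echoes appear
    as impulses at multiples of their delay (quefrency)."""
    X = np.fft.fft(x, nfft)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real
```

In practice the echo contribution can then be lifted out (or the low-quefrency part kept) before transforming back, recovering the wavelet.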

Journal ArticleDOI
Jose Eduardo Thomas
TL;DR: In this article, the authors discuss deconvolution of the source wavelet from the field seismic traces, a data-processing technique applied to improve the resolution of the seismic trace.
Abstract: One of the fundamental problems in exploration seismology is to obtain a seismic record which has both high resolution and high ratio of signal to noise. If the seismic trace has a fair signal‐to‐noise (S/N) ratio, then subsequent data processing can be applied to improve the resolution. Basically, this data‐processing technique is the deconvolution of the source wavelet from the field seismic traces.

Journal ArticleDOI
TL;DR: A general purpose time domain algorithm is provided for the calculation of down-hole zero offset synthetic seismograms, developed from a solution of Robinson's model with appropriate boundary and initial conditions, which provides information about variations in the source impulse due to transmission losses and also the properties of multiple reflections.
Abstract: A general purpose time domain algorithm is provided for the calculation of down-hole zero offset synthetic seismograms. Unlike previous vertical synthetic seismograms, the algorithm allows both source and receiver to be arbitrarily located in a borehole, and can, therefore, be used to model the results obtained by various recently developed acoustic well log or vertical seismic data gathering techniques, including the Yo-Yo arrangement. As it is developed from a solution of Robinson's model with appropriate boundary and initial conditions, the algorithm provides information about variations in the source impulse due to transmission losses and also the properties of multiple reflections. A program based on the ARMA and lattice filter structures improves the efficiency of the calculation. The program allows the use of a flexible source wavelet which can easily be designed by the user. In addition, the program can be used on personal computers.
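A heavily stripped-down sketch of the convolutional ingredient of such a synthetic: primaries with two-way transmission losses, one sample of two-way time per layer. The paper's algorithm goes much further (multiples via Robinson's model, arbitrary source/receiver depths), so this is only the simplest special case, with names of our choosing:

```python
import numpy as np

def primaries_synthetic(rc, wavelet):
    """Zero-offset synthetic from interface reflection coefficients rc
    (one sample of two-way time per layer), with two-way transmission
    losses applied; multiple reflections are deliberately neglected."""
    amps = []
    T = 1.0                      # accumulated two-way transmission factor
    for r in rc:
        amps.append(T * r)       # primary from this interface
        T *= 1.0 - r * r         # energy lost crossing it (down and up)
    return np.convolve(np.array(amps), np.asarray(wavelet, dtype=float))
```

The shrinking factor T is exactly the "variation in the source impulse due to transmission losses" that the full algorithm tracks alongside the multiples.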

Journal ArticleDOI
TL;DR: In this paper, the authors present a derivation of the formula for filtering a transmitted SH wavelet by short-period multiples in a spherically layered Earth using a continuous, rather than a discrete formulation and regard the impedance and the velocity as random variables.
Abstract: We present a derivation of the formula for filtering a transmitted SH wavelet by short-period multiples in a spherically layered Earth. We use a continuous, rather than a discrete, formulation and regard the impedance and the velocity as random variables. The mean shear displacement represents the propagating wavelet as modified by short-period multiples. Standard procedures and approximations lead to the dispersion relation of the mean displacement. To describe the stratigraphic filtering we introduce a complex quantity F such that a wavelet which has travelled a time ΔT is modified by the filter exp{iωFΔT}. The impact of the higher angular harmonic modes is shown to produce a relative enhancement of those modes over the low angular harmonic modes due to fluctuations in the shear-wave propagation velocity. Numerical estimates indicate that the sizes of the apparent attenuation of the mean field and the time delay introduced by the short-period multiples sit squarely in the regime where they produce a non-negligible distortion of the SH modes of propagation in both phase and amplitude.

Proceedings ArticleDOI
TL;DR: In this article, an extension of the theory of source wavelet estimation is proposed, based on extrapolation of the wave field measured at depth, upward to the free surface, which results in a wavelet which generally includes ghosts and can be used for source signature deconvolution and deghosting.
Abstract: A new deterministic technique for wavelet estimation and deconvolution of seismic traces was recently introduced. This impedance‐type technique was developed for a marine environment where both the source and the receivers are located inside a homogeneous layer of water. In this work, an extension of the theory of source wavelet estimation is proposed. As in previous publications, this method is based on extrapolation of the wave field measured at depth, upward to the free surface. The extrapolation is performed by using the finite‐difference approximation to the full inhomogeneous wave equation. The extrapolation results in a wavelet which generally includes ghosts and can be used for source signature deconvolution and deghosting. The method needs two closely spaced receivers and is applicable for arbitrary locations of the source and the receivers in one‐dimensional multilayered models, provided the source is above the receivers; furthermore, it can be applied to both marine and land data. Application o...

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This paper addresses the problem of selection and identification of non-zero coefficients in the MA models (pulse position and amplitude) globally in the Fourier transform domain using a (complex) Pisarenko procedure, instead of sequentially.
Abstract: The problem of determining sparse MA models has received much attention in recent years and is of fundamental importance in various application areas such as speech (multi-pulse excitation) or seismic data (wavelet time of arrival). This paper addresses the problem of selection and identification of non-zero coefficients in the MA models (pulse position and amplitude). The selection is done globally in the Fourier transform domain using a (complex) Pisarenko procedure, instead of sequentially. Moreover, the pulses being frequently placed in contiguous locations as a short salvo, a new MA identification method is proposed for this special case. This method uses only the AR model coefficients and the prediction residual as inputs.