
Showing papers in "Geophysics in 2001"


Journal ArticleDOI
TL;DR: In this article, a nonlinear conjugate gradients (NLCG) algorithm was proposed to minimize an objective function that penalizes data residuals and second spatial derivatives of resistivity.
Abstract: We investigate a new algorithm for computing regularized solutions of the 2-D magnetotelluric inverse problem. The algorithm employs a nonlinear conjugate gradients (NLCG) scheme to minimize an objective function that penalizes data residuals and second spatial derivatives of resistivity. We compare this algorithm theoretically and numerically to two previous algorithms for constructing such “minimum‐structure” models: the Gauss‐Newton method, which solves a sequence of linearized inverse problems and has been the standard approach to nonlinear inversion in geophysics, and an algorithm due to Mackie and Madden, which solves a sequence of linearized inverse problems incompletely using a (linear) conjugate gradients technique. Numerical experiments involving synthetic and field data indicate that the two algorithms based on conjugate gradients (NLCG and Mackie‐Madden) are more efficient than the Gauss‐Newton algorithm in terms of both computer memory requirements and CPU time needed to find accurate solutio...
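
As a rough sketch of the kind of objective being minimized, the following uses SciPy's nonlinear conjugate-gradients method on a Tikhonov-style functional; a toy linear operator G stands in for the paper's 2-D magnetotelluric forward modeling, and the names G, D2, and lam are illustrative, not from the paper.

```python
# Minimal sketch of minimum-structure inversion via nonlinear conjugate
# gradients; a toy linear operator G stands in for 2-D MT forward modeling.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_model, n_data = 50, 30
G = rng.normal(size=(n_data, n_model))             # stand-in forward operator
m_true = np.sin(np.linspace(0.0, np.pi, n_model))  # a smooth "resistivity" model
d_obs = G @ m_true + 0.01 * rng.normal(size=n_data)

D2 = np.diff(np.eye(n_model), n=2, axis=0)  # second-spatial-derivative operator
lam = 1.0                                   # regularization tradeoff

def objective(m):
    """Penalize data residuals plus second-derivative roughness of the model."""
    r = G @ m - d_obs
    return r @ r + lam * np.sum((D2 @ m) ** 2)

def gradient(m):
    return 2.0 * G.T @ (G @ m - d_obs) + 2.0 * lam * D2.T @ (D2 @ m)

result = minimize(objective, np.zeros(n_model), jac=gradient, method="CG")
print(result.nit, result.fun)  # iterations and final objective value
```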

1,185 citations


Journal ArticleDOI
TL;DR: In this article, a perfectly matched absorbing layer model for the velocity-stress formulation of elastodynamics is proposed, which decomposes each component of the unknown into two auxiliary components: a component orthogonal to the boundary and a component parallel to it.
Abstract: We present and analyze a perfectly matched, absorbing layer model for the velocity-stress formulation of elastodynamics. The principal idea of this method consists of introducing an absorbing layer in which we decompose each component of the unknown into two auxiliary components: a component orthogonal to the boundary and a component parallel to it. A system of equations governing these new unknowns then is constructed. A damping term finally is introduced for the component orthogonal to the boundary. This layer model has the property of generating no reflection at the interface between the free medium and the artificial absorbing medium. In practice, both the boundary condition introduced at the outer boundary of the layer and the dispersion resulting from the numerical scheme produce a small reflection which can be controlled even with very thin layers. As we will show with several experiments, this model gives very satisfactory results; namely, the reflection coefficient, even in the case of heterogeneous, anisotropic media, is about 1% for a layer thickness of five space discretization steps.
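
The damping term is typically graded from zero at the interface to a maximum at the outer edge of the layer. Below is a minimal sketch assuming the common polynomial profile d(x) = d0 (x/L)^p, with d0 chosen from a target theoretical reflection coefficient; the names d0, p, and r0 are illustrative.

```python
# Sketch of a polynomial PML damping profile: d(x) = d0 * (x/L)^p, with d0
# set so that the theoretical reflection coefficient of the layer is r0.
import numpy as np

def pml_damping(n_layer, dx, velocity, p=2, r0=1e-3):
    """Damping coefficient at each node inside the absorbing layer."""
    L = n_layer * dx
    d0 = -(p + 1) * velocity * np.log(r0) / (2.0 * L)
    x = np.arange(1, n_layer + 1) * dx  # depth into the layer
    return d0 * (x / L) ** p

print(pml_damping(5, 10.0, 3000.0))  # a five-cell layer, as in the paper's tests
```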

739 citations


Journal ArticleDOI
TL;DR: A new method for predicting well‐log properties from seismic data is described; it derives a multiattribute transform, a linear or nonlinear mapping between a subset of seismic attributes and the target log values.
Abstract: We describe a new method for predicting well‐log properties from seismic data. The analysis data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs theoretically may be of any type; however, the greatest success to date has been in predicting porosity logs. From the 3-D seismic volume a series of sample‐based attributes is calculated. The objective is to derive a multiattribute transform, which is a linear or nonlinear transform between a subset of the attributes and the target log values. The selected subset is determined by a process of forward stepwise regression, which derives increasingly larger subsets of attributes. An extension of conventional crossplotting involves the use of a convolutional operator to resolve frequency differences between the target logs and the seismic data. In the linear mode, the transform consists of a series of weights derived by least‐squares minimization. In the nonlinear mode, a neural network is trained, using the selected att...
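
For intuition, here is a minimal sketch of forward stepwise attribute selection by least squares, using synthetic stand-ins for the attribute matrix and target log; it is not the authors' implementation and omits the convolutional operator and validation steps.

```python
# Forward stepwise regression: greedily grow the attribute subset, at each
# step adding the attribute that most reduces the least-squares error.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_attr = 200, 8
A = rng.normal(size=(n_samples, n_attr))  # candidate seismic attributes
y = 2.0 * A[:, 3] - 0.5 * A[:, 5] + 0.1 * rng.normal(size=n_samples)

def stepwise_select(A, y, n_keep):
    chosen, errors = [], []
    for _ in range(n_keep):
        best, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in chosen:
                continue
            X = A[:, chosen + [j]]
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = np.sum((y - X @ w) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
        errors.append(best_err)
    return chosen, errors

print(stepwise_select(A, y, 3))  # picks attributes 3 and 5 first
```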

484 citations


Journal ArticleDOI
N. Ross Hill
TL;DR: In this paper, the author presents a prestack Gaussian beam migration method, an extension of Kirchhoff migration that overcomes many of the problems caused by multipathing.
Abstract: Kirchhoff migration is the most popular method of three‐dimensional prestack depth migration because of its flexibility and efficiency. Its effectiveness can become limited, however, when complex velocity structure causes multipathing of seismic energy. An alternative is Gaussian beam migration, which is an extension of Kirchhoff migration that overcomes many of the problems caused by multipathing. Unlike first‐arrival and most‐energetic‐arrival methods, which retain only one traveltime, this alternative method retains most arrivals by the superposition of Gaussian beams. This paper presents a prestack Gaussian beam migration method that operates on common‐offset gathers. The method is efficient because the computation of beam superposition isolates summations that do not depend on the seismic data and evaluates these integrals by considering their saddle points. Gaussian beam migration of the two‐dimensional Marmousi test data set demonstrates the method’s effectiveness for structural imaging in a case w...

444 citations


Journal ArticleDOI
TL;DR: In this article, explicit expressions for computing saturation- and pressure-related changes from time-lapse seismic data are derived and tested on a real time-lapse seismic data set; the necessary input is near- and far-offset stacks for the baseline survey and the repeat survey.
Abstract: Explicit expressions for computing saturation‐ and pressure‐related changes from time‐lapse seismic data have been derived and tested on a real time‐lapse seismic data set. Necessary input is near- and far‐offset stacks for the baseline seismic survey and the repeat survey. The method has been tested successfully in a segment where pressure measurements in two wells verify a pore‐pressure increase of 5 to 6 MPa between the baseline survey and the monitor survey. Estimated pressure changes using the proposed relationships fit very well with observations. Between the baseline and monitor seismic surveys, 27% of the estimated recoverable hydrocarbon reserves were produced from this segment. The estimated saturation changes also agree well with observed changes, apart from some areas in the water zone that are mapped as being exposed to saturation changes (which is unlikely). Saturation changes in other segments close to the original oil‐water contact and the top reservoir interface are also estimated and conf...

423 citations


Journal ArticleDOI
TL;DR: An extension to Groom-Bailey decomposition is proposed in which a global minimum is sought to determine the most appropriate strike direction and telluric distortion parameters for a range of frequencies and a set of sites.
Abstract: Accurate interpretation of magnetotelluric data requires an understanding of the directionality and dimensionality inherent in the data, and valid implementation of an appropriate method for removing the effects of shallow, small-scale galvanic scatterers on the data to yield responses representative of regional-scale structures. The galvanic distortion analysis approach advocated by Groom and Bailey has become the most adopted method, rightly so given that the approach decomposes the magnetotelluric impedance tensor into determinable and indeterminable parts, and tests statistically the validity of the galvanic distortion assumption. As proposed by Groom and Bailey, one must determine the appropriate frequency-independent telluric distortion parameters and geoelectric strike by fitting the seven-parameter model on a frequency-by-frequency and site-by-site basis independently. Although this approach has the attraction that one gains a more intimate understanding of the data set, it is rather time-consuming and requires repetitive application. We propose an extension to Groom-Bailey decomposition in which a global minimum is sought to determine the most appropriate strike direction and telluric distortion parameters for a range of frequencies and a set of sites. Also, we show how an analytically-derived approximate Hessian of the objective function can reduce the required computing time. We illustrate application of the analysis to two synthetic data sets and to real data. Finally, we show how the analysis can be extended to cover the case of frequency-dependent distortion caused by the magnetic effects of the galvanic charges.

414 citations


Journal ArticleDOI
TL;DR: The use of 4-D seismic has grown exponentially over the past decade and is expected to continue to do so; as mentioned in this paper, there are currently about 75 active projects worldwide and more than 100 cumulative projects.
Abstract: Time‐lapse seismic reservoir monitoring has advanced rapidly over the past decade. There are currently about 75 active projects worldwide, and more than 100 cumulative projects in the past decade or so. The present total annual expenditures on 4-D seismic projects are on the order of $50–100 million US. This currently represents a much smaller market than 3-D seismic, but the use of 4-D seismic has grown exponentially over the past decade and is expected to continue to do so.

362 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine the use of two repeatability metrics in assessing the similarity of two sets of repeat 2-D lines acquired with a marine point-receiver system: in one repeat set no streamer positioning control was in use, while in the other positioning differences were minimized using streamer positioning control.
Abstract: Time-lapse data are increasingly used to study production-induced changes in the seismic response of a reservoir as part of a reservoir management program. However, residual differences in the repeated time-lapse data that are independent of changes in the subsurface geology impact the effectiveness of the method. These differences depend on many factors such as signature control, streamer positioning, and recording fidelity differences between the two surveys. Such factors may be regarded as contributing to the time-lapse noise, and any effort designed to improve the time-lapse signal-to-noise ratio must address the quantifiable repeatability of the seismic survey. Although there are counter-examples (for example, Johnston et al., 2000), minimization of the acquisition footprint and repeatability of the geometry to equalize residual footprints in both surveys are considered important. This has been a key objective in the development of point-receiver acquisition systems. In this study, which develops the analysis from Kragh and Christie (2001), we examine the use of two repeatability metrics in assessing the similarity of two sets of repeat 2-D lines acquired with a marine point-receiver system. In one repeat set, no streamer positioning control was in use; in the other repeat set, positioning differences were minimized using the streamer positioning control. There does not appear to be a standard measure of repeatability, defined as a metric, to quantify the likeness of two traces. One commonly used metric is the normalized rms difference of two traces, $a_t$ and $b_t$, within a given window $t_1$-$t_2$: the rms of the difference divided by the average rms of the inputs, expressed as a percentage:

\[ \mathrm{NRMS} = \frac{200 \times \mathrm{RMS}(a_t - b_t)}{\mathrm{RMS}(a_t) + \mathrm{RMS}(b_t)}, \]

where the rms operator is defined as

\[ \mathrm{RMS}(x_t) = \sqrt{\frac{\sum_{t_1}^{t_2} (x_t)^2}{N}} \]

and $N$ is the number of samples in the interval $t_1$-$t_2$. The values of NRMS are not intuitive and …
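
The metric above translates directly into code; a minimal sketch with synthetic traces:

```python
# Direct implementation of the NRMS repeatability metric defined above.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

def nrms(a, b):
    """Normalized rms difference of two traces over a common window (%)."""
    return 200.0 * rms(np.subtract(a, b)) / (rms(a) + rms(b))

t = np.linspace(0.0, 1.0, 501)
print(nrms(np.sin(40 * t), np.sin(40 * t)))         # identical traces: 0%
print(nrms(np.sin(40 * t), 1.01 * np.sin(40 * t)))  # 1% amplitude change: ~1%
```

NRMS runs from 0% for identical traces to 200% for traces of equal amplitude and opposite polarity; two uncorrelated traces of equal power give roughly 141%.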

346 citations


Journal ArticleDOI
TL;DR: In this article, the common-reflection-surface (CRS) stack provides a zero-offset simulation from seismic multicoverage reflection data, along with kinematic wavefield attribute sections that can be used to derive the 2-D macrovelocity model.
Abstract: The common‐reflection‐surface stack provides a zero‐offset simulation from seismic multicoverage reflection data. Whereas conventional reflection imaging methods (e.g. the NMO/dip moveout/stack or prestack migration) require a sufficiently accurate macrovelocity model to yield appropriate results, the common‐reflection‐surface (CRS) stack does not depend on a macrovelocity model. We apply the CRS stack to a 2-D synthetic seismic multicoverage dataset. We show that it not only provides a high‐quality simulated zero‐offset section but also three important kinematic wavefield attribute sections, which can be used to derive the 2-D macrovelocity model. We compare the multicoverage‐data‐derived attributes with the model‐derived attributes computed by forward modeling. We thus confirm the validity of the theory and of the data‐derived attributes. For 2-D acquisition, the CRS stack leads to a stacking surface depending on three search parameters. The optimum stacking surface needs to be determined for each point...

313 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the mechanics of seismic migration, as well as some of the problems related to it, such as algorithmic accuracy and efficiency, and velocity estimation.
Abstract: Historically, seismic migration has been the practice (science, technology, and craft) of collapsing diffraction events on unmigrated records to points, thereby moving (“migrating”) reflection events to their proper locations, creating a true image of structures within the earth. Over the years, the scope of migration has broadened. What began as a structural imaging tool is evolving into a tool for velocity estimation and attribute analysis, making detailed use of the amplitude and phase information in the migrated image. With its expanded scope, migration has moved from the final step of the seismic acquisition and processing flow to a more central one, with links to both the processes preceding and following it. In this paper, we describe the mechanics of migration (the algorithms) as well as some of the problems related to it, such as algorithmic accuracy and efficiency, and velocity estimation. We also describe its relationship with other processes, such as seismic modeling. Our approach is tutorial; we avoid presenting the finest details of either the migration algorithms themselves or the problems to which migration is applied. Rather, we focus on presenting the problems themselves, in the hope that most geophysicists will be able to gain an appreciation of where this imaging method fits in the larger problem of searching for hydrocarbons.

249 citations


Journal ArticleDOI
TL;DR: This tutorial is about the Bayesian approach to the solution of the ubiquitous inverse problem; it presents the authors' own view, which may not appeal to, let alone be agreed to by, all.
Abstract: It is unclear whether one can (or should) write a tutorial about Bayes. It is a little like writing a tutorial about the sense of humor. However, this tutorial is about the Bayesian approach to the solution of the ubiquitous inverse problem. Inasmuch as it is a tutorial, it has its own special ingredients. The first is that it is an overview; details are omitted for the sake of the grand picture. In fractal language, it is the progenitor of the complex pattern. As such, it is a vision of the whole. The second is that it does, of necessity, assume some ill-defined knowledge on the part of the reader. Finally, this tutorial presents our view. It may not appeal to, let alone be agreed to, by all.

Journal ArticleDOI
Zhijing Wang
TL;DR: In the past 50 years or so, tremendous progress has been made in studying physical properties of rocks and minerals in relation to seismic exploration and earthquake seismology as discussed by the authors, and many theories and experimental results have played important roles in advancing earth sciences and exploration technologies.
Abstract: During the past 50 years or so, tremendous progress has been made in studying physical properties of rocks and minerals in relation to seismic exploration and earthquake seismology. During this period, many theories have been developed and many experiments have been carried out. Some of these theories and experimental results have played important roles in advancing earth sciences and exploration technologies. This tutorial paper attempts to summarize some of these results.

Journal ArticleDOI
TL;DR: In this article, statistical rock physics techniques combined with seismic information can be used to classify reservoir lithologies and pore fluids, and the methods were applied to a North Sea turbidite system.
Abstract: Reliably predicting lithologic and saturation heterogeneities is one of the key problems in reservoir characterization. In this study, we show how statistical rock physics techniques combined with seismic information can be used to classify reservoir lithologies and pore fluids. One of the innovations was to use a seismic impedance attribute (related to the VP/VS ratio) that incorporates far‐offset data, but at the same time can be practically obtained using normal incidence inversion algorithms. The methods were applied to a North Sea turbidite system. We incorporated well log measurements with calibration from core data to estimate the near‐offset and far‐offset reflectivity and impedance attributes. Multivariate probability distributions were estimated from the data to identify the attribute clusters and their separability for different facies and fluid saturations. A training data set was set up using Monte Carlo simulations based on the well-log-derived probability distributions. Fluid substitution by Ga...
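
A heavily simplified sketch of the classification step, assuming Gaussian likelihoods fitted to stand-in attribute clusters; the paper builds its training data from well logs, core calibration, and Monte Carlo simulation, so all numbers and class names here are illustrative.

```python
# Maximum-likelihood facies/fluid classification from two impedance
# attributes, with multivariate-normal PDFs estimated from training points.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
# Stand-in training clusters: (near-offset impedance, far-offset impedance).
classes = {
    "oil sand":   rng.normal([5.8, 4.9], 0.2, size=(500, 2)),
    "brine sand": rng.normal([6.2, 5.6], 0.2, size=(500, 2)),
    "shale":      rng.normal([6.8, 6.1], 0.2, size=(500, 2)),
}
pdfs = {name: multivariate_normal(pts.mean(0), np.cov(pts.T))
        for name, pts in classes.items()}

def classify(sample):
    """Most likely class for one attribute pair."""
    return max(pdfs, key=lambda name: pdfs[name].pdf(sample))

print(classify([5.9, 5.0]))  # -> "oil sand"
```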

Journal ArticleDOI
TL;DR: In this paper, the authors developed a suite of new seismic attributes that reduces the input 20-60 running-window spectral components down to a workable subset, allowing them to quickly map thin-bed tuning effects in three dimensions.
Abstract: Running window seismic spectral decomposition has proven to be a very powerful tool in analyzing difficult‐to‐delineate thin‐bed tuning effects associated with variable‐thickness sand channels, fans, and bars along an interpreted seismic horizon or time slice. Unfortunately, direct application of spectral decomposition to a large 3‐D data set can result in a rather unwieldy 4‐D cube of data. We develop a suite of new seismic attributes that reduces the input 20–60 running window spectral components down to a workable subset that allows us to quickly map thin‐bed tuning effects in three dimensions. We demonstrate the effectiveness of these new attributes by applying them to a large spec survey from the Gulf of Mexico. These two thin‐bed seismic attributes provide a fast, economic tool that, when coupled with other attributes such as seismic coherence and when interpreted within the framework of geomorphology and sequence stratigraphy, can help us quickly evaluate large 3‐D seismic surveys. Ironically, in a...
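
A minimal sketch of one plausible reduction, assuming the workable subset is the peak spectral amplitude and its corresponding peak frequency (a common pairing; the paper's exact attribute definitions may differ), computed from a stand-in spectral cube:

```python
# Collapse a running-window spectral-decomposition cube along the frequency
# axis into two thin-bed attributes: peak amplitude and peak frequency.
import numpy as np

rng = np.random.default_rng(3)
n_t, n_x, n_f = 100, 50, 41            # time, trace, and frequency axes
freqs = np.linspace(10.0, 90.0, n_f)   # Hz
spec = rng.random((n_t, n_x, n_f))     # stand-in cube (one spatial axis here)

peak_amp = spec.max(axis=-1)                # peak spectral amplitude
peak_freq = freqs[spec.argmax(axis=-1)]     # frequency at which it occurs

print(peak_amp.shape, peak_freq.shape)  # two mappable volumes, no f-axis left
```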

Journal ArticleDOI
TL;DR: It is shown that CIGs calculated by common‐shot or common‐offset migration can be strongly affected by artifacts, even when a correct velocity model is used for the migration, and a novel strategy is proposed: compute CIGs versus the diffracting/reflecting angle.
Abstract: Complex velocity models characterized by strong lateral variations are certainly a great motivation, but also a great challenge, for depth imaging. In this context, some unexpected results can occur when using depth imaging algorithms. In general, after a common‐shot or common‐offset migration, the resulting depth images are sorted into common‐image gathers (CIG), for further processing such as migration‐based velocity analysis or amplitude‐variation‐with‐offset analysis. In this paper, we show that CIGs calculated by common‐shot or common‐offset migration can be strongly affected by artifacts, even when a correct velocity model is used for the migration. The CIGs are simply not flat, due to unexpected curved events (kinematic artifacts) and strong lateral variations of the amplitude (dynamic artifacts). Kinematic artifacts do not depend on the migration algorithm provided it can take into account lateral variations of the velocity model. This can be observed when migrating the 2‐D Marmousi dataset either with a wave‐equation migration or with a multivalued Kirchhoff migration/inversion. On the contrary, dynamic artifacts are specific to multi‐arrival ray‐based migration/inversion. This approach, which should provide a quantitative estimation of the reflectivity of the model, provides in this context dramatic results. In this paper, we propose an analysis of these artifacts through the study of the ray‐based migration/inversion operator. The artifacts appear when migrating a single‐fold subdata set with multivalued ray fields. They are due to the ambiguous focusing of individual reflected events at different locations in the image. No information is a priori available in the single‐fold data set for selecting the focusing position, while migration of multifold data would provide this information and remove the artifacts by the stack of the CIGs. Analysis of the migration/inversion operator provides a physical condition, the imaging condition, for ensuring artifact-free CIGs. The specific cases of common‐shot and common‐offset single‐fold gathers are studied. It appears clearly that the imaging condition generally breaks down in complex velocity models for both these configurations. For artifact-free CIGs, we propose a novel strategy: compute CIGs versus the diffracting/reflecting angle. Working in the angle domain seems the natural way for unfolding multivalued ray fields, and it can be demonstrated theoretically and practically that common‐angle imaging satisfies the imaging condition in the great majority of cases. Practically, the sorting into angle gathers cannot be done a priori over the data set, but is done in the inner depth migration loop. Depth‐migrated images are obtained for each angle range. A canonical example is used for illustrating the theoretical derivations. Finally, an application to the Marmousi model is presented, demonstrating the relevance of the approach.

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive data set of elastic properties of solid clays that commonly occur in, or are related to, petroleum reservoirs, using the weighted Hashin-Shtrikman average.
Abstract: Clay minerals are perhaps the most abundant materials in the earth’s upper crust. As such, their elastic properties are extremely important in seismic exploration, seismic reservoir characterization, and sonic‐log interpretation. Because little exists in the literature on elastic properties of clays, we have designed a method of measuring effective elastic properties of solid clays (clays without pores). In this method, clay minerals are mixed with a material with known elastic properties to make composite samples. Elastic properties of these clay minerals are then inverted from the measured elastic properties of the composite samples using the weighted Hashin‐Shtrikman average. Using this method, we have measured 66 samples of 16 types of clays. In this paper, we present a comprehensive data set of elastic properties of solid clays that commonly occur in, or are related to, petroleum reservoirs. Although uncertainties (up to 10%) exist, the data set reported here is by far the most comprehensive set of e...
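
For reference, here is a minimal sketch of the two-phase Hashin-Shtrikman bounds on bulk modulus and a plain (unweighted) average of the bounds, with purely illustrative moduli; the paper inverts clay properties using a weighted form of this average.

```python
# Two-phase Hashin-Shtrikman bound on composite bulk modulus.  Hosting the
# stiff phase gives the upper bound; hosting the soft phase gives the lower.
def hs_bound(k_h, mu_h, k_i, f_i):
    """HS expression with phase 'h' as host and 'i' as inclusion (fraction f_i)."""
    f_h = 1.0 - f_i
    return k_h + f_i / (1.0 / (k_i - k_h) + f_h / (k_h + 4.0 * mu_h / 3.0))

k1, mu1 = 25.0, 9.0   # stand-in "solid clay" moduli (GPa), illustrative only
k2, mu2 = 5.0, 1.6    # stand-in mixing material (GPa)
f2 = 0.6              # volume fraction of phase 2

k_upper = hs_bound(k1, mu1, k2, f2)
k_lower = hs_bound(k2, mu2, k1, 1.0 - f2)
k_avg = 0.5 * (k_upper + k_lower)  # unweighted average as a simple stand-in
print(k_lower, k_avg, k_upper)
```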

Journal ArticleDOI
TL;DR: In this paper, the authors discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations in particular, showing that apparently conservative Bayesian choices such as representing interval constraints by uniform probabilities may lead to artificially small uncertainties.
Abstract: Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data We also need methods to incorporate data‐independent prior information to eliminate unreasonable models that fit the data Both of these issues involve subtle choices that may significantly influence the results of inverse calculations The specification of prior information is especially controversial How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations In particular we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example) may lead to artificially small uncertainties We also describe tools from statistical decision theory that can be used to characterize the performance of inv

Journal ArticleDOI
TL;DR: In this article, a method was developed to extend the perfectly matched layer to simulate seismic wave propagation in poroelastic media, where a nonphysical material is used at the computational edge of a finite-difference algorithm as an ABC to truncate unbounded media.
Abstract: The perfectly matched layer (PML) was first introduced by Berenger as a material absorbing boundary condition (ABC) for electromagnetic waves. In this paper, a method is developed to extend the perfectly matched layer to simulate seismic wave propagation in poroelastic media. This nonphysical material is used at the computational edge of a finite-difference algorithm as an ABC to truncate unbounded media. The incorporation of PML in Biot's equations is different from other PML applications in that an additional term involving convolution between displacement and a loss coefficient in the PML region is required. Numerical results show that the PML ABC attenuates the outgoing waves effectively.

Journal ArticleDOI
TL;DR: In this paper, a series of field experiments showing the transient electric fields generated by a seismic excitation of the subsurface were conducted, and it was shown that the electric field accompanying the compressional waves is approximately proportional to the grain acceleration.
Abstract: We present a series of field experiments showing the transient electric fields generated by a seismic excitation of the subsurface. After removing the powerline noise by adaptive filtering, the most prominent feature of the seismoelectric recordings is the presence of electric signals very similar to conventional seismic recordings. In one instance, we identified small-amplitude precursory electromagnetic disturbances showing a polarity reversal on either side of the shotpoint. Concentrating on the dominant seismoelectric effect, we theoretically show that the electric field accompanying the compressional waves is approximately proportional to the grain acceleration. We also demonstrate that the magnetic field moving along with shear waves is roughly proportional to the grain velocity. These relationships hold true as long as the displacement currents are much smaller than the conduction currents (diffusive regime), which is normally the case in the low-frequency range used in seismic prospecting. Furthermore, the analytical transfer functions thus obtained indicate that the electric field is mainly sensitive to the salt concentration and dielectric constant of the fluid, whereas the magnetic field principally depends on the shear modulus of the framework of grains and on the fluid's viscosity and dielectric constant. Both transfer functions are essentially independent of the permeability. Our results suggest that the simultaneous recording of seismic, electric, and magnetic wavefields can be useful for characterizing porous layers at two different levels of investigation: near the receivers and at greater depth.

Journal ArticleDOI
TL;DR: A new, wave‐equation based method is presented for eliminating the effect of the free surface from marine seismic data without destroying primary amplitudes and without any knowledge of the subsurface.
Abstract: This paper presents a new, wave‐equation based method for eliminating the effect of the free surface from marine seismic data without destroying primary amplitudes and without any knowledge of the subsurface. Compared with previously published methods which require an estimate of the source wavelet, the present method has the following characteristics: it does not require any information about the marine source array and its signature, it does not rely on removal of the direct wave from the data, and it does not require any explicit deghosting. Moreover, the effect of the source signature is removed from the data in the multiple elimination process by deterministic signature deconvolution, replacing the original source signature radiated from the marine source array with any desired wavelet (within the data frequency‐band) radiated from a monopole point source. The fundamental constraint of the new method is that the vertical derivative of the pressure or the vertical component of the particle velocity is...

Journal ArticleDOI
TL;DR: In this paper, a predrill estimate of pore pressure is obtained from seismic velocities using a velocity-to-pore-pressure transform, provided the seismic velocities are derived using methods having sufficient resolution for well planning purposes.
Abstract: A predrill estimate of pore pressure can be obtained from seismic velocities using a velocity‐to–pore‐pressure transform, but the seismic velocities need to be derived using methods having sufficient resolution for well planning purposes. For a deepwater Gulf of Mexico example, significant differences are found between the velocity field obtained using reflection tomography and that obtained using a conventional method based on the Dix equation. These lead to significant differences in the predicted pore pressure. Parameters in the velocity‐to–pore‐pressure transform are estimated using seismic interval velocities and pressure data from nearby calibration wells. The uncertainty in the pore pressure prediction is analyzed by examining the spread in the predicted pore pressure obtained using parameter combinations which sample the region of parameter space consistent with the available well data. If calibration wells are not available, the ideas proposed in this paper can be used with measurements made whi...
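
As a concrete example of such a transform, here is Eaton's widely used relation with illustrative numbers; the paper calibrates its own transform parameters to nearby wells and does not necessarily use this exact form.

```python
# Eaton's velocity-to-pore-pressure transform: slower-than-normal interval
# velocity implies pore pressure above hydrostatic (overpressure).
def eaton_pore_pressure(s_v, p_hydro, v, v_normal, n=3.0):
    """Pore pressure from overburden stress s_v, hydrostatic pressure p_hydro,
    interval velocity v, and the normal-compaction-trend velocity v_normal."""
    return s_v - (s_v - p_hydro) * (v / v_normal) ** n

# Illustrative numbers (MPa, m/s): ~39.6 MPa, i.e. ~9.6 MPa of overpressure.
print(eaton_pore_pressure(s_v=60.0, p_hydro=30.0, v=2200.0, v_normal=2500.0))
```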

Journal ArticleDOI
TL;DR: Extended Euler deconvolution, as discussed by the authors, combines the Euler homogeneity relation with an additional relation expressing the transformation of homogeneous functions under rotation, giving a more complete source parameter estimation that allows the determination of susceptibility contrast and dip in the cases of contact and thin-sheet sources.
Abstract: The Euler homogeneity relation expresses how a homogeneous function transforms under scaling. When implemented, it helps to determine source location for particular potential field anomalies. In this paper, we introduce an additional relation that expresses the transformation of homogeneous functions under rotation. The combined implementation of the two equations, called here extended Euler deconvolution for 2-D structures, gives a more complete source parameter estimation that allows the determination of susceptibility contrast and dip in the cases of contact and thin-sheet sources. This allows for the structural index to be correctly chosen on the basis of a priori knowledge about susceptibility and dip. The pattern of spray solutions emanating from a single source anomaly can be attributed to interfering sources, which have their greatest effect on the flanks of the anomaly. These sprays follow different paths when using either conventional Euler deconvolution or extended Euler deconvolution. The paths of these spray solutions cross and cluster close to the true source location. This intersection of spray paths is used as a discriminant between poor and well-constrained solutions, allowing poor solutions to be eliminated. Extended Euler deconvolution has been tested successfully on 2-D model and real magnetic profile data over contacts and thin dikes.
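
To make the conventional half of the method concrete, here is a minimal sketch of 2-D Euler deconvolution on a synthetic point-pole field, solving the homogeneity relation by least squares; the extended rotational relation introduced by the paper is omitted.

```python
# Conventional 2-D Euler deconvolution on a synthetic point-pole field
# T = 1/r (structural index N = 1), solving
#   (x - x0) dT/dx + (z - z0) dT/dz = -N (T - B)
# for source position (x0, z0) and base level B by least squares.
import numpy as np

x = np.linspace(0.0, 100.0, 201)       # profile coordinate, observed at z = 0
x0_true, z0_true, N = 50.0, 20.0, 1.0  # source position and structural index
r2 = (x - x0_true) ** 2 + z0_true ** 2
T = 1.0 / np.sqrt(r2)                  # field and its analytic gradients
dTdx = -(x - x0_true) / r2 ** 1.5
dTdz = z0_true / r2 ** 1.5

# At z = 0 the relation rearranges to  x0*dTdx + z0*dTdz + N*B = x*dTdx + N*T.
A = np.column_stack([dTdx, dTdz, N * np.ones_like(x)])
b = x * dTdx + N * T
x0, z0, B = np.linalg.lstsq(A, b, rcond=None)[0]
print(x0, z0, B)                       # recovers ~ (50, 20, 0)
```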

Journal ArticleDOI
TL;DR: In this article, a two-step inversion procedure using local descent methods was proposed to recover the background velocities of a gas-sand deposit from a waveform inversion of the data.
Abstract: Prestack seismic reflection data contain amplitudes, traveltimes, and moveout information; waveform inversion of such data has the potential to estimate attenuation levels, reflector depths and geometry, and background velocities. However, when inverting reflection data, strong nonlinearities can cause reflectors to be incorrectly imaged and can prevent background velocities from being updated. To successfully recover background velocities, previous authors have resorted to nonlinear, global search inversion techniques. We propose a two-step inversion procedure using local descent methods in which we perform alternate inversions for the reflectors and the background velocities. For our reflector inversion we exploit the efficiency of the back-propagation method when inverting for a large parameter set. For our background velocity inversion we use Newton inverse methods. During the background velocity inversions it is crucial to adaptively depth-stretch the model to preserve the vertical traveltimes. This reduces nonlinearity by largely decoupling the effects of the background velocities and reflectors on the data. Nonlinearity is further reduced by choosing to invert for slownesses and by inverting for a sparse parameter set which is partially defined using geological reflector picks. Applying our approach to shallow seismic data from the North Sea collected over a gas-sand deposit, we demonstrate that the proposed method is able to estimate both the geometry and internal velocity of a significant velocity structure not present in the initial model. Over successive iterations, the use of adaptive depth stretching corrects the pull-down of the base of the gas sand. Vertical background velocity gradients are also resolved. For an insignificant extra cost the acoustic attenuation parameter Q is included in the inversion scheme. The final attenuation tomogram contains realistic values of Q for the expected lithologies and for the effect of partial fluid saturation associated with a shallow bright spot. The attenuation image may also indicate the presence of fracturing.

Journal ArticleDOI
TL;DR: In this paper, a cross-equalization data processing flow for time-lapse seismic data is proposed to attenuate acquisition and processing differences by regridding the two data sets to a common grid; applying a space- and time-variant amplitude envelope balance; applying a first pass of matched-filter corrections for global amplitude, bandwidth, phase, and static shifts, followed by a dynamic warp to align mispositioned events.
Abstract: Nonrepeatable noise, caused by differences in vintages of seismic acquisition and processing, can often make comparison and interpretation of time-lapse 3-D seismic data sets for reservoir monitoring misleading or futile. In this Gulf of Mexico case study, the major causes of nonrepeatable noise in the data sets are the result of differences in survey acquisition geometry and binning, temporal and spatial amplitude gain, wavelet bandwidth and phase, differential static time shifts, and relative mispositioning of imaged reflection events. We attenuate these acquisition and processing differences by developing and applying a cross-equalization data processing flow for time-lapse seismic data. The cross-equalization flow consists of regridding the two data sets to a common grid; applying a space- and time-variant amplitude envelope balance; applying a first pass of matched-filter corrections for global amplitude, bandwidth, phase, and static shifts, followed by a dynamic warp to align mispositioned events; and, finally, running a second pass of constrained space-variant matched filter operators. Difference sections obtained by subtracting the two data sets after each step of the cross-equalization processing flow show a progressive reduction of nonrepeatable noise and a simultaneous improvement in time-lapse reservoir signal.
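
A hedged sketch of one step in such a flow: designing a least-squares matching (Wiener) filter that shapes a monitor trace toward the base trace. The traces and the distortion below are synthetic stand-ins, not the case-study data.

```python
# Least-squares (Wiener) matching filter: solve the Toeplitz normal
# equations so that filtering the monitor trace approximates the base trace.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(4)
base = rng.normal(size=500)
monitor = 0.8 * lfilter([1.0, 0.3], [1.0], base)  # gain + bandwidth change

def matching_filter(monitor, base, n_lags=21):
    """Causal least-squares filter f such that (f * monitor) ~= base."""
    mid = len(monitor) - 1
    auto = np.correlate(monitor, monitor, mode="full")[mid:mid + n_lags]
    cross = np.correlate(base, monitor, mode="full")[mid:mid + n_lags]
    return solve_toeplitz(auto, cross)  # symmetric Toeplitz system

f = matching_filter(monitor, base)
matched = lfilter(f, [1.0], monitor)
# Residual energy before vs. after matching (after should be far smaller).
print(np.sum((base - monitor) ** 2), np.sum((base - matched) ** 2))
```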

Journal ArticleDOI
TL;DR: The standard way of highlighting objects is through seismic attribute analysis, where the selected attribute is not sensitive to a particular geologic object but highlights any seismic position with similar attribute response.
Abstract: Modern visualization and image processing techniques are revolutionizing the art of seismic interpretation. Emerging technologies allow us to interpret more data with higher accuracy in less time. The trend is shifting from horizon-based toward volume-based. New insights are gained by studying objects of various geologic origins and their spatial interrelationships. The standard way of highlighting objects is through seismic attribute analysis. Various attributes are tested in a trial-and-error mode, and one is selected as the optimal representation of the desired object. The selected attribute, which may be a mathematical composite of several attributes, is not sensitive to a particular geologic object but highlights any seismic position with similar attribute response.

Journal ArticleDOI
TL;DR: In this paper, a 3D finite-element solution is used to solve controlled-source electromagnetic (EM) induction problems in heterogeneous electrically conducting media, based on a weak formulation of the governing Maxwell equations using Coulomb-gauged EM potentials.
Abstract: A 3-D finite-element solution has been used to solve controlled-source electromagnetic (EM) induction problems in heterogeneous electrically conducting media. The solution is based on a weak formulation of the governing Maxwell equations using Coulomb-gauged EM potentials. The resulting sparse system of linear algebraic equations is solved efficiently using the quasi-minimal residual method with simple Jacobi scaling as a preconditioner. The main aspects of this work include the implementation of a 3-D cylindrical mesh generator with high-quality local mesh refinement and a formulation in terms of secondary EM potentials that eliminates singularities introduced by the source. These new aspects provide quantitative induction-log interpretation for petroleum exploration applications. Examples are given for 1-D, 2-D, and 3-D problems, and favorable comparisons are presented against other, previously published multidimensional EM induction codes. The method is general and can also be adapted for controlled-source EM modeling in mining, groundwater, and environmental geophysics in addition to fundamental studies of EM induction in heterogeneous media.
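
A minimal sketch of the solver strategy, assuming a generic sparse stand-in matrix rather than the paper's finite-element system: quasi-minimal residual iteration with simple Jacobi (diagonal) scaling as the preconditioner.

```python
# QMR with a Jacobi (diagonal) preconditioner on a sparse stand-in system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, qmr

n = 1000
diag = 2.0 + np.linspace(0.1, 1.0, n)  # stand-in "conductivity" variation
A = sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi scaling: apply the inverse of the diagonal of A.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v,
                   rmatvec=lambda v: inv_diag * v)

x, info = qmr(A, b, M1=M)  # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```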

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the prestack migration deconvolution (MD) filter, which is a layered-medium approximation to the inverse Hessian matrix that is applied locally to the migrated section.
Abstract: Prestack migration in the time and depth domains is the premier tool for seismic imaging of complex structures. Unfortunately, an undersampled acquisition geometry, a limited recording aperture, and strong velocity contrasts lead to uneven illumination of the subsurface. This blurs the migrated image, an effect sometimes referred to as acquisition footprint noise. To remedy this blurring partially, we introduce prestack migration deconvolution (MD). The MD filter is a layered-medium approximation to the inverse Hessian matrix that is applied locally to the migrated section. Both synthetic- and field-data results show noticeable improvements in reducing migration artifacts and increasing lateral spatial resolution by more than 10%. The computational expense for constructing the MD filter is related to the MD operator length. Its cost is about the same as that for migration, but opportunities exist for significantly reducing this cost. Results suggest that MD should be applied to migrated sections to optimize image quality.

Journal ArticleDOI
TL;DR: In this paper, a tutorial attempts to clarify two points of confusion among geophysicists; notably, it is now well known that gravity anomalies after the free-air correction are still located at their original positions.
Abstract: Geophysics uses gravity to learn about the density variations of the Earth’s interior, whereas classical geodesy uses gravity to define the geoid. This difference in purpose has led to some confusion among geophysicists, and this tutorial attempts to clarify two points of the confusion. First, it is well known now that gravity anomalies after the “free‐air” correction are still located at their original positions. However, the “free‐air” reduction was thought historically to relocate gravity from its observation position to the geoid (mean sea level). Such an understanding is a geodetic fiction, invalid and unacceptable in geophysics. Second, in gravity corrections and gravity anomalies, the elevation has been used routinely. The main reason is that, before the emergence and widespread use of the Global Positioning System (GPS), height above the geoid was the only height measurement we could make accurately (i.e., by leveling). The GPS delivers a measurement of height above the ellipsoid. In principle, in...
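
For concreteness, the standard free-air gradient of 0.3086 mGal/m converts a station's height into the corresponding gravity adjustment; a one-line sketch (the constant is standard, the function name is illustrative):

```python
# Free-air correction: adjusts observed gravity for the elevation of the
# station; it does not relocate the anomaly to the geoid.
def free_air_correction(height_m):
    """Free-air correction in mGal for a station height in meters."""
    return 0.3086 * height_m

print(free_air_correction(100.0))  # 30.86 mGal for a 100-m-high station
```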

Journal ArticleDOI
TL;DR: In this article, permanent deployment of geophones or other acoustic sensors to complement standard engineering gauges is being promoted as a way to map reservoir dynamics, although the deployment of permanent seismic instrumentation is also potentially an ideal route to monitor passive seismicity.
Abstract: With the current industry trend toward instrumented oil fields and smart well completions, the permanent deployment of geophones or other acoustic sensors to complement standard engineering gauges is being promoted as a way to map reservoir dynamics. The biggest push is from the time-lapse seismic practitioners, although the deployment of permanent seismic instrumentation is also potentially an ideal route to monitor passive seismicity.

Journal ArticleDOI
TL;DR: The extended Euler deconvolution algorithm, as discussed in this paper, is a generalization and unification of 2-D Euler deconvolution and Werner deconvolution, and its 3-D extension can be realized using generalized Hilbert transforms.
Abstract: The extended Euler deconvolution algorithm is shown to be a generalization and unification of 2-D Euler deconvolution and Werner deconvolution. After recasting the extended Euler algorithm in a way that suggests a natural generalization to three dimensions, we show that the 3-D extension can be realized using generalized Hilbert transforms. The resulting algorithm is both a generalization of extended Euler deconvolution to three dimensions and a 3-D extension of Werner deconvolution. At a practical level, the new algorithm helps stabilize the Euler algorithm by providing at each point three equations rather than one. We illustrate the algorithm by explicit calculation for the potential of a vertical magnetic dipole.