
Showing papers in "Geophysics in 2003"


Journal ArticleDOI
TL;DR: In this article, a new linearized AVO inversion technique is developed in a Bayesian framework, which is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation.
Abstract: A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P‐wave velocity, S‐wave velocity, and density. Distributions for other elastic parameters can also be assessed—for example, acoustic impedance, shear impedance, and P‐wave to S‐wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance; hence, exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, ...

756 citations
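
The closed-form Gaussian posterior this abstract refers to follows the standard linear-Gaussian update. Below is a minimal sketch, assuming a generic linear forward operator G (in the paper it would combine the wavelet convolution with the linearized weak-contrast reflectivity coefficients), a Gaussian prior, and Gaussian noise; all variable names are illustrative, not the paper's notation.

```python
import numpy as np

def gaussian_posterior(G, d, mu0, Sigma0, Sigma_e):
    """Posterior mean/covariance for d = G m + e with Gaussian prior N(mu0, Sigma0)
    and Gaussian noise N(0, Sigma_e).  This is the generic linear-Gaussian update;
    in the paper's setting G would stack the wavelet convolution with the
    linearized (weak-contrast) reflectivity coefficients."""
    S = G @ Sigma0 @ G.T + Sigma_e           # data-space covariance
    K = Sigma0 @ G.T @ np.linalg.inv(S)      # gain matrix
    mu_post = mu0 + K @ (d - G @ mu0)        # posterior expectation
    Sigma_post = Sigma0 - K @ G @ Sigma0     # posterior covariance
    return mu_post, Sigma_post

# toy usage: 3 model parameters, 5 data samples
rng = np.random.default_rng(0)
G = rng.normal(size=(5, 3))
m_true = np.array([0.1, -0.05, 0.02])
d = G @ m_true + 0.01 * rng.normal(size=5)
mu, Sigma = gaussian_posterior(G, d, np.zeros(3), np.eye(3), 1e-4 * np.eye(5))
print(mu, np.sqrt(np.diag(Sigma)))  # expectations and 1-sigma prediction intervals
```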


PatentDOI
TL;DR: In this paper, a method of seismic data processing is described in which a particular wavelet is selected from a plurality of wavelets as being most characteristic of a received seismic signal.
Abstract: A method of seismic data processing is described in which a particular wavelet is selected from a plurality of wavelets as being most characteristic of a received seismic signal. A subtracted signal can be determined by subtracting a weighted signal of the particular wavelet from the received signal. From the subtracted signal, an additional particular wavelet can be chosen. The process of subtracting a signal and determining an additional particular one of the plurality of wavelets can be repeated until a criterion is met. The method can be repeated at several depths. The resultant spectral analysis can be used to determine, for example, an absorption coefficient.

605 citations
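
The pick-weight-subtract loop the abstract describes is, in essence, a matching-pursuit decomposition. A minimal sketch under assumed conventions (each candidate wavelet is a row of a dictionary array with the same length as the signal; the stopping criterion is residual energy), not the patent's actual procedure:

```python
import numpy as np

def matching_pursuit(signal, wavelets, max_iters=50, tol=1e-3):
    """Greedy decomposition: repeatedly pick the wavelet most correlated with the
    residual, subtract its weighted contribution, and stop once the residual
    energy falls below `tol` times the input energy (or after `max_iters`)."""
    residual = signal.astype(float).copy()
    picks = []
    e0 = float(np.dot(signal, signal))
    for _ in range(max_iters):
        corr = wavelets @ residual                        # correlation with each wavelet
        k = int(np.argmax(np.abs(corr)))                  # most characteristic wavelet
        w = corr[k] / np.dot(wavelets[k], wavelets[k])    # least-squares weight
        residual -= w * wavelets[k]                       # the "subtracted signal"
        picks.append((k, w))
        if np.dot(residual, residual) < tol * e0:
            break
    return picks, residual
```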


Journal ArticleDOI
TL;DR: This work presents a method for computing angle-domain common-image gathers from seismic images obtained by depth migration using wavefield continuation, which amounts to a radial-trace transform in the Fourier domain and is equivalent to a slant stack in the space domain.
Abstract: Migration in the angle domain creates seismic images for different reflection angles. We present a method for computing angle-domain common-image gathers from seismic images obtained by depth migration using wavefield continuation. Our method operates on prestack migrated images and produces the output as a function of the reflection angle, not as a function of offset ray parameter as in other alternative approaches. The method amounts to a radial-trace transform in the Fourier domain and is equivalent to a slant stack in the space domain. We obtain the angle gathers using a stretch technique that enables us to impose smoothness through regularization. Several examples show that our method is accurate, fast, robust, and easy to implement. The main anticipated applications of our method are in the areas of migration-velocity analysis and amplitude-versus-angle analysis.

534 citations
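
A rough sketch of the Fourier-domain radial-trace idea described above: the 2-D transform of the prestack image over depth and subsurface half-offset is read along radial lines whose slope is set by the reflection angle (commonly written tan θ = −k_h/k_z, though sign conventions vary), then inverse transformed over the depth wavenumber. The stretch and regularization details of the paper are omitted; the nearest-sample resampling and all names below are my simplifications.

```python
import numpy as np

def angle_gathers_radial(image_zh, dz, dh, thetas):
    """Hedged sketch of a Fourier-domain radial-trace transform.

    image_zh: prestack-migrated image I(z, h), depth by subsurface half-offset.
    Returns A(z, theta) obtained by sampling the 2-D FFT along lines
    k_h = -k_z * tan(theta) and inverse transforming over k_z."""
    nz, nh = image_zh.shape
    F = np.fft.fftshift(np.fft.fft2(image_zh))
    kz = np.fft.fftshift(np.fft.fftfreq(nz, d=dz))
    kh = np.fft.fftshift(np.fft.fftfreq(nh, d=dh))
    gathers = np.zeros((nz, len(thetas)))
    for it, th in enumerate(thetas):
        kh_line = -kz * np.tan(th)
        ih = np.clip(np.searchsorted(kh, kh_line), 0, nh - 1)  # approximate nearest sample
        radial = F[np.arange(nz), ih]
        gathers[:, it] = np.fft.ifft(np.fft.ifftshift(radial)).real
    return gathers
```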


Journal ArticleDOI
TL;DR: In this article, a generalized S-transform is presented in which two prescribed functions of frequency control the scale and shape of the analyzing window; it is applied to determining the P-wave arrival time in a noisy seismogram.
Abstract: The S-transform is an invertible time-frequency spectral localization technique which combines elements of wavelet transforms and short-time Fourier transforms. In previous usage, the frequency dependence of the analyzing window of the S-transform has been through horizontal and vertical dilations of a basic functional form, usually a Gaussian. In this paper, we present a generalized S-transform in which two prescribed functions of frequency control the scale and the shape of the analyzing window, and apply it to determining P-wave arrival time in a noisy seismogram. The S-transform is also used as a time-frequency filter; this helps in determining the sign of the P arrival.

452 citations
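
For reference, the standard S-transform can be computed frequency-by-frequency with FFTs; the generalization in the paper replaces the fixed Gaussian scaling with prescribed functions of frequency. A minimal sketch of the standard version with one illustrative scale parameter (my notation, not the paper's):

```python
import numpy as np

def s_transform(h, gamma=1.0):
    """Discrete S-transform of a real signal h (Stockwell-style FFT algorithm).

    The Gaussian window width scales as gamma/|f|.  In the generalized S-transform
    of the paper, prescribed functions of frequency control both the scale and the
    shape of the window; `gamma` here merely stands in for that freedom."""
    n = len(h)
    H = np.fft.fft(h)
    alphas = np.fft.fftfreq(n, d=1.0) * n      # integer frequency indices
    S = np.zeros((n // 2 + 1, n), dtype=complex)
    S[0, :] = np.mean(h)                       # zero-frequency row: signal mean
    for m in range(1, n // 2 + 1):             # positive frequencies only
        gauss = np.exp(-2.0 * (np.pi * alphas * gamma / m) ** 2)  # localizing window
        S[m, :] = np.fft.ifft(np.roll(H, -m) * gauss)
    return S                                   # rows: frequency, columns: time
```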


Journal ArticleDOI
TL;DR: In this paper, a high-resolution microseismic image of a hydraulic fracture stimulation in the Carthage Cotton Valley gas field of east Texas was obtained, showing vertical containment within individual, targeted sands, suggesting little or no hydraulic communication between the discrete perforation intervals simultaneously treated within an 80-m section.
Abstract: We produced a high‐resolution microseismic image of a hydraulic fracture stimulation in the Carthage Cotton Valley gas field of east Texas. We improved the precision of microseismic event locations four‐fold over initial locations by manually repicking the traveltimes in a spatial sequence, allowing us to visually correlate waveforms of adjacent sources. The new locations show vertical containment within individual, targeted sands, suggesting little or no hydraulic communication between the discrete perforation intervals simultaneously treated within an 80‐m section. Treatment (i.e., fracture‐zone) lengths inferred from event locations are about 200 m greater at the shallow perforation intervals than at the deeper intervals. The highest quality locations indicate fracture‐zone widths as narrow as 6 m. Similarity of adjacent‐source waveforms, along with systematic changes of phase amplitude ratios and polarities, indicate fairly uniform source mechanisms (fracture plane orientation and sense of slip) over ...

427 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamic relationship among diagenesis, porosity, pore type, and sonic velocity in carbonate sediments is analyzed, in which compressional-wave velocity ranges from 1700 to 6600 m/s and shear-wave velocity from 600 to 3500 m/s.
Abstract: Carbonate sediments are prone to rapid and pervasive diagenetic alterations that change the mineralogy and pore structure within carbonate rocks. In particular, cementation and dissolution processes continuously modify the pore structure to create or destroy porosity. In extreme cases these modifications can completely change the mineralogy from aragonite/calcite to dolomite, or reverse the pore distribution, whereby original grains are dissolved to produce pores while the original pore space is filled with cement to form the rock (Figure 1). All these modifications alter the elastic properties of the rock and, therefore, the sonic velocity. The result is a dynamic relationship among diagenesis, porosity, pore type, and sonic velocity, and consequently a wide range of sonic velocity in carbonates, in which compressional-wave velocity (VP) ranges from 1700 to 6600 m/s and shear-wave velocity (VS) from 600 to 3500 m/s.

401 citations


Journal ArticleDOI
TL;DR: To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators and can be understood in terms of the singular vectors of the weighted transform.
Abstract: The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information, equivalent somehow to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time available, in general, per common midpoint. To develop methods that are robust, easy to use and flexible to adapt to different problems we have to pay attention to a variety of algorithms, operator design, and estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations for several varieties of RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a general cross validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand differences between, and merits of, these algorithms.

351 citations
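
The abstract's key implementation point, folding model weights into the operator so that early stopping of conjugate gradients still regularizes the solution, can be sketched generically. The Radon operator itself is not implemented here; `L_fwd`/`L_adj` stand for caller-supplied forward/adjoint callables (e.g. an FFT-based time- or frequency-domain RT), and the weight vector `w` is an assumed interface, not the paper's code.

```python
import numpy as np

def cgls(fwd, adj, d, n_model, n_iter):
    """Plain CGLS for min ||fwd(m) - d||^2, regularized by early stopping."""
    m = np.zeros(n_model)
    r = d - fwd(m)
    g = adj(r)
    p = g.copy()
    gg = g @ g
    for _ in range(n_iter):
        q = fwd(p)
        alpha = gg / (q @ q + 1e-30)
        m += alpha * p
        r -= alpha * q
        g = adj(r)
        gg_new = g @ g
        p = g + (gg_new / (gg + 1e-30)) * p
        gg = gg_new
    return m

def weighted_solve(L_fwd, L_adj, d, w, n_iter=10):
    """Fold model weights w (e.g. from a previous low-resolution pass) into the
    operator: solve d ~ L(w * u) with CGLS, then return m = w * u.  Early
    stopping keeps the subspace-regularization property the abstract mentions."""
    fwd = lambda u: L_fwd(w * u)
    adj = lambda r: w * L_adj(r)
    return w * cgls(fwd, adj, d, len(w), n_iter)
```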


Journal ArticleDOI
TL;DR: In this paper, the dependence of velocity on pore pressure, along with its dependence on confining pressure, is phenomenologically described by a simple empirical relationship (Zimmerman et al., 1986; Eberhart-Phillips et al., 1989; Freund, 1992; Jones, 1992; Prasad and Manghnani, 1997; Khaksar et al., 1999; Carcione and Tinivella, 2001; Kirstetter and MacBeth, 2001).
Abstract: Understanding the stress dependencies of seismic velocities is important for interpreting a variety of seismic data, ranging from amplitude versus offset (AVO) and velocity analysis to overpressure prediction and 4D seismic monitoring of reservoirs. Sometimes, rather complex forms of these dependencies based on specific models of porous space geometry are used. For example, spherical contact models (Duffy and Mindlin, 1957; Merkel et al., 2001) and crack contact models (Gangi and Carlson, 1996; Carcione and Tinivella, 2001) have been used in different studies. However, usually the dependence of velocity on pore pressure, along with its dependence on confining pressure, is phenomenologically described by the following simple relationship (Zimmerman et al., 1986; Eberhart-Phillips et al., 1989; Freund, 1992; Jones, 1992; Prasad and Manghnani, 1997; Khaksar et al., 1999; Carcione and Tinivella, 2001; Kirstetter and MacBeth, 2001)

271 citations
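
The abstract is cut off before the relationship itself. For orientation only, the empirical form most often associated with the studies cited (e.g. Eberhart-Phillips et al., 1989) is reproduced below as an assumption about what is meant, not as a quotation from the paper:

```latex
% assumed common form; A, K, B, D are rock-specific constants fitted to the data
v(P_\mathrm{eff}) = A + K\,P_\mathrm{eff} - B\,e^{-D\,P_\mathrm{eff}},
\qquad P_\mathrm{eff} = P_\mathrm{conf} - n\,P_\mathrm{pore}, \quad n \approx 1 .
```

In forms of this kind, velocity rises steeply at low effective pressure as compliant cracks close and approaches a linear trend at high effective pressure.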



Journal ArticleDOI
TL;DR: In this paper, a general formula is derived for fluid-factor discrimination given that both the P and S impedances are available, which can be expressed with either Lame constants and density, or the bulk and shear moduli and density.
Abstract: This analysis draws together basic rock physics, amplitude variations with offset (AVO), and seismic amplitude inversion to discuss how fluid‐factor discrimination can be performed using prestack seismic data. From both Biot and Gassmann theories for porous, fluid‐saturated rocks, a general formula is first derived for fluid‐factor discrimination given that both the P and S impedances are available. In essence, the two impedances are transformed so that they better differentiate between the fluid and rock matrix of the porous medium. This formula provides a more sensitive discriminator of the pore‐fluid saturant than the acoustic impedance and is especially applicable in hard‐rock environments. The formulation can be expressed with either the Lame constants and density, or the bulk and shear moduli and density. Numerical and well‐log examples illustrate the applicability of this approach. AVO inversion results are then incorporated to show how this method can be implemented using prestack seismic data. Fi...

260 citations
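
The paper's exact expression is not reproduced in the abstract, but the family of impedance-based discriminants it belongs to can be illustrated. The generic form below (with c = 2 recovering Goodway's λρ attribute) is given as background, not as the formula derived in the paper:

```python
def fluid_term(Ip, Is, c=2.0):
    """Generic impedance-based fluid discriminant, Ip**2 - c*Is**2.

    c = 2 gives Goodway's lambda*rho (Ip**2 - 2*Is**2, fluid sensitive), with
    mu*rho = Is**2 as the matrix-sensitive companion attribute; other values of
    c, e.g. tied to the dry-rock (Vp/Vs)**2, sharpen the fluid sensitivity.
    Background on the family of discriminants only, not a quotation of the
    paper's derivation."""
    return Ip**2 - c * Is**2

# illustrative values of P- and S-impedance in (m/s)*(g/cm^3)
print(fluid_term(Ip=7500.0, Is=4200.0))
print(fluid_term(Ip=6800.0, Is=4300.0))
```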


Journal ArticleDOI
TL;DR: Fluid substitution is an important part of seismic attribute work because it provides the interpreter with a tool for modeling and quantifying the various fluid scenarios which might give rise to an observed amplitude variation with offset (AVO) or 4D response; the most commonly used technique for doing this involves the application of Gassmann's equations, as discussed in this paper.
Abstract: Fluid substitution is an important part of seismic attribute work, because it provides the interpreter with a tool for modeling and quantifying the various fluid scenarios which might give rise to an observed amplitude variation with offset (AVO) or 4D response. The most commonly used technique for doing this involves the application of Gassmann's equations. Modeling the changes from one fluid type to another requires that the effects of the starting fluid first be removed prior to modeling the new fluid. In practice, the rock is drained of its initial pore fluid, and the moduli (bulk and shear) and bulk density of the porous frame are calculated. Once the porous frame properties are properly determined, the rock is saturated with the new pore fluid, and the new effective bulk modulus and density are calculated. A direct result of Gassmann's equations is that the shear modulus for an isotropic material is independent of pore fluid, and therefore remains constant during the fluid substitution process. In the...
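
A minimal sketch of the two-step workflow the abstract describes, using the standard Gassmann relation. Variable names, units handling, and the algebraic inversion for the drained-frame modulus are my choices, not the paper's code:

```python
def gassmann_saturate(K_dry, K_min, K_fl, phi):
    """Gassmann's equation: bulk modulus of rock saturated with a fluid of
    modulus K_fl, given the drained-frame modulus K_dry, mineral modulus K_min,
    and porosity phi.  The shear modulus is unchanged by the fluid."""
    num = (1.0 - K_dry / K_min) ** 2
    den = phi / K_fl + (1.0 - phi) / K_min - K_dry / K_min**2
    return K_dry + num / den

def substitute_fluid(K_sat1, K_min, K_fl1, K_fl2, rho1, rho_fl1, rho_fl2, phi):
    """Two-step fluid substitution: strip fluid 1 to recover the drained frame,
    then saturate with fluid 2.  Density is updated by exchanging pore-fluid mass."""
    # invert Gassmann for the drained-frame bulk modulus
    a = K_sat1 * (phi * K_min / K_fl1 + 1.0 - phi) - K_min
    b = phi * K_min / K_fl1 + K_sat1 / K_min - 1.0 - phi
    K_dry = a / b
    K_sat2 = gassmann_saturate(K_dry, K_min, K_fl2, phi)
    rho2 = rho1 - phi * rho_fl1 + phi * rho_fl2
    return K_sat2, rho2

# toy usage (moduli in GPa, densities in g/cm^3): brine sand -> gas sand
print(substitute_fluid(K_sat1=15.1, K_min=37.0, K_fl1=2.2, K_fl2=0.04,
                       rho1=2.23, rho_fl1=1.05, rho_fl2=0.1, phi=0.2))
```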

Journal ArticleDOI
TL;DR: In the calculation of Bouguer and isostatic gravity anomalies, which are widely used for geological studies, the gravitational effect of the earth mass between an observation site and a datum is calculated assuming an infinite, flat slab of a specified density that approximates the mass as mentioned in this paper.
Abstract: One of the most widely recognized parameters in solid-earth geophysics is the assumed density of the surface rocks of the continental crust, 2.67 g/cm³, or 2670 kg/m³ in today's preferred SI units. In the calculation of Bouguer and isostatic gravity anomalies, which are widely used for geological studies, the gravitational effect of the earth mass between an observation site and a datum is calculated assuming an infinite, flat slab of a specified density that approximates the mass. The datum for most surveys is sea level or, in the case of surveys of limited areas, it may be the elevation of the lowest station in the survey. Although the choice of datum is arbitrary, commonly sea level is used in an attempt to standardize anomalies from different surveys. Similarly, a density of 2.67 g/cm³ is generally assumed for the density of the included mass in regional gravity surveys.
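
The infinite-slab reduction itself is a one-liner; a small sketch, with the conventional 2.67 g/cm³ (2670 kg/m³) reduction density discussed in the abstract used as the default:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(height_m, density_kg_m3=2670.0):
    """Attraction of an infinite flat slab between the station and the datum:
    g = 2*pi*G*rho*h, returned in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2.0 * math.pi * G * density_kg_m3 * height_m * 1e5

print(bouguer_slab_mgal(1.0))     # ~0.112 mGal per metre at 2.67 g/cm^3
print(bouguer_slab_mgal(100.0))   # ~11.2 mGal for a station 100 m above datum
```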

Journal ArticleDOI
TL;DR: In this article, an acoustic migration/inversion algorithm that uses extended double square root wave-equation migration and modeling operators to minimize a constrained least square data misfit function (least-squares migration) is presented.
Abstract: We present an acoustic migration/inversion algorithm that uses extended double‐square‐root wave‐equation migration and modeling operators to minimize a constrained least‐squares data misfit function (least‐squares migration). We employ an imaging principle that allows for the extraction of ray‐parameter‐domain common image gathers (CIGs) from the propagated seismic wavefield. The CIGs exhibit amplitude variations as a function of half‐offset ray parameter (AVP) closely related to the amplitude variation with reflection angle (AVA). Our least‐squares wave‐equation migration/inversion is constrained by a smoothing regularization along the ray parameter. This approach is based on the idea that rapid amplitude changes or discontinuities along the ray parameter axis result from noise, incomplete wavefield sampling, and numerical operator artifacts. These discontinuities should therefore be penalized in the inversion.The performance of the proposed algorithm is examined with two synthetic examples. In the first...

Journal ArticleDOI
TL;DR: The use of automatic seismic facies classification techniques has been steadily increasing within E&P interpretation workflows over the past 10 years, as discussed by the authors. It is not yet considered a standard procedure, but with knowledge of the advantages (and limitations) of the different seismic classification methods, its role in the interpretation process as a successful hydrocarbon prediction tool is anticipated to grow.
Abstract: The use of automatic seismic facies classification techniques has been steadily increasing within E&P interpretation workflows over the past 10 years. It is not yet considered a standard procedure but, with the knowledge of the advantages (and limitations) of the different seismic classification methods, its role in the interpretation process as a successful hydrocarbon prediction tool is anticipated to grow. This paper reviews and compares the unsupervised classification methods presently used in seismic facies analysis: K-means clustering, principal component analysis (PCA), projection pursuit, and neural networks (vector quantization and Kohonen self-organizing maps). The term “unsupervised” covers all classification techniques relying only on input data and not biased by the desired output. These methods are described, compared, and illustrated by case studies taken from deep offshore Louisiana, west and south Texas, onshore California, and offshore Indonesia. Seismic data volumes are huge and consist of highly redundant data. It is now established that their analysis can be greatly optimized by applying efficient data reduction algorithms that preserve essential features of the seismic character. The objective of the facies classification process is to describe enough variability of the seismic data to reveal details of the underlying geologic features. The classification process should do this while preserving a synthesis for the seismic signal changes. In addition to comparing the classification methods, this paper presents relevant information pertaining to data input, data analysis, and data output. Seismic data, from the statistical point of view, have characteristics like continuity, redundancy, and noise that affect the behavior of the seismic classification techniques. At first glance, these classification techniques may appear similar. However, their ability to efficiently describe changes in seismic character and their interpretability can vary significantly. These different techniques are compared in this paper for their suitability to describe seismic data in a meaningful manner directly …
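
As a concrete illustration of the unsupervised workflow the paper reviews, here is a minimal data-reduction-plus-clustering sketch using scikit-learn (PCA followed by K-means). The input layout and parameter choices are assumptions made for the example, not recommendations from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_facies(trace_windows, n_components=5, n_classes=8):
    """Unsupervised facies classification sketch: reduce redundant waveform
    samples with PCA, then cluster the reduced vectors with K-means.

    trace_windows: (n_traces, n_samples) array of waveform segments extracted
    around a mapped horizon (an assumed input layout)."""
    reduced = PCA(n_components=n_components).fit_transform(trace_windows)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(reduced)
    return labels  # one facies class per trace, ready to map back to (x, y)

# toy usage on random "waveforms"
rng = np.random.default_rng(1)
print(np.bincount(classify_facies(rng.normal(size=(1000, 64)))))
```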

Journal ArticleDOI
TL;DR: In this article, the authors proposed to minimize the Huber function with a quasi-Newton method that has the potential of being faster and more robust than conjugate-gradient methods when solving nonlinear problems.
Abstract: The “Huber function” (or “Huber norm” ) is one of several robust error measures which interpolates between smooth (l2) treatment of small residuals and robust (l1) treatment of large residuals. Since the Huber function is differentiable, it may be minimized reliably with a standard gradient‐based optimizer. We propose to minimize the Huber function with a quasi‐Newton method that has the potential of being faster and more robust than conjugate‐gradient methods when solving nonlinear problems. Tests with a linear inverse problem for velocity analysis with both synthetic and field data suggest that the Huber function gives far more robust model estimates than does a least‐squares fit with or without damping.
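
A minimal sketch of the Huber function and its gradient in the usual form with threshold ε; handing these to a quasi-Newton optimizer (e.g. SciPy's L-BFGS-B) reproduces the kind of robust fit described, though the paper's parameterization may differ:

```python
import numpy as np

def huber(r, eps):
    """Huber function: quadratic (l2-like) for |r| <= eps, linear (l1-like)
    beyond, and differentiable everywhere."""
    a = np.abs(r)
    return np.where(a <= eps, 0.5 * r**2, eps * a - 0.5 * eps**2)

def huber_grad(r, eps):
    """Derivative of the Huber function with respect to the residual."""
    return np.clip(r, -eps, eps)

def misfit_and_grad(m, G, d, eps):
    """Objective and gradient for a linear problem d ~ G m, ready to hand to a
    quasi-Newton optimizer, e.g. scipy.optimize.minimize(..., method='L-BFGS-B',
    jac=True)."""
    r = G @ m - d
    return huber(r, eps).sum(), G.T @ huber_grad(r, eps)
```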

Journal ArticleDOI
TL;DR: In this paper, the authors present an algorithm that simultaneously inverts susceptibility-affected data for 1D conductivity and susceptibility models, enabling reliable conductivity models to be constructed and can give useful information about the distribution of susceptibility in the earth.
Abstract: Magnetic susceptibility affects electromagnetic (EM) loop–loop observations in ways that cannot be replicated by conductive, nonsusceptible earth models. The most distinctive effects are negative in‐phase values at low frequencies. Inverting data contaminated by susceptibility effects for conductivity alone can give misleading models: the observations strongly influenced by susceptibility will be underfit, and those less strongly influenced will be overfit to compensate, leading to artifacts in the model. Simultaneous inversion for both conductivity and susceptibility enables reliable conductivity models to be constructed and can give useful information about the distribution of susceptibility in the earth. Such information complements that obtained from the inversion of static magnetic data because EM measurements are insensitive to remanent magnetization.We present an algorithm that simultaneously inverts susceptibility‐affected data for 1D conductivity and susceptibility models. The solution is obtaine...

Journal ArticleDOI
TL;DR: In this article, a finite-difference scheme for the electromagnetic field in 3D anisotropic media for electromagnetic logging was proposed, which has the following features: coercivity (i.e., the complete discrete analogy of all continuous equations in every grid cell, even for nondiagonal conductivity tensors), a special conductivity averaging, and a spectrally optimal grid refinement minimizing the error at the receiver locations and optimizing the approximation of the boundary conditions at infinity.
Abstract: We consider a problem of computing the electromagnetic field in 3D anisotropic media for electromagnetic logging. The proposed finite-difference scheme for Maxwell equations has the following new features based on some recent and not so recent developments in numerical analysis: coercivity (i.e., the complete discrete analogy of all continuous equations in every grid cell, even for nondiagonal conductivity tensors), a special conductivity averaging that does not require the grid to be small compared to layering or fractures, and a spectrally optimal grid refinement minimizing the error at the receiver locations and optimizing the approximation of the boundary conditions at infinity. All of these features significantly reduce the grid size and accelerate the computation of electromagnetic logs in 3D geometries without sacrificing accuracy.

Journal ArticleDOI
TL;DR: Active constraint balancing (ACB) as mentioned in this paper tries to balance the constraints of the least square inversion according to sensitivity for a given problem so that it enhances the resolution as well as the stability of the inversion process.
Abstract: Most geophysical inverse problems are solved using least‐squares inversion schemes with damping or smoothness constraints to improve stability and convergence rate. Since the Lagrangian multiplier controls resolution and stability of the inverse problem, we always want to use the optimum multiplier, which is not easy to get and is usually obtained by experience or a time‐consuming optimization process.We present a new regularization approach, in which the Lagrangian multiplier is set as a spatial variable at each parameterized block and automatically determined via the parameter resolution matrix and spread function analysis. For highly resolvable parameters, a small value of the Lagrangian multiplier is assigned, and vice versa. This approach, named “active constraint balancing” (ACB), tries to balance the constraints of the least‐squares inversion according to sensitivity for a given problem so that it enhances the resolution as well as the stability of the inversion process. We demonstrate the performa...

Journal ArticleDOI
Necati Gülünay
TL;DR: In this article, a data adaptive interpolation method is designed and applied in the Fourier transform domain (f‐k or f‐kx‐ky) for spatially aliased data.
Abstract: A data adaptive interpolation method is designed and applied in the Fourier transform domain (f‐k or f‐kx‐ky) for spatially aliased data. The method makes use of fast Fourier transforms and their cyclic properties, thereby offering a significant cost advantage over other techniques that interpolate aliased data. The algorithm designs and applies interpolation operators in the f‐k (or f‐kx‐ky) domain to fill zero traces inserted in the data in the t‐x (or t‐x‐y) domain at locations where interpolated traces are needed. The interpolation operator is designed by manipulating the lower frequency components of the stretched transforms of the original data. This operator is derived assuming that it is the same operator that fills periodically zeroed traces of the original data but at the lower frequencies, and corresponds to the f‐k (or f‐kx‐ky) domain version of the well‐known f‐x (or f‐x‐y) domain trace interpolators. The method is applicable to 2D and 3D data recorded sparsely in a horizontal plane. The most comm...

Journal ArticleDOI
TL;DR: In this paper, converted seismic waves (specifically, downgoing P-waves that convert on reflection to upcoming S-waves) are increasingly being used to explore for subsurface targets.
Abstract: Converted seismic waves (specifically, downgoing P‐waves that convert on reflection to upcoming S‐waves) are increasingly being used to explore for subsurface targets. Rapid advancements in both land and marine multicomponent acquisition and processing techniques have led to numerous applications for P‐S surveys. Uses that have arisen include structural imaging (e.g., “seeing” through gas‐bearing sediments, improved fault definition, enhanced near‐surface resolution), lithologic estimation (e.g., sand versus shale content, porosity), anisotropy analysis (e.g., fracture density and orientation), subsurface fluid description, and reservoir monitoring. Further applications of P‐S data and analysis of other more complicated converted modes are developing.

Journal ArticleDOI
TL;DR: In this article, a nonsplitting perfectly matched layer (NPML) method for the finite-difference simulation of elastic wave propagation is presented. But the NPML requires nearly the same amount of computer storage as does the split-field approach.
Abstract: In this paper, we present a nonsplitting perfectly matched layer (NPML) method for the finite-difference simulation of elastic wave propagation. Compared to the conventional split-field approach, the new formulation solves the same set of equations for the boundary and interior regions. The nonsplitting formulation simplifies the perfectly matched layer (PML) algorithm without sacrificing the accuracy of the PML. In addition, the NPML requires nearly the same amount of computer storage as does the split-field approach. Using the NPML, we calculate dipole and quadrupole waveforms in a logging-while-drilling environment. We show that a dipole source produces a strong pipe flexural wave that distorts the formation arrivals of interest. A quadrupole source, however, produces clean formation arrivals. This result indicates that a quadrupole source is more advantageous than a dipole source for shear velocity measurement while drilling.

Journal ArticleDOI
TL;DR: In this paper, a normalization scheme for wave-equation migration algorithms that compensates for irregular illumination is proposed, which can take into account reflector dip as well as both shot and receiver geometry.
Abstract: Illumination problems caused by finite‐recording aperture and lateral velocity lensing can bias amplitudes in migration results. In this paper, I develop a normalization scheme appropriate for wave‐equation migration algorithms that compensates for irregular illumination. I generate synthetic seismic data over a reference reflectivity model, using the adjoint of wave‐equation shot‐profile migration as the forward modeling operator. I then migrate the synthetic data with the same shot‐profile algorithm. The ratio between the synthetic migration result and the initial reference model is a measure of seismic illumination. Dividing the true data migration result by this illumination function mitigates the illumination problems. The methodology can take into account reflector dip as well as both shot and receiver geometries, and, because it is based on wave‐equation migration, it naturally models the finite‐frequency effects of wave propagation. The reference model should be as close to the true model as possi...
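
The recipe in the abstract reduces to a ratio of images. A schematic sketch with caller-supplied modeling and migration callables (assumed interfaces; the stabilization constant is mine):

```python
import numpy as np

def illumination_normalize(migrated, reference, modeler, migrator, eps=1e-6):
    """Illumination compensation following the recipe described above:
    forward-model synthetic data from a reference reflectivity (modeler = adjoint
    of the migration operator), migrate it with the same operator, take the ratio
    to the reference model as the illumination function, and divide the
    real-data image by it.  `modeler` and `migrator` are caller-supplied
    callables operating on image-sized arrays."""
    synthetic = modeler(reference)                    # adjoint modeling of the reference
    illum = migrator(synthetic) / (reference + eps)   # illumination function
    return migrated / (illum + eps)                   # compensated image
```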

Journal ArticleDOI
TL;DR: In this article, a new automatic method of interpretation of magnetic data, called AN-EUL (pronounced "an oil") is presented, which is based on a combination of the analytic signal and the Euler deconvolution methods.
Abstract: We present a new automatic method of interpretation of magnetic data, called AN-EUL (pronounced “an oil”). The derivation is based on a combination of the analytic signal and the Euler deconvolution methods. With AN-EUL, both the location and the approximate geometry of a magnetic source can be deduced. The method is tested using theoretical simulations with different magnetic models placed at different depths with respect to the observation height. In all cases, the method estimated the locations and the approximate geometries of the sources. The method is tested further using ground magnetic data acquired above a shallow geological dike whose source parameters are known from drill logs, and also from airborne magnetic data measured over a known ferrometallic object. In both these cases, the method correctly estimated the locations and the nature of these sources.

PatentDOI
TL;DR: In this paper, a four-component cross-dipole data set measured in a deviated borehole in combination with the directionality of the compressional waves in the dipole data gives the orientation of bed boundaries crossing the borehole.
Abstract: Directional acoustic measurements made in the borehole are used for imaging a near-borehole geological formation structure and determination of its orientation. A four-component cross-dipole data set measured in a deviated borehole, in combination with the directionality of the compressional waves in the dipole data, gives the orientation of bed boundaries crossing the borehole. The low-frequency content (2–3 kHz) of the data allows for imaging the radial extent of the formation structure up to 15 m, greatly enhancing the penetration depth as compared to that obtained using conventional monopole compressional-wave data. A combination monopole/dipole arrangement of sources and receivers may also be used for imaging of bed boundaries.

Journal ArticleDOI
TL;DR: In this article, the authors used a poroelastic modeling algorithm to compute numerical experiments of wave propagation in White's partial saturation model, and compared the results with the theoretical predictions.
Abstract: We use a poroelastic modeling algorithm to compute numerical experiments of wave propagation in White’s partial saturation model. The results are then compared to the theoretical predictions. The model consists of a homogeneous sandstone saturated with brine and spherical gas pockets. White’s theory predicts a relaxation mechanism, due to pressure equilibration, causing attenuation and velocity dispersion of the wavefield. We vary gas saturation either by increasing the radius of the gas pocket or by increasing the density of gas bubbles. Although the modeling is two-dimensional and interaction between the gas pockets is neglected in White’s model, the numerical results show the trends predicted by the theory. In particular, we observe a similar increase in velocity at high frequencies (and low permeabilities). Furthermore, the behavior of the attenuation peaks versus water saturation and frequency is similar to that of White’s model. The modeling results show more dissipation and higher velocities than White’s model due to multiple scattering and local fluid-flow effects. The conversion of fast P-wave energy into dissipating slow waves at the patches is the main mechanism of attenuation. Differential motion between the rock skeleton and the fluids is highly enhanced by the presence of fluid/fluid interfaces and pressure gradients generated through them.

Journal ArticleDOI
Dengliang Gao
TL;DR: Case studies indicate that the VCM texture extraction method helps visualize and detect major structural and stratigraphic features that are fundamental to robust seismic interpretation and successful hydrocarbon exploration.
Abstract: Visual inspection of poststack seismic image patterns is effective in recognizing large-scale seismic features; however, it is not effective in extracting quantitative information to visualize, detect, and map seismic features in an automatic and objective manner. Although conventional seismic attributes have significantly enhanced interpreters' ability to quantify seismic visualization and interpretation, very few attributes are published to characterize both intratrace and intertrace relationships of amplitudes from a three-dimensional (3D) perspective. These relationships are fundamental to the characterization and identification of certain geological features. Here, I present a volume texture extraction method to overcome these limitations. In a two-dimensional (2D) image domain where data samples are visualized by pixels (picture elements), a texture has been typically characterized based on a planar texel (textural element) using a gray level co-occurrence matrix. I extend the concepts to a 3D seismic domain, where reflection amplitudes are visualized by voxels (volume picture elements). By evaluating a voxel co-occurrence matrix (VCM) based on a cubic texel at each of the voxel locations, the algorithm extracts a plurality of volume textural attributes that are difficult to obtain using conventional seismic attribute extraction algorithms. Case studies indicate that the VCM texture extraction method helps visualize and detect major structural and stratigraphic features that are fundamental to robust seismic interpretation and successful hydrocarbon exploration.
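
A minimal sketch of a voxel co-occurrence matrix for one texel and one offset, with two example attributes (contrast and energy) derived from it. Quantization levels, offsets, and attribute choices are illustrative, not the paper's parameters:

```python
import numpy as np

def voxel_cooccurrence(cube, levels=16, offset=(0, 0, 1)):
    """Voxel co-occurrence matrix (VCM) sketch: quantize a 3-D texel of
    amplitudes into `levels` gray levels, then count how often level i occurs
    next to level j along a given (dz, dy, dx) offset.  Simple textural
    attributes such as contrast and energy follow from the matrix."""
    q = np.digitize(cube, np.linspace(cube.min(), cube.max(), levels + 1)[1:-1])
    dz, dy, dx = offset
    nz, ny, nx = q.shape
    a = q[0:nz - dz, 0:ny - dy, 0:nx - dx]   # reference voxels
    b = q[dz:nz, dy:ny, dx:nx]               # neighbours along the offset
    vcm = np.zeros((levels, levels))
    np.add.at(vcm, (a.ravel(), b.ravel()), 1)
    vcm /= vcm.sum()
    i, j = np.indices(vcm.shape)
    contrast = ((i - j) ** 2 * vcm).sum()    # example attribute 1
    energy = (vcm ** 2).sum()                # example attribute 2
    return vcm, contrast, energy

# toy usage on a random 32^3 amplitude cube
rng = np.random.default_rng(0)
_, c, e = voxel_cooccurrence(rng.normal(size=(32, 32, 32)))
print(c, e)
```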

Journal ArticleDOI
TL;DR: In this paper, a fast, powerful numerical scheme was proposed to compute poroelastic solutions for excess pore pressure and displacements in a multilayered half-space.
Abstract: We present a fast, powerful numerical scheme to compute poroelastic solutions for excess pore pressure and displacements in a multilayered half-space. The solutions are based on the mirror-image technique and use an extension of Haskell's propagator method. They can be applied to assess in-situ formation parameters from the surface deformation field when fluids are injected into or extracted from a subsurface reservoir, or they can be used to simulate changes in pore-fluid pressure resulting from matrix displacements induced by an earthquake. The performance of the numerical scheme is tested through comparison with observations of the surface deformation as recorded by tiltmeters in the vicinity of an iteratively pumped well. Modeling of near-surface tilt data around a productive well is useful in constraining hydraulic diffusivity in the layered subsurface.

Journal ArticleDOI
TL;DR: In this paper, an acoustic wave equation for orthorhombic media was derived using dispersion relation derived under the acoustic medium assumption, which accurately describes the kinematics of P-waves.
Abstract: Using a dispersion relation derived under the acoustic medium assumption, I obtain an acoustic wave equation for orthorhombic media. Although an acoustic wave equation does not strictly describe a wave in anisotropic media, it accurately describes the kinematics of P-waves. The orthorhombic acoustic wave equation, unlike the transversely isotropic one, is a sixth-order equation with three sets of complex conjugate solutions. Only one of these sets of solutions consists of perturbations of the familiar acoustic wavefield solution for isotropic media for incoming and outgoing P-waves and, thus, is of interest here. The other two sets of solutions are simply the result of this artificially derived sixth-order equation.

Journal ArticleDOI
TL;DR: In this article, a surface fitting approach, which involves analyzing azimuthal variations in AVO gradients, is used to estimate the orientation and magnitude of the fracture-induced anisotropy.
Abstract: The delineation and characterization of fracturing is important in the successful exploitation of many hydrocarbon reservoirs. Such fracturing often occurs in preferentially aligned sets; if the fractures are of subseismic scale, this may result in seismic anisotropy. Thus, measurements of anisotropy from seismic data may be used to delineate fracture patterns and investigate their properties. Here fracture-induced anisotropy is investigated in the Valhall field, which lies in the Norwegian sector of the North Sea. This field is a chalk reservoir with good porosity but variable permeability, where fractures may significantly impact production, e.g., during waterflooding. To investigate the nature of fracturing in this reservoir, P-wave amplitude variation with offset and azimuth (AVOA) is analyzed in a 3D ocean-bottom cable (OBC) data set. In general, 3D ocean-bottom seismic (OBS) acquisition leads to patchy coverage in offset and azimuth, and this must be addressed when considering such data. To overcome this challenge and others associated with 3D OBS acquisition, a new method for processing and analysis is presented. For example, a surface fitting approach, which involves analyzing azimuthal variations in AVO gradients, is used to estimate the orientation and magnitude of the fracture-induced anisotropy. This approach is also more widely applicable to offset-azimuth analysis of other attributes (e.g., traveltimes) and any data set where there has been true 3D data acquisition, land or marine. Using this new methodology, we derive high-resolution maps of P-wave anisotropy from the AVOA analysis for the top-chalk reflection at Valhall. These anisotropy maps show coherent but laterally varying trends. Synthetic AVOA modeling, using effective medium models, indicates that if this anisotropy is from aligned fracturing, the fractures are likely liquid filled with small aspect ratios and the fracture density must be high. Furthermore, we show that the fracture-normal direction is parallel to the direction of most positive AVO gradient. In other situations the reverse can be true, i.e., the fracture-normal direction can be parallel to the direction of the most negative AVO gradient. Effective medium modeling or comparisons with anisotropy estimates from other approaches (e.g., azimuthal variations in velocity) must therefore be used to resolve this ambiguity. The inferred fracture orientations and anisotropy magnitudes show a degree of correlation with the positions and alignments of larger scale faults, which are estimated from 3D coherency analysis. Overall, this work demonstrates that significant insight may be gained into the alignment and character of fracturing and the stress field variations throughout a field using this high-resolution AVOA method.
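
The azimuthal surface fit can be linearized into an ordinary least-squares problem. A minimal sketch using the common B_iso + B_ani·cos²(φ − φ_sym) form of the AVO gradient (the standard parameterization, not necessarily the paper's exact surface-fitting scheme):

```python
import numpy as np

def fit_azimuthal_gradient(azimuths_rad, gradients):
    """Fit G(phi) = c0 + c1*cos(2*phi) + c2*sin(2*phi), the linearized form of
    B_iso + B_ani*cos^2(phi - phi_sym), to AVO gradients measured at several
    azimuths.  Returns the isotropic part, the anisotropic magnitude, and the
    symmetry azimuth (the direction of the most positive fitted gradient; the
    180-degree and fracture-normal/fracture-strike ambiguities discussed in the
    abstract remain)."""
    A = np.column_stack([np.ones_like(azimuths_rad),
                         np.cos(2 * azimuths_rad),
                         np.sin(2 * azimuths_rad)])
    c0, c1, c2 = np.linalg.lstsq(A, gradients, rcond=None)[0]
    amp = np.hypot(c1, c2)
    phi_sym = 0.5 * np.arctan2(c2, c1)
    b_ani = 2.0 * amp
    b_iso = c0 - amp
    return b_iso, b_ani, phi_sym

# toy usage: synthetic gradients with a symmetry azimuth of 30 degrees
phi = np.linspace(0, np.pi, 18, endpoint=False)
g = -0.10 + 0.06 * np.cos(phi - np.radians(30)) ** 2
print(fit_azimuthal_gradient(phi, g))
```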

Journal ArticleDOI
TL;DR: In this paper, the authors describe a method for interpreting gravity and magnetic data in terms of 3D structures, using a large number of prisms, with the depths to the tops and bottoms as unknowns to be determined by optimization.
Abstract: We describe the implementation of a versatile method for interpreting gravity and magnetic data in terms of 3D structures. The algorithm combines a number of features that have proven useful in other algorithms. To accommodate structures of arbitrary geometry, we define the subsurface using a large number of prisms, with the depths to the tops and bottoms as unknowns to be determined by optimization. Included in the optimization process are the three components of the magnetization vector and the density contrast, which is assumed to be a continuous function with depth. We use polynomial variations of the density contrast to simulate the natural increase of rock density with depth in deep sedimentary basins. The algorithm minimizes the quadratic norm of residuals combined with a regularization term. This term controls the roughness of the upper and lower topographies defined by the prisms. This results in simple shapes by penalizing the norms of the first and second horizontal derivatives of the prism depths and bottoms. Finally, with the use of quadratic programming, it is a simple matter to include a priori information about the model in the form of equality or inequality constraints. The method is first tested using a hypothetical model, and then it is used to estimate the geometry of the Ensenada basin by means of joint inversion of land and offshore gravity and land, offshore, and airborne magnetic data. The inversion helps constrain the structure of the basin and helps extend the interpretation of known surface faults to the offshore.