
Showing papers in "Geophysics in 2008"


Journal ArticleDOI
TL;DR: Because the time intervals between shots are kept large enough that the tail of one source response does not interfere with the next, the source domain in current practice is poorly sampled.
Abstract: Seismic acquisition surveys are designed such that the time intervals between shots are sufficiently large to avoid the tail of the previous source response interfering with the next one (zero overlap in time). To economize on survey time and processing effort, the current compromise is to keep the number of shots to some acceptable minimum. The result is that in current practice the source domain is poorly sampled.

389 citations


Journal ArticleDOI
TL;DR: In this paper, a new methodology, spatially constrained inversion (SCI), is proposed to produce quasi-3D conductivity modeling of electromagnetic (EM) data using a 1D forward solution.
Abstract: We present a new methodology, spatially constrained inversion (SCI), that produces quasi-3D conductivity modeling of electromagnetic (EM) data using a 1D forward solution. Spatial constraints are set between the model parameters of nearest neighboring soundings. Data sets, models, and spatial constraints are inverted as one system. The constraints are built using Delaunay triangulation, which ensures automatic adaptation to data density variations. Model parameter information migrates horizontally through spatial constraints, increasing the resolution of layers that would be poorly resolved locally. SCI produces laterally smooth results with sharp layer boundaries that respect the 3D geological variations of sedimentary settings. SCI also suppresses the elongated artifacts commonly seen in interpretation results of profile-oriented data sets. In this study, SCI is applied to airborne time-domain EM data, but it can also be implemented with other ground-based or airborne data types.

318 citations
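
The core SCI idea, inverting data, models, and spatial constraints as one system, can be sketched with a toy linear stand-in (my own illustration, not the authors' code: a real SCI uses a 1D EM forward solution and Delaunay-built constraints, and all numbers below are assumed):

```python
import numpy as np

# Hypothetical toy: five 1D soundings along a profile, each with one
# conductivity parameter observed with noise.
rng = np.random.default_rng(0)
true_model = np.array([50.0, 50.0, 50.0, 120.0, 120.0])  # mS/m
data = true_model + rng.normal(0.0, 10.0, size=5)

# Data equations: identity "forward operator" per sounding (stand-in for a
# real 1D EM forward solution), weighted by the data standard deviation.
sigma_d = 10.0
G_data = np.eye(5) / sigma_d
rhs_data = data / sigma_d

# Spatial constraints between nearest neighbours: (m_i - m_{i+1}) ~ 0,
# weighted by a constraint standard deviation that sets lateral smoothness.
sigma_c = 20.0
G_con = np.zeros((4, 5))
for i in range(4):
    G_con[i, i], G_con[i, i + 1] = 1.0 / sigma_c, -1.0 / sigma_c
rhs_con = np.zeros(4)

# Data sets, models, and constraints inverted as one system (the SCI idea):
# model information migrates laterally through the constraint rows.
A = np.vstack([G_data, G_con])
b = np.concatenate([rhs_data, rhs_con])
model, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(model, 1))
```

The joint solve trades data fit against lateral smoothness, so the recovered profile is laterally smoother than the raw data while still honoring the conductivity step.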


Journal ArticleDOI
Marta Woodward1, Dave Nichols1, Olga Zdraveva1, Phil Whitfield1, Tony Johns1 
TL;DR: In this article, the authors describe how ray-based postmigration grid tomography became the standard model-building tool for seismic depth imaging and how, with four-component data, anisotropic grid tomography can be used to build models that tie PZ and PS images in depth.
Abstract: Over the past 10 years, ray-based postmigration grid tomography has become the standard model-building tool for seismic depth imaging. While the basics of the method have remained unchanged since the late 1990s, the problems it solves have changed dramatically. This evolution has been driven by exploration demands and enabled by computer power. There are three main areas of change. First, standard model resolution has increased from a few thousand meters to a few hundred meters. This order of magnitude improvement may be attributed to both high-quality, complex residual-moveout data picked as densely as 25 m to 50 m vertically and horizontally, and to a strategy of working down from long-wavelength to short-wavelength solutions. Second, more and more seismic data sets are being acquired along multiple azimuths, for improved illumination and multiple suppression. High-resolution velocity tomography must solve for all azimuths simultaneously, to prevent short-wavelength velocity heterogeneity from being mistaken for azimuthal anisotropy. Third, there has been a shift from predominantly isotropic to predominantly anisotropic models, both VTI and TTI. With four-component data, anisotropic grid tomography can be used to build models that tie PZ and PS images in depth.

317 citations


Journal ArticleDOI
TL;DR: In this article, an elastic imaging procedure is proposed that uses multicomponent data as a boundary condition for wavefield reconstruction, rather than taking the vertical and horizontal components as proxies for the P- and S-wave modes and imaging them independently with the acoustic wave equations.
Abstract: Multicomponent data usually are not processed with specifically designed procedures but with procedures analogous to those used for single-component data. In isotropic media, the vertical and horizontal components of the data commonly are taken as proxies for the P- and S-wave modes, which are imaged independently with the acoustic wave equations. This procedure works only if the vertical and horizontal components accurately represent P- and S-wave modes, which generally is not true. Therefore, multicomponent images constructed with this procedure exhibit artifacts caused by incorrect wave-mode separation at the surface. An alternative procedure for elastic imaging uses the full vector fields for wavefield reconstruction and imaging. The wavefields are reconstructed using the multicomponent data as a boundary condition for a numerical solution to the elastic wave equation. The key component for wavefield migration is the imaging condition, which evaluates the match between wavefields reconstructed from sources and receivers. For vector wavefields, a simple component-by-component crosscorrelation between two wavefields leads to artifacts caused by crosstalk between the unseparated wave modes. We can separate elastic wavefields after reconstruction in the subsurface and implement the imaging condition as crosscorrelation of pure wave modes instead of the Cartesian components of the displacement wavefield. This approach leads to images that are easier to interpret because they describe reflectivity of specified wave modes at interfaces of physical properties. As for imaging with acoustic wavefields, the elastic imaging condition can be formulated conventionally (crosscorrelation with zero lag in space and time) and extended to nonzero space and time lags. The elastic images produced by an extended imaging condition can be used for angle decomposition of primary (PP or SS) and converted (PS or SP) reflectivity. Angle gathers constructed with this procedure have applications for migration velocity analysis and amplitude-variation-with-angle analysis.

309 citations
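
The wave-mode separation step described above can be sketched with a Helmholtz-style decomposition in 2D (my own minimal illustration; the grid, wavenumbers, and amplitudes are assumed): the divergence of the displacement field isolates the P mode and the curl isolates the S mode, so crosscorrelating pure modes avoids crosstalk.

```python
import numpy as np

# Build a 2D grid and two plane waves: one pure P (displacement parallel to
# the wave vector), one pure S (displacement perpendicular to it).
n = 128
x = np.linspace(0.0, 2 * np.pi, n)
X, Z = np.meshgrid(x, x, indexing="ij")
dx = x[1] - x[0]

kx, kz = 3.0, 4.0
k = np.hypot(kx, kz)
phase = np.sin(kx * X + kz * Z)

ux_p, uz_p = (kx / k) * phase, (kz / k) * phase   # pure P displacement
ux_s, uz_s = (-kz / k) * phase, (kx / k) * phase  # pure S displacement

def p_mode(ux, uz):   # divergence -> P-mode scalar field
    return np.gradient(ux, dx, axis=0) + np.gradient(uz, dx, axis=1)

def s_mode(ux, uz):   # curl (y-component) -> S-mode scalar field
    return np.gradient(uz, dx, axis=0) - np.gradient(ux, dx, axis=1)

# The separated fields isolate each mode: the "wrong" mode is near zero
# (up to finite-difference truncation error), so a crosscorrelation of pure
# modes carries no P/S crosstalk.
print(np.max(np.abs(s_mode(ux_p, uz_p))), np.max(np.abs(p_mode(ux_s, uz_s))))
```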


Journal ArticleDOI
TL;DR: In this article, a 2.5D fast and rigorous forward and inversion algorithm for deep electromagnetic (EM) applications that include crosswell and controlled-source EM measurements is presented.
Abstract: We present 2.5D fast and rigorous forward and inversion algorithms for deep electromagnetic (EM) applications that include crosswell and controlled-source EM measurements. The forward algorithm is based on a finite-difference approach in which a multifrontal LU decomposition algorithm simulates multisource experiments at nearly the cost of simulating one single-source experiment for each frequency of operation. When the size of the linear system of equations is large, the use of this noniterative solver is impractical. Hence, we use the optimal grid technique to limit the number of unknowns in the forward problem. The inversion algorithm employs a regularized Gauss-Newton minimization approach with a multiplicative cost function. By using this multiplicative cost function, we do not need a priori data to determine the so-called regularization parameter in the optimization process, making the algorithm fully automated. The algorithm is equipped with two regularization cost functions that allow us to reconstruct either a smooth or a sharp conductivity image. To increase the robustness of the algorithm, we also constrain the minimization and use a line-search approach to guarantee the reduction of the cost function after each iteration. To demonstrate the pros and cons of the algorithm, we present synthetic and field data inversion results for crosswell and controlled-source EM measurements.

280 citations


Journal ArticleDOI
TL;DR: In this paper, a time-domain, plane-wave implementation of 3D waveform inversion was studied as a way to derive velocities for depth imaging from wide-azimuth data.
Abstract: Prestack depth migration has been used for decades to derive velocity distributions in depth. Numerous tools and methodologies have been developed to reach this goal. Exploration in geologically more complex areas exceeds the abilities of existing methods. New data-acquisition and data-processing methods are required to answer these new challenges effectively. The recently introduced wide-azimuth data acquisition method offers better illumination and noise attenuation as well as an opportunity to more accurately determine velocities for imaging. One of the most advanced tools for depth imaging is full-waveform inversion. Prestack seismic full-waveform inversion is very challenging because of the nonlinearity and nonuniqueness of the solution. Combined with multiple iterations of forward modeling and residual wavefield back propagation, the method is computer intensive, especially for 3D projects. We studied a time-domain, plane-wave implementation of 3D waveform inversion. We found that plane-wave gathers...

278 citations
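
The plane-wave encoding that makes 3D waveform inversion tractable can be sketched as a delayed sum over shots (a toy with one trace per shot and assumed geometry, not the authors' implementation): applying the linear delay p * x_s before summation turns many shot gathers into a few plane-wave gathers.

```python
import numpy as np

nt, dt = 512, 0.004            # time samples, sample interval (s)
nshot, dxs = 16, 100.0         # number of shots, shot spacing (m)
p_true = 2.0e-4                # slowness of a linear event (s/m), assumed

# One trace per shot: a unit spike whose time follows linear moveout.
shots = np.zeros((nshot, nt))
for s in range(nshot):
    t_event = 0.4 + p_true * s * dxs
    shots[s, int(round(t_event / dt))] = 1.0

def plane_wave_trace(p):
    """Delayed sum over shots for ray parameter p (whole-sample shifts)."""
    out = np.zeros(nt)
    for s in range(nshot):
        shift = int(round(p * s * dxs / dt))
        out += np.roll(shots[s], -shift)
    return out

matched = plane_wave_trace(p_true)   # delays align the event: coherent stack
mismatched = plane_wave_trace(0.0)   # no alignment: energy stays spread out
print(matched.max(), mismatched.max())
```

With the matching ray parameter the event stacks to the full fold (16 here), while a mismatched parameter leaves the energy unstacked; a handful of such plane-wave gathers can replace thousands of individual shots in the inversion.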


Journal ArticleDOI
TL;DR: In this paper, a new discrete undersampling scheme, termed jittered undersampling, is proposed to favor wavefield reconstruction by sparsity-promoting inversion with transform elements localized in the Fourier domain.
Abstract: We present a new, discrete undersampling scheme designed to favor wavefield reconstruction by sparsity-promoting inversion with transform elements localized in the Fourier domain. The work is motivated by empirical observations in the seismic community, corroborated by results from compressive sampling, that indicate favorable (wavefield) reconstructions from random rather than regular undersampling. Indeed, random undersampling renders coherent aliases into harmless incoherent random noise, effectively turning the interpolation problem into a much simpler denoising problem. A practical requirement of wavefield reconstruction with localized sparsifying transforms is the control on the maximum gap size. Unfortunately, random undersampling does not provide such a control. Thus, we introduce a sampling scheme, termed jittered undersampling, that shares the benefits of random sampling and controls the maximum gap size. The contribution of jittered sub-Nyquist sampling is key in formulating a versatile wavefi...

275 citations
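
The distinction between fully random and jittered undersampling, namely the bound on the maximum gap, can be sketched as follows (my own toy; grid size and undersampling factor are assumed). Jittered sampling keeps one trace per bin of `gamma` consecutive positions at a random offset inside the bin, so sampling stays random (incoherent aliases) while the gap between kept traces is bounded.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 600, 3          # trace positions, undersampling factor (keep 1-in-3)

# Jittered: one random pick inside each bin of gamma positions.
jittered = np.array([b * gamma + rng.integers(gamma) for b in range(n // gamma)])
# Fully random: same number of picks, anywhere on the grid.
uniform_random = np.sort(rng.choice(n, size=n // gamma, replace=False))

def max_gap(idx):
    return int(np.max(np.diff(idx)))

print(max_gap(jittered), max_gap(uniform_random))
```

For jittered sampling the gap can never exceed 2 * gamma - 1 positions, which is exactly the control a localized sparsifying transform needs; fully random sampling typically leaves much larger gaps.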


Journal ArticleDOI
TL;DR: In this paper, the authors described an implementation of differential semblance velocity analysis based on shot-profile migration, and illustrated its ability to estimate complex, strongly refracting velocity fields.
Abstract: We described an implementation of differential semblance velocity analysis based on shot-profile migration, and illustrated its ability to estimate complex, strongly refracting velocity fields. The differential semblance approach to velocity analysis uses waveform data directly: it does not require any sort of traveltime picking. The objective function minimized by the differential semblance algorithm can measure either focusing of the image in offset or flatness of the image in (scattering) angle. We showed that the offset variant of differential semblance yields somewhat more reliable migration velocity estimates than does the scattering angle variant, and we explain why this is so. We observed that inconsistency with the underlying model (Born scattering about a transparent background) can lead to degraded velocity estimates from differential semblance, and we showed how to augment the objective function with stack power to enhance ultimate accuracy. A 2D marine survey over a target obscured by the lensing effects of a gas chimney provides an opportunity for direct comparison of differential semblance with reflection tomography. The differential semblance estimate yields a more data-consistent model (flatter angle gathers) than does reflection tomography in this application, resulting in a more interpretable image below the gas cloud.

243 citations


Journal ArticleDOI
TL;DR: In this paper, a new method for interpretation of gridded magnetic data is proposed based on derivatives of the tilt angle, which provides a simple linear equation, similar to the 3D Euler equation.
Abstract: We have developed a new method for interpretation of gridded magnetic data which, based on derivatives of the tilt angle, provides a simple linear equation, similar to the 3D Euler equation. Our method estimates both the horizontal location and the depth of magnetic bodies, but without specifying prior information about the nature of the sources (structural index). Using source-position estimates, the nature of the source can then be inferred. Theoretical simulations over simple and complex magnetic sources that give rise to noise-corrupted and noise-free data illustrate the ability of the method to provide source locations and index values characterizing the nature of the source bodies. Our method uses second derivatives of the magnetic anomaly, which are sensitive to noise (high-wavenumber spectral content) in the data. Thus, an upward continuation of the anomaly may reduce the noise effect. We demonstrate the practical utility of the method using a field example from Namibia, where the results of the proposed method show broad correlation with previous results using interactive forward modeling.

234 citations
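
The tilt angle the method differentiates is the arctangent of the vertical derivative over the total horizontal derivative of the field. A minimal sketch on a synthetic anomaly (my own illustration; the grid and anomaly are assumed, and the wavenumber-domain |k| multiplication is the standard way to get the vertical derivative of a potential field):

```python
import numpy as np

n, d = 128, 50.0                       # grid points, spacing (m), assumed
x = (np.arange(n) - n / 2) * d
X, Y = np.meshgrid(x, x, indexing="ij")
T = 100.0 * np.exp(-(X**2 + Y**2) / (2 * 800.0**2))   # synthetic anomaly (nT)

# Horizontal derivatives by finite differences.
Tx = np.gradient(T, d, axis=0)
Ty = np.gradient(T, d, axis=1)

# Vertical derivative via |k| multiplication in the wavenumber domain.
kx = 2 * np.pi * np.fft.fftfreq(n, d)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
Tz = np.real(np.fft.ifft2(np.hypot(KX, KY) * np.fft.fft2(T)))

# Tilt angle: bounded to [-pi/2, pi/2] regardless of anomaly amplitude,
# which is what makes it attractive for depth estimation.
tilt = np.arctan(Tz / (np.hypot(Tx, Ty) + 1e-12))
print(float(tilt.max()))
```

Directly over the source the horizontal gradient vanishes and the tilt approaches +pi/2, which is the geometric fact the tilt-derivative depth estimate exploits.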


Journal ArticleDOI
TL;DR: In this article, an effective approach to attenuate random and coherent linear noise in a 3D data set from a carbonate environment is discussed and demonstrated on a seismic inline section from a noisy 3D seismic cube.
Abstract: This paper discusses an effective approach to attenuate random and coherent linear noise in a 3D data set from a carbonate environment. Figure 1 illustrates a seismic inline section from a noisy 3D seismic cube. Clearly, the section in Figure 1 is corrupted by undesirable random noise and coherent noise that is linear and vertically dipping in nature.

225 citations


Journal ArticleDOI
TL;DR: In this article, the Normalized Standard Deviation (NSTD) filter is proposed for edge enhancement in potential-field data, which is based on ratios of the windowed standard deviation of derivatives of the field.
Abstract: Edge enhancement in potential-field data helps geologic interpretation. There are many methods for enhancing edges, most of which are high-pass filters based on the horizontal or vertical derivatives of the field. Normalized standard deviation (NSTD), a new edge-detection filter, is based on ratios of the windowed standard deviation of derivatives of the field. NSTD is demonstrated using aeromagnetic data from Australia and gravity data from South Africa. Compared with other filters, the NSTD filter produces more detailed results.
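
A toy implementation of the NSTD ratio (my own sketch, with an assumed window size and a crude stand-in for the vertical derivative) illustrates the filter's construction: the windowed standard deviation of the vertical derivative divided by the sum of the windowed standard deviations of all derivatives, which normalizes the response into [0, 1].

```python
import numpy as np

def windowed_std(a, w):
    """Standard deviation in a (2w+1) x (2w+1) sliding window (loop version)."""
    out = np.zeros_like(a)
    n0, n1 = a.shape
    for i in range(n0):
        for j in range(n1):
            win = a[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
            out[i, j] = win.std()
    return out

def nstd(tx, ty, tz, w=2):
    sx, sy, sz = (windowed_std(g, w) for g in (tx, ty, tz))
    return sz / (sx + sy + sz + 1e-12)

# A vertical contact: the field steps across x = 0, so the derivatives (and
# the filter response) concentrate near the edge.
n, d = 64, 1.0
x = np.arange(n) - n // 2
X, _ = np.meshgrid(x, x, indexing="ij")
T = np.tanh(X / 3.0)                 # smoothed step as a stand-in field
Tx = np.gradient(T, d, axis=0)
Ty = np.gradient(T, d, axis=1)
Tz = np.gradient(Tx, d, axis=0)      # crude stand-in vertical derivative
E = nstd(Tx, Ty, Tz)
print(float(E.max()))
```

Because the output is a ratio of standard deviations, it is dimensionless and bounded, so weak and strong edges are balanced, which is why NSTD tends to show more detail than amplitude-based derivative filters.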

Journal ArticleDOI
TL;DR: In this paper, a spectral inversion method is proposed that inverts frequency spectra for layer thickness using complex spectral analysis; applied to synthetic and real data, it improves the imaging of subtle stratigraphic features.
Abstract: Spectral inversion is a seismic method that uses a priori information and spectral decomposition to improve images of thin layers whose thicknesses are below the tuning thickness. We formulate a method to invert frequency spectra for layer thickness and apply it to synthetic and real data using complex spectral analysis. Absolute layer thicknesses significantly below the seismic tuning thickness can be determined robustly in this manner without amplitude calibration. We extend our method to encompass a generalized reflectivity series represented by a summation of impulse pairs. Application of our spectral inversion to seismic data sets from the Gulf of Mexico results in reliable well ties to seismic data, accurate prediction of layer thickness to less than half the tuning thickness, and improved imaging of subtle stratigraphic features. Comparisons between well ties for spectrally inverted data and ties for conventional seismic data illustrate the superior resolution of the former. Several stratigraphic examples illustrate the various destructive effects of the wavelet, including creating illusory geologic information, such as false stratigraphic truncations that are related to lateral changes in rock properties, and masking geologic information, such as updip limits of thin layers. We conclude that data that are inverted spectrally on a trace-by-trace basis show greater bedding continuity than do the original seismic data, suggesting that wavelet side-lobe interference produces false bedding discontinuities.
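
The principle behind inverting spectra for thickness can be sketched with the simplest impulse pair (my own toy, not the authors' algorithm; all parameters are assumed): a thin bed is two reflectivity spikes separated by the two-way time T, and its amplitude spectrum is periodic with period 1/T, so T can be read from the spacing of spectral notches even when T is well below the tuning thickness.

```python
import numpy as np

nt, dt = 1000, 0.001                  # samples, sample interval (s)
T = 0.008                             # two-way time thickness of the bed (s)

r = np.zeros(nt)
r[100] = 1.0                          # top of bed
r[100 + int(T / dt)] = -1.0           # base of bed, opposite polarity

A = np.abs(np.fft.rfft(r))            # amplitude spectrum, df = 1/(nt*dt) Hz
df = 1.0 / (nt * dt)

notches = np.where(A < 1e-8)[0]       # spectral zeros, spaced 1/T apart
T_est = 1.0 / (np.diff(notches).mean() * df)
print(T_est)                          # recovers 0.008 s
```

The estimate needs no amplitude calibration, only the notch periodicity, which is the sense in which absolute thickness below tuning is recoverable from spectra.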

Journal ArticleDOI
TL;DR: To obtain accurate image amplitudes, source- and receiver-wavefield extrapolations must accurately reconstruct their respective wavefields at the target reflector; only illumination-normalized imaging conditions yield reflection-coefficient estimates with the correct angle dependence, scale factor, sign, and required (dimensionless) units.
Abstract: Numerical implementations of six imaging conditions for prestack reverse-time migration show widely differing ability to provide accurate, angle-dependent estimates of reflection coefficients. Evaluation is in the context of a simple, one-interface acoustic model. Only reflection coefficients estimated by normalization of a crosscorrelation image by source illumination or by receiver-/source-wavefield amplitude ratio have the correct angle dependence, scale factor, and sign and the required (dimensionless) units; thus, these are the preferred imaging-condition algorithms. To obtain accurate image amplitudes, source- and receiver-wavefield extrapolations must be able to accurately reconstruct their respective wavefields at the target reflector.
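
The difference between a plain crosscorrelation image and a source-illumination-normalized image can be seen at a single image point (my own toy, not the paper's code; the wavefields are stand-ins): only the normalized condition returns a dimensionless value equal to the reflection coefficient, independent of source strength.

```python
import numpy as np

rng = np.random.default_rng(2)
nt = 400
wavelet = rng.normal(size=nt)          # stand-in source wavefield at the point
refl_coef = 0.3                        # assumed reflection coefficient
receiver = refl_coef * wavelet         # receiver wavefield: scaled source

for scale in (1.0, 10.0):              # stronger source -> stronger wavefields
    S, R = scale * wavelet, scale * receiver
    img_xcorr = np.sum(S * R)                 # scale and units depend on source
    img_norm = np.sum(S * R) / np.sum(S * S)  # dimensionless, equals refl_coef
    print(scale, img_xcorr, img_norm)
```

The crosscorrelation image grows with the square of the source strength, while the illumination-normalized image stays at 0.3 for both scales, which is why the normalized conditions are preferred for amplitude work.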

Journal ArticleDOI
Isabelle Lecomte1
TL;DR: In this article, the author shows that PSDM retrieves only a filtered version of the true subsurface reflectivity and that estimating these resolution filters helps in understanding and improving the expected quality of PSDM images.
Abstract: Prestack depth migration (PSDM) should be the ultimate goal of seismic processing, producing angle-dependent depth images of the subsurface reflectivity. But the expected quality of PSDM images is constrained by many factors. Understanding all of these factors is necessary to improve depth imaging of geologic structures. In all PSDM approaches, e.g., Kirchhoff or wave-equation, migration always includes compensating for wave propagation in the overburden (back propagation, downward continuation, etc.), before focusing back the reflected/diffracted energy at each considered location in depth (imaging). Ideally, we would like to retrieve the reflectivity of the ground as detailed as possible to invert for the elastic parameters. But the waves perceive the reflectivity through “thick glasses,” seeing blurred structures, and not necessarily all of them, depending on the illumination. Only a filtered version of the true reflectivity is therefore retrieved. Being able to estimate these filters, the so-called re...

Journal ArticleDOI
TL;DR: In this article, a physical interpretation of deconvolution interferometry based on scattering theory is presented, where the free-point or clamped-point boundary condition is circumvented by separating the reference waves from scattered wavefields.
Abstract: Interferometry allows for synthesis of data recorded at any two receivers into waves that propagate between these receivers as if one of them behaves as a source. This is accomplished typically by crosscorrelations. Based on perturbation theory and representation theorems, we show that interferometry also can be done by deconvolutions for arbitrary media and multidimensional experiments. This is important for interferometry applications in which (1) excitation is a complicated source-time function and/or (2) when wavefield separation methods are used along with interferometry to retrieve specific arrivals. Unlike using crosscorrelations, this method yields only causal scattered waves that propagate between the receivers. We offer a physical interpretation of deconvolution interferometry based on scattering theory. Here we show that deconvolution interferometry in acoustic media imposes an extra boundary condition, which we refer to as the free-point or clamped-point boundary condition, depending on the measured field quantity. This boundary condition generates so-called free-point scattering interactions, which are described in detail. The extra boundary condition and its associated artifacts can be circumvented by separating the reference waves from scattered wavefields prior to interferometry. Three wavefield-separation methods that can be used in interferometry are direct-wave interferometry, dual-field interferometry, and shot-domain separation. Each has different objectives and requirements.
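
The deconvolution variant of interferometry can be sketched in one dimension (my own illustration; the signals, lag, and water-level regularization are assumed): with the second receiver recording a delayed copy of the first, the regularized spectral division recovers a band-limited spike at the inter-receiver traveltime, regardless of how complicated the source-time function is.

```python
import numpy as np

rng = np.random.default_rng(3)
nt, lag = 512, 30
source = rng.normal(size=nt)            # complicated source signature
u1 = source.copy()                      # receiver A
u2 = np.roll(source, lag)               # receiver B: delayed copy of A

U1, U2 = np.fft.fft(u1), np.fft.fft(u2)
eps = 1e-3 * np.mean(np.abs(U1) ** 2)   # water-level regularization
# Deconvolution interferometry: divide instead of crosscorrelating, which
# removes the source spectrum rather than squaring it.
green = np.real(np.fft.ifft(U2 * np.conj(U1) / (np.abs(U1) ** 2 + eps)))
print(int(np.argmax(green)))            # peaks at the lag: 30
```

A crosscorrelation of the same two records would carry the source autocorrelation; the division collapses it, which is the advantage the abstract cites for complicated source-time functions.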

Journal ArticleDOI
TL;DR: In this article, the authors assess various approaches for inverting and interpreting time-lapse electrical resistivity tomography (ERT) data and determine that two approaches work well, preferring the use of the inverted baseline model as the reference model for subsequent time periods.
Abstract: Time-lapse electrical resistivity tomography (ERT) has many practical applications to the study of subsurface properties and processes. When inverting time-lapse ERT data, it is useful to proceed beyond straightforward inversion of data differences and take advantage of the time-lapse nature of the data. We assess various approaches for inverting and interpreting time-lapse ERT data and determine that two approaches work well. The first approach is model subtraction after separate inversion of the data from two time periods, and the second approach is to use the inverted model from a base data set as the reference model or prior information for subsequent time periods. We prefer this second approach. Data inversion methodology should be considered when designing data acquisition; i.e., to utilize the second approach, it is important to collect one or more data sets for which the bulk of the subsurface is in a background or relatively unperturbed state. A third and commonly used approach to time-lapse inversion, inverting the difference between two data sets, localizes the regions of the model in which change has occurred; however, varying noise levels between the two data sets can be problematic. To further assess the various time-lapse inversion approaches, we acquired field data from a catchment within the Dry Creek Experimental Watershed near Boise, Idaho, U.S.A. We combined the complementary information from individual static ERT inversions, time-lapse ERT images, and available hydrologic data in a robust interpretation scheme to aid in quantifying seasonal variations in subsurface moisture content.
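
The preferred strategy, using the baseline inversion as the reference model for later time periods, can be sketched with a toy linear inversion (my own stand-in, not an ERT code; the operator, noise levels, and regularization weight are assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
nd, nm, lam = 30, 10, 0.5
G = rng.normal(size=(nd, nm))          # stand-in sensitivity matrix

m0 = np.ones(nm)                       # background model (arbitrary units)
m1 = m0.copy()
m1[4] = 2.0                            # time 2: change in one cell only
d0 = G @ m0 + rng.normal(0, 0.05, nd)  # baseline data
d1 = G @ m1 + rng.normal(0, 0.05, nd)  # later data

def invert(d, m_ref):
    # minimize ||G m - d||^2 + lam * ||m - m_ref||^2 (Tikhonov with reference)
    A = G.T @ G + lam * np.eye(nm)
    return np.linalg.solve(A, G.T @ d + lam * m_ref)

base = invert(d0, np.zeros(nm))        # baseline inversion, zero reference
time2 = invert(d1, base)               # baseline model as reference/prior
change = time2 - base
print(int(np.argmax(np.abs(change))))  # largest change in cell 4
```

Because the later inversion is pulled toward the baseline everywhere the data do not demand otherwise, the recovered change concentrates where the subsurface actually changed, instead of mixing in independent inversion artifacts from two separate models.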

Journal ArticleDOI
TL;DR: This work presents a technique in which two or more shots are acquired during the time it normally takes to acquire one shot, and demonstrates that, in deep water with modest water-bottom reflectivity, no special processing is required, whereas in shallower water with stronger water- Bottom Reflectivity, the use of shot-separation techniques is necessary.
Abstract: We present a technique in which two or more shots are acquired during the time it normally takes to acquire one shot. The two (or more) shots are fired in a near simultaneous manner with small random time delays between the component sources. A variety of processing techniques are applied to produce the same seismic images which would have resulted from firing the simultaneous shots separately. These processing techniques rely on coherency of the wavefield in the common-shot domain and unpredictability in the common-receiver, common-offset, and common-midpoint domains. We present results of its application on synthetic 2D, real 2D, and real 3D data from the Gulf of Mexico. These results demonstrate that, in deep water with modest water-bottom reflectivity, no special processing is required, whereas in shallower water with stronger water-bottom reflectivity, the use of shot-separation techniques is necessary. We conclude that this technique can be used robustly to improve source sampling and, for example, ...

Journal ArticleDOI
Craig J. Beasley1
TL;DR: This article discusses a field experiment carried out to test the feasibility of employing marine sources activated simultaneously, which does not require source-signature encoding, but relies on spatial-source positioning to allow for separation of the signa...
Abstract: Cost is one of the fundamental factors that determines where and how a seismic survey will be conducted. Moreover, the cost of 3D seismic acquisition and processing often plays a significant role in determining whether or not a prospect is economic. Unit costs of seismic data acquisition and processing have dropped dramatically as the technology has matured; however, these economies have raised demand for larger and more complex acquisition plans. More than ever, there is a great need to gain efficiency. In this article, I discuss a field experiment carried out to test the feasibility of employing marine sources activated simultaneously. Simultaneous source firing has long been recognized as a possible strategy for achieving dramatic cost reductions in seismic data acquisition. This approach is novel in that it does not require source-signature encoding (although such encoding combined with this approach is beneficial), but, rather, relies on spatial-source positioning to allow for separation of the signa...

Journal ArticleDOI
TL;DR: A modification of the typical minimum-structure inversion algorithm is presented that generates blocky, piecewise-constant earth models that are often more consistent with our real or perceived knowledge of the subsurface than the fuzzy, smeared-out models produced by current minimum-structure inversions.
Abstract: A modification of the typical minimum-structure inversion algorithm is presented that generates blocky, piecewise-constant earth models. Such models are often more consistent with our real or perceived knowledge of the subsurface than the fuzzy, smeared-out models produced by current minimum-structure inversions. The modified algorithm uses l1-type measures in the measure of model structure instead of the traditional sum-of-squares, or l2, measure. An iteratively reweighted least-squares procedure is used to deal with the nonlinearity introduced by the non-l2 measure. Also, and of note here, diagonal finite differences are included in the measure of model structure. This enables dipping interfaces to be formed. The modified algorithm retains the benefits of the minimum-structure style of inversion — namely, reliability, robustness, and minimal artifacts in the constructed model. Two examples are given: the 2D inversion of synthetic magnetotelluric data and the 3D inversion of gravity data from the Ovo...
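
The l1-measure-plus-IRLS idea can be sketched in 1D (my own toy, not the paper's code; the data, weight, and iteration count are assumed): penalizing the absolute value of the model's first difference via iteratively reweighted least squares drives the solution toward a blocky, piecewise-constant model rather than a smeared l2 result.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
true = np.where(np.arange(n) < 30, 0.0, 1.0)   # blocky earth: one interface
d = true + rng.normal(0, 0.1, n)               # "data": noisy observations

D = np.diff(np.eye(n), axis=0)                 # first-difference operator
lam, eps = 1.0, 1e-4
m = d.copy()
for _ in range(30):                            # IRLS iterations
    # Reweighting 1/(|Dm| + eps) makes the quadratic penalty approximate
    # the l1 measure sum(|Dm|) at convergence.
    w = 1.0 / (np.abs(D @ m) + eps)
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    m = np.linalg.solve(A, d)

print(np.round(m[:3], 2), np.round(m[-3:], 2))
```

The plateaus flatten while the interface survives almost unshrunk, which is the behavior the abstract contrasts with the fuzzy models from the traditional l2 measure.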

Journal ArticleDOI
TL;DR: In this paper, the authors show that best fitting of induced-polarization (IP) spectra by different models of Cole-Cole type reveals discrepancies in the resulting model parameters; the time constant determined from the same data can vary in magnitude over several decades.
Abstract: Best fitting of induced-polarization (IP) spectra by different models of Cole-Cole type reveals discrepancies in the resulting model parameters. The time constant determined from the same data could vary in magnitude over several decades. This effect, which makes an evaluation of the results of different models nearly impossible, is demonstrated by induced polarization measurements in the frequency range between 1.4 mHz and 12 kHz on thirteen mixtures of quartz sand and slag grains. The samples differ in size and the amount of the slag grains. Parameters describing the IP spectra are derived by fitting models of the Cole-Cole type to the measured data. The fitting quality of the generalized Cole-Cole model, the standard Cole-Cole model, and the Cole-Davidson model is investigated. The parameters derived from these models are compared and correlated with mass percentage and grain size of the slag particles. An alternative fitting approach is introduced, using the decomposition of observed IP spectra into ...
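
For reference, the standard Cole-Cole model of complex resistivity that such fits are based on is rho(w) = rho0 * (1 - m * (1 - 1/(1 + (i*w*tau)^c))). A minimal sketch (my own; the parameter values are illustrative, not the paper's measurements, and the frequency band is similar to the one used in the study):

```python
import numpy as np

def cole_cole(f, rho0, m, tau, c):
    """Standard Cole-Cole complex resistivity.

    rho0: DC resistivity, m: chargeability, tau: time constant (s),
    c: frequency exponent (c = 1 gives the Debye model).
    """
    iwt = (1j * 2 * np.pi * f * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

f = np.logspace(-3, 4, 50)                 # 1 mHz .. 10 kHz
rho = cole_cole(f, rho0=100.0, m=0.2, tau=0.05, c=0.5)

# Limits: |rho| -> rho0 at low frequency, rho0 * (1 - m) at high frequency;
# tau and c control where and how sharply the transition occurs, which is
# why different relaxation models can trade tau over decades while fitting
# the same band-limited data.
print(abs(rho[0]), abs(rho[-1]))
```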

Journal ArticleDOI
TL;DR: It is shown that exploiting the curvelet's ability to sparsify wavefrontlike features is powerful, and the results are a clear indication of the broad applicability of this transform to exploration seismology.
Abstract: Mitigating missing data, multiples, and erroneous migration amplitudes are key factors that determine image quality. Curvelets, little “plane waves,” complete with oscillations in one direction and smoothness in the other directions, sparsify seismic data, a property we leverage explicitly with sparsity promotion. With this principle, we recover seismic data with high fidelity from a small subset (20%) of randomly selected traces. Similarly, sparsity leads to a natural decorrelation and hence to a robust curvelet-domain primary-multiple separation for North Sea data. Finally, sparsity helps to recover migration amplitudes from noisy data. With these examples, we show that exploiting the curvelet's ability to sparsify wavefrontlike features is powerful, and our results are a clear indication of the broad applicability of this transform to exploration seismology.

Journal ArticleDOI
TL;DR: In this article, the authors show that particle velocity measurements can increase the effective Nyquist wavenumber by a factor of two or three, depending on how they are used and that conventional workflows aimed at reducing these aliasing effects, such as moveout correction applied before interpolation, are compatible with multicomponent measurements.
Abstract: Three-component measurements of particle motion would bring significant benefits to towed-marine seismic data if processed in conjunction with the pressure data. We show that particle velocity measurements can increase the effective Nyquist wavenumber by a factor of two or three, depending on how they are used. A true multicomponent streamer would enable accurate data reconstruction in the crossline direction with cable separations for which pressure-only data would be irrecoverably aliased. We also show that conventional workflows aimed at reducing these aliasing effects, such as moveout correction applied before interpolation, are compatible with multicomponent measurements. Some benefits of velocity measurements for deghosting data are well known. We outline how the new measurements might be used to address some long-standing deghosting challenges of particular interest. Specifically, we propose methods for recovering de-ghosted data between streamers and for 3D deghosting of seismic data at the stream...

Journal ArticleDOI
TL;DR: In this paper, the coherency, azimuth, and dip attributes and a gray-level co-occurrence matrix (GLCM) method were used to compute the texture-based energy, entropy, homogeneity, and contrast attributes.
Abstract: Three-dimensional ground-penetrating radar (GPR) data are routinely acquired for diverse geologic, hydrogeologic, archeological, and civil engineering purposes. Interpretations of these data are invariably based on subjective analyses of reflection patterns. Such analyses are heavily dependent on interpreter expertise and experience. Using data acquired across gravel units overlying the Alpine Fault Zone in New Zealand, we demonstrate the utility of various geometric attributes in reducing the subjectivity of 3D GPR data analysis. We use a coherence-based technique to compute the coherency, azimuth, and dip attributes and a gray-level co-occurrence matrix (GLCM) method to compute the texture-based energy, entropy, homogeneity, and contrast attributes. A selection of the GPR attribute volumes allows us to highlight key aspects of the fault zone and observe important features not apparent in the standard images. This selection also provides information that improves our understanding of gravel deposition and tectonic structures at the study site. A new depositional/structural model largely based on the results of our analysis of GPR attributes includes four distinct gravel units deposited in three phases and a well-defined fault trace. This fault trace coincides with a zone of stratal disruption and shearing bound on one side by upward-tilted to synclinally folded stratified gravels and on the other side by moderately dipping stratified alluvial-fan gravels that could have been affected by lateral fault drag. When used in tandem, the coherence- and texture-based attribute volumes can significantly improve the efficiency and quality of 3D GPR interpretation, especially for complex data collected across active fault zones.

Journal ArticleDOI
TL;DR: In this article, the authors review seismic attributes such as coherence, Sobel filter-based edge detectors, amplitude gradients, dip-azimuth, curvature, and gray-level co-occurrence matrix measures, which are directly sensitive to seismic textures and morphology, and describe how attributes can be correlated to well control using multivariate analysis, geostatistics, or neural networks.
Abstract: Seismic attributes extract information from seismic reflection data that can be used for quantitative and qualitative interpretation. Attributes are used by geologists, geophysicists, and petrophysicists to map features from basin to reservoir scale. Some attributes, such as seismic amplitude, envelope, rms amplitude, spectral magnitude, acoustic impedance, elastic impedance, and AVO are directly sensitive to changes in seismic impedance. Other attributes such as peak-to-trough thickness, peak frequency, and bandwidth are sensitive to layer thicknesses. Both classes of attributes can be quantitatively correlated to well control using multivariate analysis, geostatistics, or neural networks. Seismic attributes such as coherence, Sobel filter-based edge detectors, amplitude gradients, dip-azimuth, curvature, and gray-level co-occurrence matrix measures are directly sensitive to seismic textures and morphology. Geologic models of deposition and structural deformation coupled with seismic stratigraphy princip...

Journal ArticleDOI
TL;DR: In this paper, the authors use satellite geodetic data, specifically Interferometric Synthetic Aperture Radar observations from the Krechba field, Algeria, for reservoir monitoring and characterization.
Abstract: Reservoir monitoring and characterization using satellite geodetic data: Interferometric Synthetic Aperture Radar observations from the Krechba field, Algeria. D. W. Vasco (Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720), Alessandro Ferretti and Fabrizio Novali (Tele-Rilevamento Europa, T.R.E. s.r.l., Via Vittoria Colonna 7, 20149 Milano, Italy).

Journal ArticleDOI
TL;DR: In this paper, the authors introduce seismic interferometry of passive data by multidimensional deconvolution (MDD) as an alternative to the cross-correlation method, which can correct for the effects of source irregularity, assuming the first arrival can be separated from the full response.
Abstract: We introduce seismic interferometry of passive data by multidimensional deconvolution (MDD) as an alternative to the cross-correlation method. Interferometry by MDD has the potential to correct for the effects of source irregularity, assuming the first arrival can be separated from the full response. MDD applications can range from reservoir imaging using microseismicity to crustal imaging with teleseismic data.

Journal ArticleDOI
TL;DR: In this article, the authors used elastic wave velocity measurements in the laboratory to assess the evolution of the microstructure of shales under triaxial stresses, which are representative of in situ conditions.
Abstract: Elastic wave velocity measurements in the laboratory are used to assess the evolution of the microstructure of shales under triaxial stresses, which are representative of in situ conditions. Microstructural parameters such as crack aperture are of primary importance when permeability is a concern. The purpose of these experiments is to understand the micromechanical behavior of the Callovo-Oxfordian shale in response to external perturbations. The available experimental setup allows for the continuous, simultaneous measurement of five independent elastic wave velocities and two directions of strain (axial and circumferential), performed on the same cylindrical rock sample during deformation in an axisymmetric triaxial cell. The main results are (1) identification of the complete tensor of elastic moduli of the transversely isotropic shales using elastic wave velocity measurements, (2) assessment of the evolution of these moduli under triaxial loading, and (3) assessment of the evolution of the elastic ani...
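The first result above, recovering the full transversely isotropic stiffness tensor from five independent velocities, follows standard Christoffel relations for a medium with a vertical symmetry axis. A sketch with placeholder density and velocities (illustrative numbers, not values from the Callovo-Oxfordian experiments):

```python
import numpy as np

# Five measured phase velocities of a VTI medium (placeholder values)
rho = 2400.0                                   # density, kg/m^3
vp0, vp45, vp90 = 3000.0, 3100.0, 3400.0       # P-wave at 0, 45, 90 deg to the axis
vsh0, vsh90 = 1600.0, 1800.0                   # SH-wave at 0 and 90 deg

c33 = rho * vp0**2       # axial P modulus
c11 = rho * vp90**2      # bedding-parallel P modulus
c44 = rho * vsh0**2      # axial shear modulus
c66 = rho * vsh90**2     # bedding-parallel shear modulus

# C13 follows from the 45-degree P phase velocity (Christoffel solution):
# 2*rho*Vp(45)^2 = (C11+C33)/2 + C44 + sqrt(((C11-C33)/2)^2 + (C13+C44)^2)
a = 2.0 * rho * vp45**2 - (c11 + c33) / 2.0 - c44
b = (c11 - c33) / 2.0
c13 = -c44 + np.sqrt(a**2 - b**2)

# Thomsen-style P-wave anisotropy strength from the recovered moduli
epsilon = (c11 - c33) / (2.0 * c33)
print(c13, epsilon)
```

Tracking these five moduli during triaxial loading is what lets the authors quantify the evolution of elastic anisotropy.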

Journal ArticleDOI
TL;DR: In this paper, the perfectly matched layer (PML) absorbing technique has become popular in numerical modeling in elastic or poroelastic media because of its efficiency in absorbing waves at nongrazing incidence, but after numerical discretization, large spurious oscillations are sent back from the PML into the main domain.
Abstract: The perfectly matched layer (PML) absorbing technique has become popular in numerical modeling in elastic or poroelastic media because of its efficiency in absorbing waves at nongrazing incidence. However, after numerical discretization, at grazing incidence, large spurious oscillations are sent back from the PML into the main domain. The PML then becomes less efficient when sources are located close to the edge of the truncated physical domain under study, for thin slices or for receivers located at a large offset. We develop a PML improved at grazing incidence for the poroelastic wave equation based on an unsplit convolutional formulation of the equation as a first-order system in velocity and stress. We show its efficiency for both nondissipative and dissipative Biot porous models based on a fourth-order staggered finite-difference method used in a thin mesh slice. The results obtained are improved significantly compared with those obtained with the classical PML.

Journal ArticleDOI
TL;DR: In this article, a 3D frequency-domain FD acoustic fullwaveforminversion (FWFI) was applied to the channel and thrust system of the 3D SEG/EAGEoverthrust model.
Abstract: We assessed 3D frequency-domain (FD) acoustic full-waveform inversion (FWI) as a tool to develop high-resolution velocity models from low-frequency global-offset data. The inverse problem was posed as a classic least-squares optimization problem solved with a steepest-descent method. Inversion was applied to a few discrete frequencies, allowing management of a limited subset of the 3D data volume. The forward problem was solved with a finite-difference frequency-domain method based on a massively parallel direct solver, allowing efficient multiple-shot simulations. The inversion code was fully parallelized for distributed-memory platforms, taking advantage of a domain decomposition of the modeled wavefields performed by the direct solver. After validation on simple synthetic tests, FWI was applied to two targets (channel and thrust system) of the 3D SEG/EAGE overthrust model, corresponding to 3D domains of 7 × 8.75 × 2.25 km and 13.5 × 13.5 × 4.65 km, respectively. The maximum inverted frequencies are 15 and 7 Hz for the two applications. A maximum of 30 dual-core biprocessor nodes with 8 GB of shared memory per node was used for the second target. The main structures were imaged successfully at a resolution scale consistent with the inverted frequencies. Our study confirms the feasibility of 3D frequency-domain FWI of global-offset data on large distributed-memory platforms to develop high-resolution velocity models. These high-resolution velocity models may provide accurate macromodels for wave-equation prestack depth migration.
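The core idea of minimizing a least-squares misfit by steepest descent at a few discrete frequencies can be caricatured in one dimension: fit a single homogeneous velocity so that modeled frequency-domain phases match observed ones. Every number here (offset, frequencies, step length) is invented, and the paper's 3D solver and domain-decomposed gradient are of course far beyond this toy:

```python
import numpy as np

x = 1000.0                                  # source-receiver offset (m)
freqs = np.array([3.0, 5.0, 7.0])           # a few discrete low frequencies
w = 2.0 * np.pi * freqs
v_true = 2000.0
d_obs = np.exp(-1j * w * x / v_true)        # "observed" monochromatic data

def misfit_and_gradient(v):
    d = np.exp(-1j * w * x / v)             # forward-modeled data
    res = d - d_obs
    dd_dv = 1j * w * x / v**2 * d           # analytic Frechet derivative
    J = 0.5 * np.sum(np.abs(res) ** 2)      # least-squares misfit
    g = np.sum(np.real(np.conj(res) * dd_dv))
    return J, g

v = 1800.0                                  # starting model
for _ in range(200):
    J, g = misfit_and_gradient(v)
    v -= 5e3 * g                            # fixed-step steepest descent
print(round(v))
```

As in real FWI, starting too far from the truth would let a phase residual exceed half a cycle and trap the descent in a local minimum, which is why the paper emphasizes low-frequency data.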

Journal ArticleDOI
TL;DR: In this paper, deconvolution is used to recover the impulse response between two receivers without the need for an independent estimate of the source function, which is of most use to seismic-while-drilling (SWD) applications in which pilot records are absent or provide unreliable estimates of bit excitation.
Abstract: Deconvolution interferometry successfully recovers the impulse response between two receivers without the need for an independent estimate of the source function. Here we extend the method of interferometry by deconvolution to multicomponent data in elastic media. As in the acoustic case, elastic deconvolution interferometry retrieves only causal scattered waves that propagate between two receivers as if one acts as a pseudosource of the point-force type. Interferometry by deconvolution in elastic media also generates artifacts because of a clamped-point boundary condition imposed by the deconvolution process. In seismic-while-drilling (SWD) practice, the goal is to determine the subsurface impulse response from drill-bit noise records. Most SWD technologies rely on pilot sensors and/or models to predict the drill-bit source function, whose imprint is then removed from the data. Interferometry by deconvolution is of most use to SWD applications in which pilot records are absent or provide unreliable estimates of bit excitation. With a numerical SWD subsalt example, we show that deconvolution interferometry provides an image of the subsurface that cannot be obtained by correlations without an estimate of the source autocorrelation. Finally, we test the use of deconvolution interferometry in processing SWD field data acquired at the San Andreas Fault Observatory at Depth (SAFOD). Because no pilot records were available for these data, deconvolution outperforms correlation in obtaining an interferometric image of the San Andreas fault zone at depth.
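The contrast between correlation- and deconvolution-based interferometry shows up already in a toy 1D frequency-domain example: deconvolving one receiver's record by another's cancels the unknown source function, which correlation leaves imprinted as its autocorrelation. The water-level regularization and all numbers below are illustrative choices of mine, not the paper's processing parameters:

```python
import numpy as np

n = 256
rng = np.random.default_rng(0)
wavelet = rng.standard_normal(32)          # unknown "drill-bit" excitation
g12 = np.zeros(n)
g12[40], g12[90] = 1.0, 0.5                # true inter-receiver impulse response

U1 = np.fft.rfft(wavelet, n)               # spectrum recorded at receiver 1
U2 = U1 * np.fft.rfft(g12, n)              # receiver 2 = receiver 1 convolved with g12

# Correlation alone would return g12 blurred by the source autocorrelation
# |U1|^2; water-level deconvolution divides that imprint back out:
eps = 1e-3 * np.mean(np.abs(U1) ** 2)      # water level stabilizes the division
d = np.fft.irfft(U2 * np.conj(U1) / (np.abs(U1) ** 2 + eps), n)
print(np.argmax(d))                        # dominant recovered arrival
```

This is why, with no pilot record of the bit excitation available at SAFOD, deconvolution can outperform correlation: no independent estimate of the source autocorrelation is needed.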