
Showing papers in "First Break in 2004"


Journal ArticleDOI
TL;DR: In this paper, rock physics is presented as an integral part of quantitative seismic data analysis, fundamental for fluid- and lithology-substitution, for AVO modelling, and for interpretation of elastic inversion results.
Abstract: Rock physics is an integral part of quantitative seismic data analysis and is fundamental for fluid- and lithology-substitution, for AVO modelling, and for interpretation of elastic inversion results.
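As a concrete illustration of the fluid-substitution step mentioned above, here is a minimal Python sketch of Gassmann's classical relation; the moduli and porosity are illustrative values, not taken from the paper.

```python
# Gassmann fluid substitution: a minimal sketch with illustrative values.
def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus (GPa) from dry-rock, mineral and fluid moduli and porosity."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Swap brine (K_fl = 2.8 GPa) for gas (0.05 GPa) in a 25%-porosity sand
# with K_dry = 12 GPa and K_mineral = 37 GPa (quartz-like).
print(gassmann_k_sat(12.0, 37.0, 2.8, 0.25))   # brine-saturated bulk modulus
print(gassmann_k_sat(12.0, 37.0, 0.05, 0.25))  # gas-saturated: noticeably softer
```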

152 citations


Journal ArticleDOI
TL;DR: Inversion can be either deterministic or probabilistic, and the approach can be post- and/or pre-stack, as discussed by the authors; inversion schemes generally use migrated time data as basic input.
Abstract: The interest in seismic inversion techniques has been growing steadily over the last couple of years. Integrated studies are essential to hydrocarbon development projects (e.g. Vazquez et al. 1997, Cosentino 2001) and inversion is one of the means to extract additional information from seismic data. Various seismic inversion techniques are briefly presented. Inversion replaces the seismic signature by a blocky response, corresponding to acoustic and/or elastic impedance layering. It facilitates the interpretation of meaningful geological and petrophysical boundaries in the subsurface. Inversion increases the resolution of conventional seismics in many cases and puts the study of reservoir parameters at a different level. It results in optimised volumetrics, improved ranking of leads/prospects, better delineation of drainage areas and identification of ‘sweet spots’ in field development studies (e.g. Veeken et al. 2002). The main steps in an inversion study are:
■ Quality control and pre-conditioning of the input data.
■ Well-to-seismic match, zero-phasing of data in the zone of interest and extraction of the wavelet.
■ Running of the inversion algorithm with generation of acoustic or elastic impedance cubes and extraction of attributes.
■ Visualisation and interpretation of the results in terms of reservoir development.
The inversion methods are either deterministic or probabilistic and the approach can be post- and/or pre-stack. Inversion schemes generally use migrated time data as basic input. The pre-stack method exploits AVO effects on migrated CDP gathers. There is a trade-off between method/cost/time and quality of inversion results. Feasibility studies with synthetic modelling are recommended before embarking on an inversion or AVO project (Da Silva et al., in prep.). The past track record has demonstrated the benefits of the seismic inversion method. However, it should be realised that the inversion procedure is not a unique process, i.e. there is no single solution to the given problem. Care should be taken when interpreting the inversion results. Adequate data preconditioning is a prerequisite for quantitative interpretation of the end results.
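To make the 'blocky response' idea concrete, the sketch below (Python, synthetic numbers, not any vendor's algorithm) shows the noise-free relation that post-stack acoustic inversion works against: a blocky impedance profile maps to reflectivity and back exactly.

```python
import numpy as np

# Convolutional-model backbone of post-stack acoustic inversion (illustrative only).
def impedance_to_reflectivity(z):
    """Normal-incidence reflection coefficients from an acoustic impedance trace."""
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def recursive_inversion(r, z0):
    """Recover a blocky impedance trace from reflectivity, given starting impedance z0."""
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1.0 + ri) / (1.0 - ri))
    return np.array(z)

z_true = np.array([4000.0] * 20 + [5500.0] * 15 + [4800.0] * 20)  # blocky layering
r = impedance_to_reflectivity(z_true)
z_rec = recursive_inversion(r, z_true[0])
assert np.allclose(z_rec, z_true)  # exact only in the noise-free, full-bandwidth case
```

In practice the trace is reflectivity convolved with a band-limited wavelet plus noise, which is why the deterministic and probabilistic schemes described above are needed.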

148 citations



Journal ArticleDOI
TL;DR: In this article, the authors proposed a new and efficient method for optimal aperture selection and migration in seismic imaging, where the strong-amplitude Fresnel apertures can be picked interactively and at least semi-automatically.
Abstract: We investigate possible improvements in seismic imaging. We discuss how the Fresnel zone relates to the migration aperture and introduce the concept of the Fresnel aperture, which is the direct time-domain equivalent, at the receivers’ surface, of the subsurface Fresnel zone. Through these concepts we propose a new and efficient method for optimal aperture selection and migration. For complex media, multipathing will occur and multiple Fresnel apertures can exist for a given image point. In practice, due to inaccuracies and smoothing of the background velocity macromodel, inaccuracies in the ray-tracing method used for Green’s function computations and possible noise corruption of the data, the true Fresnel apertures will, in many cases, be replaced by ‘false’ ones, with apparently new Fresnel apertures being added. Hence, contributions from these ‘false’ Fresnel apertures cause a noise-corrupted image of the subsurface. It is now assumed that the single scattered events are quite robust with respect to the above-mentioned distortions, and that their corresponding Fresnel apertures will remain essentially undistorted, with the strongest amplitudes. Based on this main assumption, we propose a method, analogous with velocity analysis, where the strong-amplitude Fresnel apertures can be picked interactively and at least semi-automatically. However, as in velocity analysis, a certain amount of user interaction has to be assumed. When this technique is combined with a prestack Kirchhoff-type depth-migration method, we call it Fresnel-aperture PSDM. This imaging method has been applied to data from both the Marmousi model and the North Sea. In both cases the improvements, when compared to conventional imaging, were considerable.
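The paper defines the Fresnel aperture in the time domain at the receiver surface; as background, the classical radius of the first subsurface Fresnel zone it builds on can be estimated with the standard zero-offset approximation below (Python, illustrative numbers).

```python
import math

# First Fresnel zone radius, r ~ (v/2) * sqrt(t0 / f_dom): the patch of the
# reflector contributing constructively to a zero-offset reflection.
def fresnel_radius(v_ms, t0_s, f_hz):
    return 0.5 * v_ms * math.sqrt(t0_s / f_hz)

print(fresnel_radius(2500.0, 2.0, 30.0))  # ~323 m for v = 2500 m/s, t0 = 2 s, 30 Hz
```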

41 citations


Journal ArticleDOI
TL;DR: Tompkins et al. as mentioned in this paper contributed to the growing body of knowledge on interpretation issues for marine controlled-source electromagnetic imaging, now an established commercial technique for detecting hydrocarbon reservoirs.
Abstract: Michael J. Tompkins, senior research geophysicist, Offshore Hydrocarbons Mapping, Aberdeen, UK contributes to the growing body of knowledge on interpretation issues for marine controlled-source electromagnetic imaging, now an established commercial technique for detecting hydrocarbon reservoirs.

38 citations


Journal ArticleDOI
TL;DR: The role neural networks play in combining different seismic attributes and effectively bringing together data with the interpreter’s knowledge to decrease exploration risk in four categories (geometry, reservoir, charge and seal) is highlighted here.
Abstract: Fred Aminzadeh and Paul de Groot of dGB Earth Sciences begin a major series of three articles on the increasing use of soft computing techniques for E&P geoscience applications, focusing first on how neural networks can enhance seismic object detection. Soft computing has been used in many areas of petroleum exploration and development. With the recent publication of three books on the subject, it appears that soft computing is gaining popularity among geoscientists. In this paper we focus on one aspect of soft computing: neural networks, in qualitative and quantitative seismic object detection. In subsequent papers we will review other aspects of soft computing in exploration. Highlighted here will be the role neural networks play in combining different seismic attributes and effectively bringing together data with the interpreter’s knowledge to decrease exploration risk in four categories (geometry, reservoir, charge and seal). Three new books in the general area of soft computing applications in exploration and development, Wong et al (2002), Nikravesh et al (2003) and Sandham et al (2003), represent a comprehensive body of literature on recent applications of soft computing in exploration. Soft computing comprises neural networks, fuzzy logic, genetic computing, perception-based logic and recognition technology. Soft computing offers an excellent opportunity to address the following issues:
■ Integrating information from various sources with varying degrees of uncertainty.
■ Establishing relationships between measurements and reservoir properties.
■ Assigning risk factors or error bars to predictions.
Deterministic model building and interpretation are increasingly replaced by stochastic and soft computing-based methods. The diversity of soft computing applications in oil field problems and the prevalence of their acceptance can be judged by the increasing interest among earth scientists and engineers. Given the broad scope of the topic, we will limit the discussion in this paper to neural network applications. In subsequent papers we will review other aspects of soft computing, such as fuzzy logic in exploration. Neural networks have been used extensively in the oil industry. Approximately 10 years after McCormack’s review (1991) of neural network applications in geophysics, much work has been done to bring such applications into the mainstream of geophysical interpretation. Some of these efforts are documented in Wong et al (2002), Nikravesh et al (2003) and Sandham et al (2003), which include many papers and extensive references on neural network applications. Most of these applications have been in reservoir characterization, seismic object detection, creating pseudo logs, and log editing. In the next section we will focus on two general areas of application of neural networks. The first comprises qualitative methods, with the main aim of examining seismic attributes to highlight certain seismic anomalies without having access to very much well information; in this case neural networks are primarily used for classification purposes. The second category involves quantitative methods, where specific reservoir properties are quantified using both seismic data and well data, and neural networks serve as an integrator of the information.
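A toy version of the qualitative (classification) use of neural networks described above, assuming scikit-learn is available; the attribute vectors are synthetic stand-ins, not the authors' data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Classify seismic-attribute vectors into object vs. background (synthetic data).
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(500, 4))  # 4 attributes per sample
objects = rng.normal(1.5, 1.0, size=(100, 4))     # anomalous attribute response
X = np.vstack([background, objects])
y = np.array([0] * 500 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(rng.normal(1.5, 1.0, size=(3, 4))))  # likely flagged as objects
```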

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new approach by adding back an estimate of the signal removed during the signal modelling, rather than adding back a percentage of the original data, which greatly improves signal preservation, making quantitative AVO and rock properties analyses much more reliable.
Abstract: There is perhaps no greater frustration to the seismic interpreter than to have signal obscured by noise. This is a common occurrence and many noise attenuation algorithms have been developed to address it. Most methods attempt to separate the desirable signal from the undesired noise, usually by making use of some transform into a domain where the signal or noise is modelled mathematically, and signal and noise can be separated. Most historical noise suppression methods stop at separating noise and signal. That is, the signal model itself is the output of the noise attenuation program. Some methods go slightly further by adding back a percentage of the original input data. LIFT, Core Lab's new proprietary amplitude-friendly technique for attenuating noise and multiples, takes a new approach by adding back an estimate of the signal removed during the signal modelling, rather than adding back a percentage of the original data. This is a fundamental shift in noise suppression strategies. It is an approach that is very flexible in that it can incorporate a variety of application domains, filtering tools, and new technologies and ways of modelling data – including future technologies as they are developed. It is an approach that greatly improves signal preservation, making quantitative AVO and rock properties analyses much more reliable. Also it is a robust amplitude-preserving way to precondition data for prestack migration, avoiding migration artifacts and costly re-runs. The primary amplitudes after LIFT are trustworthy, making prestack migration with AVO now a realistic option.
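LIFT itself is proprietary, so the sketch below is only the add-back idea in miniature: estimate the signal that leaked into the rejected noise by locally matching the noise residual against the signal model, and restore just that correlated part. The windowing and least-squares matching are assumptions for illustration, not Core Lab's method.

```python
import numpy as np

def addback(data, signal_model, win=50):
    """Return signal model plus the signal estimated to have leaked into the noise."""
    noise = data - signal_model
    out = signal_model.copy()
    for i in range(0, len(data) - win, win):
        s = signal_model[i:i + win]
        n = noise[i:i + win]
        alpha = np.dot(n, s) / (np.dot(s, s) + 1e-12)  # local least-squares match
        out[i:i + win] += alpha * s  # add back only the signal-correlated part
    return out

t = np.linspace(0.0, 1.0, 500)
true_signal = np.sin(2 * np.pi * 5 * t)
data = true_signal + 0.3 * np.random.default_rng(1).normal(size=500)
model = 0.8 * true_signal        # imperfect separation: 20% of signal rejected
restored = addback(data, model)  # recovers most of the leaked 20%
```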

28 citations


Journal ArticleDOI
TL;DR: In this paper, the relationships between various geological, structural and seismic attributes and fracture intensity are investigated, since characterizing fractured reservoirs requires a good understanding of the spatial distribution of fracture intensity.
Abstract: Fractured reservoir characterization requires a good understanding of the spatial distribution of fracture intensity. In practice, the relationships between various geological, structural and seismic attributes and fracture intensity are complex and can be highly nonlinear.

26 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe how, when marine 3D seismic surveys started in the late 1970s, field parameters and processing steps were designed based on signal theory and physical parameters, and how the need for larger data volumes and higher processing speed led to techniques that did not really follow the original designs but gave impressive results rather quickly.
Abstract: When marine 3D seismic surveys started in the late 1970s, field parameters and processing steps were designed based on signal theory and physical parameters. The need for larger data volumes and higher processing speed led to techniques that did not really follow the original designs, but gave impressive results rather quickly.

21 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a method for extracting guided waves from common-shot gathers without disturbing the reflection signals. Guided waves can be suppressed by dip-filtering techniques; however, this kind of filtering causes serious distortion of the signal when the amplitude of the guided waves is much stronger than that of the reflections.
Abstract: Guided waves are the major source of coherent noise, whether on land or marine seismic data, because the signals they produce are much stronger in amplitude than the reflected ones. In marine contexts, these waves exhibit characteristics that depend on the water depth, on the geometry and on the material properties of the substrata. Guided waves - or ground roll - are dispersive, which constitutes their main property. This means that each frequency component of the wave travels at a different velocity, in the sense that smaller and larger wavelengths are respectively influenced by the seismic properties of the shallower and deeper parts of the media. Due to their linear moveout-versus-offset characteristics, it may be possible to suppress the guided waves by dip-filtering techniques. Unfortunately, this kind of filtering causes serious distortion of the signal when the amplitude of the guided waves is much stronger than that of the reflections (Liu 1999). The method developed here consists of extracting the guided waves from common-shot gathers without disturbing the reflection signals.
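For reference, the dip (f-k) filtering that the authors argue against when guided-wave amplitudes dominate can be sketched as below; the apparent-velocity cut-off and gather geometry are illustrative assumptions.

```python
import numpy as np

def fk_dip_filter(gather, dt, dx, v_cut):
    """Mute f-k energy with apparent velocity below v_cut (where linear-moveout
    guided waves live); aggressive use distorts signal, as the text warns."""
    nt, nx = gather.shape
    F = np.fft.fft2(gather)
    f = np.fft.fftfreq(nt, dt)[:, None]  # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, dx)[None, :]  # spatial wavenumber (1/m)
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-9)
    F[v_app < v_cut] = 0.0               # reject slow (steeply dipping) events
    return np.real(np.fft.ifft2(F))
```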

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose to improve the sub-basalt image and add confidence in the reservoir modelling by using model-based imaging together with other methods, such as Radon-based multiple attenuation, in an area where the Deccan traps are composed of multilayered lava flows.
Abstract: Thick Mesozoic sediments under the basalt cover of the Deccan traps along the northwest coast of India are considered to be potential targets for hydrocarbon exploration together with the Indus basin of Pakistan. Sub-basalt seismic imaging is difficult in these formations, as the Deccan traps are composed of multilayered lava flows. Seismic imaging issues associated with high velocity basalt include: 1. Multiples generated in interbedded units of basalt and between the top of the basalt and the sea floor. 2. Energy scattering from and absorption by heterogeneities. 3. Wave mode conversion at the top of the basalt. Radon-based analysis can solve some of the difficulties related to multiples. The use of low frequency sources and multicomponent technology are other methods commonly suggested to improve sub-basalt imaging. The Deccan basalt area is very heterogeneous and has not been mapped effectively by high resolution seismic. At present, only standard streamer data are available in this area. Model-based imaging together with other methods can improve the sub-basalt image and add confidence in the reservoir modelling. Imaging challenges in this setting are discussed using examples from the Kutch basin on the northwest coast of India, where the primary reservoir (Bhuj formation) is overlain by thick basalt.

Journal ArticleDOI
TL;DR: In this article, a rational-rock-physics approach is used to map pore fluid and porosity in a large producing gas/oil reservoir using only stacked seismic data via inversion; because the soft gas sands have much lower impedance than the oil- and water-saturated sands, the pore fluid can be identified from P-wave data alone, without using offset information.
Abstract: We use rock physics to map pore fluid and porosity from seismic data in a vertical section between two wells. First, well log data are used to establish an effective-medium model that links the impedance to pore fluid and porosity. Next, stacked seismic data are used to produce P-wave impedance inversion. Finally, the rock physics transform is applied to the impedance section to identify pore fluid and produce a porosity section. For decades, the main use of seismic data has been to delineate sedimentary bodies and tectonic features in the subsurface. The mission of exploring inside the geological body is a relatively recent development. Mapping porosity, lithology and other reservoir bulk properties inside the geological body has become possible due to the recent dramatic improvement in seismic acquisition, imaging and inversion quality, as well as the accompanying advances in rock physics. Rock physics provides transforms between a reservoir's elastic properties and its bulk properties and conditions, including porosity, lithology, pore fluid and pore pressure. Such transforms are known as trends. Trends are built from controlled experiments where both the elastic and bulk properties of rock are measured on the same samples under the same conditions. The most commonly used source of such experimental data in modern rock physics is the borehole measurement. For example, an empirical impedance–porosity trend developed from sonic, density and porosity curves can be applied to a seismic acoustic impedance volume in order to map porosity in 3D. However, it is always advantageous not only to find an empirical trend but also to understand the physical laws that determine the trend or, in other words, find an appropriate effective-medium model. Such rationalization of an empirical trend by effective-medium modelling generalizes the trend, determines the domains of its applicability, and thus reduces the risk of using the trend outside the immediate data range. Here, we illustrate the rational-rock-physics approach by mapping porosity in a large producing gas/oil reservoir. Well data are used to establish a transform from impedance to porosity, based on rock-physics theory. This transform is then applied to a vertical impedance section obtained from stacked seismic data via inversion. The reservoir under examination consists of relatively soft sands. As a result, the acoustic impedance of the gas-saturated sand is much lower than that of the oil- and water-saturated sand. This large impedance difference allows us to identify the pore fluid from P-wave data only, without using offset information. As a result, we map both pore fluid and porosity, using only stacked seismic data.
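The trend step can be miniaturized as follows: fit an impedance-to-porosity transform on well data, then apply it to an inverted impedance section. The linear trend and all numbers are synthetic stand-ins, not the paper's effective-medium model.

```python
import numpy as np

# Fit an empirical porosity(Ip) trend at the wells, then map an inverted section.
rng = np.random.default_rng(2)
phi_well = rng.uniform(0.15, 0.35, 200)                       # log porosity
ip_well = 9.0e6 - 1.6e7 * phi_well + rng.normal(0, 2e5, 200)  # softer rock, lower Ip

a, b = np.polyfit(ip_well, phi_well, 1)           # linear impedance-porosity trend
ip_section = rng.uniform(3.0e6, 7.0e6, (50, 50))  # impedance from post-stack inversion
phi_section = a * ip_section + b                  # mapped porosity section
```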

Journal ArticleDOI
TL;DR: Fairhead et al. as mentioned in this paper claim that new refinements in resolution are providing increasing value for satellite-derived gravity data in hydrocarbon exploration.
Abstract: J. Derek Fairhead, Chris M. Green and Kirsten M.U. Fletcher of GETECH claim new refinements in resolution are providing increasing value for satellite-derived gravity data in hydrocarbon exploration.

Journal ArticleDOI
TL;DR: The authors discuss new ways of monitoring production, where the response of the reservoir to production procedures can be calculated, possibly predicted, and even potentially controlled by feedback, offering enormous potential advantages that may be exploited.
Abstract: Recent theory and observations suggest that the fluid-saturated microcracks in hydrocarbon reservoirs (and most other in situ rocks) are so closely spaced that they are critical systems verging on failure by fracturing. As a result reservoirs are highly compliant and respond to small changes with ‘butterfly wings’ sensitivity. However, these phenomena cannot be imaged with conventional technology and the largest effects may be possible complications when recovering hydrocarbons. This critical behaviour leads to a New Geophysics, where the response to changes in fluid-saturated rock (during hydrocarbon production, for example) necessarily varies both spatially and temporally so that detailed measurements degrade with time from the moment they are made. This means that behaviour cannot be averaged. Consequently, many (perhaps most) standard oil-field procedures may not be wholly or strictly valid. The typical (and extraordinarily low) 30% recovery from most oil reservoirs is at least partly explained by the sensitivity of reservoirs and behaviour inexplicable in terms of conventional geophysics. The New Geophysics is the cause of at least some of the difficulties in standard oil-field procedures, but does offer enormous potential advantages that may be exploitable. This article discusses new ways of monitoring production, where the response of the reservoir to production procedures can be calculated, possibly predicted, and even potentially controlled by feedback.

Journal ArticleDOI
TL;DR: In this article, a 3D seismic survey was acquired in order to create a stratigraphic model, consistent with all available well control and matching the production history, and the ultimate goal was to locate undeveloped potential within the gas sands.
Abstract: The target area is a Lower Cretaceous glauconite-filled fluvial channel, deposited within an incised valley system. A 3D seismic survey was acquired in order to create a stratigraphic model, consistent with all available well control and matching the production history. The ultimate goal was to locate undeveloped potential within the gas sands. The field has been producing since the early 1980s and two of the earliest, most prolific producers have begun to water out. As the objective was stratigraphic in nature, the seismic data were processed with the objective of preserving relative amplitude relationships in the offset domain to allow for the use of AVO attribute analysis. AVO inversion for Lame parameters (λρ and µρ) has become a common practice as it serves to enhance identification of reservoir zones. Also, integration of AVO-derived attribute volumes with other non-AVO-derived seismic attribute volumes can provide meaningful geologic information when tied back to well data and verified as correlating with rock properties. Computation of reservoir properties for determination of mathematical relationships between variables derived from well logs, for example, is usually done with non-linear multivariate determinant analysis using neural networks. This paper provides a case study of a 3D seismic survey in southern Alberta, Canada, where a probabilistic neural network solution was first employed on AVO attributes (Pruden, 2002; Chopra & Pruden, 2003). Using the gamma-ray, acoustic and bulk density log curves over the zone of interest, gamma-ray and bulk density inversions were derived from the 3D attribute volumes. This methodology was successful, in that two new drilling locations derived from this work encountered a new gas-charged reservoir that not only extended the life of the gas pool but added new reserves as well. Later, instead of neural networks, a different mathematical approach using cubic b-splines was utilized for the same purpose. The results were found to be similar, suggesting that, apart from neural networks, cubic b-splines could be used as a tool for tackling non-linearity in multi-attribute seismic analysis.
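The cubic b-spline alternative mentioned above can be sketched with SciPy: fit a smooth non-linear mapping from a seismic attribute to a log property. The data and smoothing factor are illustrative assumptions, not the study's.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(3)
attr = np.sort(rng.uniform(0.0, 1.0, 80))              # one seismic attribute at wells
prop = np.sin(3.0 * attr) + 0.1 * rng.normal(size=80)  # non-linear target property

tck = splrep(attr, prop, k=3, s=1.0)           # cubic b-spline fit with smoothing
pred = splev(np.linspace(0.0, 1.0, 200), tck)  # property predicted away from wells
```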

Journal ArticleDOI
TL;DR: Based on a presentation given by the author at the 2001 EAGE meeting in Amsterdam, this article aims to stimulate the reader's thinking about what seismic data processing really does to the data; the author concludes by inviting others to present their views at future EAGE meetings or in First Break.
Abstract: This paper is based on a presentation given by the author at the 2001 EAGE meeting in Amsterdam. At the end of his talk the author said: ‘If my presentation has stimulated your thinking on what seismic data processing really does to the seismic data, then I have succeeded in my efforts. Some of you may disagree with me, and that is fine. Hopefully your disagreement will be constructive, and the net result will be an even better understanding of what stacking does to the data.’ He continued by inviting others to present their views at future EAGE meetings, or in First Break. We hope prospective authors will accept the challenge, and we look forward to publishing any interesting comments and discussion on this very important topic.

Journal ArticleDOI
TL;DR: In this article, it is shown that seismic waves propagating through the earth are attenuated: as these elastic waves travel deeper they lose energy to absorption, in contrast to spherical spreading, where energy is spread over a wider area, and to reflection and transmission at interfaces, where energy is redistributed in the upward or downward directions.
Abstract: It is a common observation that seismic waves propagating through the earth are attenuated. As these elastic waves travel deeper they lose energy, in contrast to spherical spreading, where energy is spread over a wider area, and reflection and transmission of energy at interfaces, where its redistribution occurs in the upward or downward directions. This loss is frequency dependent: higher frequencies are absorbed more rapidly than lower frequencies, such that the highest frequency usually recovered on most seismic data is about 80 Hz. Moreover, absorption appears to vary with the lithology of the medium. The unconsolidated near-surface absorbs more energy than the underlying compact rocks. In the extreme case most of the energy may be absorbed in the first few hundred metres of the subsurface. It is therefore important to study absorption and to determine ways in which it can be detected in seismic data.
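The frequency dependence described here is commonly modelled with a constant-Q decay, A(f, t) = A0 exp(-pi f t / Q). The short sketch below shows how much faster 80 Hz fades than 10 Hz over two seconds of travel time for an assumed Q of 100.

```python
import numpy as np

def q_decay(f_hz, t_s, q):
    """Constant-Q amplitude decay factor exp(-pi * f * t / Q)."""
    return np.exp(-np.pi * f_hz * t_s / q)

for f in (10.0, 40.0, 80.0):
    print(f"{f:5.0f} Hz after 2 s (Q=100): {q_decay(f, 2.0, 100.0):.4f}")
```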


Journal ArticleDOI
TL;DR: Morice et al. as mentioned in this paper describe how new integrated seismic data processing techniques, combined with a rich borehole geophysics dataset, offer opportunities to significantly enhance 3D seismic preconditioning, prestack imaging and inversion to elastic properties.
Abstract: Steve Morice, product champion, well-driven seismic, WesternGeco, Jean-Claude Puech, borehole seismic co-ordinator, Schlumberger, Europe, CIS and Africa, and Scott Leaney, geophysics advisor, Schlumberger describe how new integrated seismic data processing techniques, combined with a rich borehole geophysics dataset, offer opportunities to significantly enhance 3D seismic preconditioning, prestack imaging and inversion to elastic properties. Meeting the demands for higher resolution, signal-to-noise ratio, positional accuracy and amplitude fidelity from 3D seismic data requires enhanced algorithms and techniques throughout the entire data processing sequence. Well data, in the form of wireline logs and vertical seismic profiles (VSPs), provide unique constraints on key seismic data processing parameters as well as a calibration of the 3D processing sequence in terms of well-to-seismic ties. New methods for the combined analysis of wireline logs, VSPs and prestack seismic data provide invaluable information about seismic velocities (P and S), anelastic attenuation (Q) factors, velocity anisotropy (of various symmetry axes) and multiples. The logs, VSPs and seismic data should be processed and analysed together - reconciling differences due to the basic geophysics of the various measurements, the range of resolution scales, and the different sources of errors and uncertainties - for a unified and self-consistent borehole and surface-seismic dataset. The derivation of seismic properties from well logs and VSPs is covered extensively in the geophysical literature (see Further Reading below). The purpose of this article is to describe and illustrate how these properties can be applied to address some of the major challenges in conventional 3D seismic data processing.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the complex structuring associated with ramp-flat-ramp extensional master faults, as well as the strain pattern in the hanging-wall fault block of such faults, with the help of analogue Plaster of Paris models.
Abstract: The complex structuring associated with ramp-flat-ramp extensional master faults, as well as the strain pattern in the hanging-wall fault block of such faults, have been analysed with the help of analogue Plaster of Paris models. This analysis shows that the shallow-dipping master fault commonly develops several fault branches due to either asperity bifurcation or hanging-wall and footwall splaying, and that the deeper parts of steeper early generation faults are sometimes cut by younger faults. This generates a very complex fault pattern in the deeper part of the hanging-wall fault block. Based on the analogue models, the possibility of generating synthetic reflection seismograms by the use of simple modelling techniques (ray tracing and finite-difference models) has been investigated. It is concluded that the two methods produce significantly different results in terms of resolution and noise. The ray-tracing method resulted in a highly idealized image that can be viewed as the end product after an idealized processing sequence in that no noise, no multiples or other artefacts are incorporated. Furthermore, all reflections are in the correct position and are displayed with the correct amplitude in each case. Although the result can be used as a good reference for what can be obtained by simple seismic modelling, the weak aspect of this method is obviously the over-simplified and unrealistically ‘clean’ image presented. A more ‘realistic’ reflection seismic image is obtained in cases where the synthetic seismogram is generated by the finite-difference method. However, the structural features stand out with less clarity, and it is less likely that the interpreter of this section would be able to identify the two master fault planes or to distinguish the complex structural pattern in the deep part of the hanging-wall fault block.


Journal ArticleDOI
TL;DR: Paulsson et al. as mentioned in this paper proposed to record multi-component seismic data using receivers positioned deep in the earth, and closer to the target-zone, to overcome many of the limitations experienced by surface 3D seismic methods.
Abstract: Bjorn Paulsson, Martin Karrenbach, Paul Milligan, Alex Goertz, and Alan Hardin of Paulsson Geophysical Services, with John O'Brien and Don McGuire of Anadarko Petroleum Corporation explain why recording multi-component seismic data using receivers positioned deep in the earth, and closer to the target-zone, can overcome many of the limitations experienced by surface 3D seismic methods. Borehole seismic surveys, commonly known as Vertical Seismic Profiling (VSP), have been an industry-standard technique for several decades. In the past, however, these data have been used primarily for check-shot type velocity surveys and for reflection mapping at the well location in a one-dimensional fashion. This 1D measurement can be extended to 2D by using one or more walk-away lines of surface source points. The 2D method works well enough for imaging simple layered stratigraphy, but in a complex reservoir a full 3D data acquisition and imaging solution needs to be pursued. Inserting seismic sensors deep into oil and gas wells, as shown in Figure 1, allows the recording of much higher frequencies as compared to placing sensors at the Earth’s surface. The reason for this is simple: seismic waves have to propagate only once through the weathered layer in a confined zone near the source. In contrast, during surface seismic surveys, waves must travel through the weathered layer twice. Each traversal of the weathered layer attenuates high frequencies much more than the low frequencies, thus reducing the image resolution. The frequency content of borehole seismic data is typically more than twice that of surface seismic data, which provides an increase in subsurface resolution. In addition to recording higher frequency data, borehole seismic sensors provide a number of other advantages: borehole seismic data typically achieve a much higher signal-to-noise ratio than surface seismic data. The combination of a quiet borehole environment and strong sensor coupling to the borehole wall enables such a high signal-to-noise ratio. Surface geophones, on the other hand, are generally poorly coupled in weathered rock and exposed to cultural and environmental noise at the surface. Good sensor coupling in the borehole enables three-component (3C) seismic data to be recorded with high vector fidelity. This ultimately allows shear and converted-wave imaging as well as the determination of anisotropy by shear wave splitting analysis (see, e.g., Maultzsch, 2003). Combining P and S wave images allows for attribute inversions of rock properties, such as fluid content, pore pressure, stress direction and fracture patterns. O’Brien et al. (2004b) use time-lapse borehole seismic to map changes in such critical attributes for production monitoring purposes. Another advantage of borehole seismic surveys is a favourable geometry to illuminate complex structures such as sub-salt targets, salt flanks or steeply dipping faults. The 3D image volume that can be generated from a large downhole seismic array is shown in Figure 1. The typical 3D borehole seismic image volume is cone shaped, with the top of the cone coincident with the top receiver in the borehole array. The size of the base of the cone is determined by the depth of the image volume and the offset of the sources.

Journal ArticleDOI
TL;DR: The Sercel SN388 and 408UL systems are widely used by contractors, together representing about 750 000 channels and 600 central units as mentioned in this paper, and their design changes have contributed to faster acquisition and better data quality.
Abstract: Denis Mougenot, chief geophysicist of Sercel, the French-based manufacturer of seismic acquisition systems, provides examples of how his company is tackling needed improvements in land seismic acquisition technology. The geophysical industry has been struggling to achieve profitability for some time (IAGC, 2003). In such a market, some seismic acquisition system manufacturers realize that contractors will only spend their precious capital on equipment for which they can get a near-term return. This has led these manufacturers to focus their development effort on system improvements that enable contractors to collect more seismic data per day, in other words to be more productive. This article describes some of these specific productivity enhancements. Improvements are illustrated by the features in the Sercel SN388 system and by the revolutionary 408UL system introduced in late 1999. These two 24-bit recording systems are widely used by contractors and represent together about 750 000 channels and 600 central units. Changes that have contributed to faster acquisition and better data quality are considered. Improvements in seismic acquisition productivity have contributed to reducing the cost of seismic data and have helped the industry reduce the cost of finding and recovering oil and gas. These improvements can give some insight into future evolutions. It is well known that an oversupply of crews within the seismic contracting industry and the demand for lower prices in new contracts have both contributed to an understanding that lower data acquisition cost is a survival need for contractors. At the same time, geophysical demands have required higher fold as well as larger offsets and uniform azimuthal distributions. Together this means that while demand for lower costs increases, the density and number of traces produced is also on the increase. In this context of low cost per trace and high fold per survey, seismic crews must improve their productivity to be profitable. Many factors can influence the productivity of seismic crews. Many of these factors are not influenced by the acquisition system, for example, geophysical planning, logistical planning, personnel choices, contractual issues, HSE, local culture, and oil company requirements, to name just a few. But there are also many ways that the acquisition system can impact how much data is collected per day. Productivity improvements can come from: 1) shorter time to troubleshoot the line; 2) shorter lost time between records; 3) the ability to record more lines and channels; 4) a new paradigm for acquisition field equipment, the use of a Link between multiple single-channel units rather than the old technique of a fixed number of channels in boxes and cables; 5) automated QC, so that the observer doesn’t need to spend so much time visually inspecting records; 6) multi-fleet vibrator techniques that allow operating without a delay for vibrator move-up; 7) overlapping record techniques, such as Slip-Sweep, that allow collection of multiple records simultaneously; 8) built-in redundancy to allow a system to continue operation even with the inevitable damaged cable; 9) high enough system uptime and reliability to allow 24-hour acquisition in some areas; 10) very low power operation to minimize battery handling and replacement; and 11) lighter-weight field equipment. All of these issues will be discussed further in this article.

Journal ArticleDOI
TL;DR: A review of the latest refinements in the revitalised use of airborne magnetic gradiometers for the mining industry can be found in this article, where Scott Hogg et al. present several methods that rely specifically on measured, not calculated, gradient.
Abstract: Scott Hogg, whose company Scott Hogg & Associates, based in Toronto, Canada, offers services to the exploration and airborne geophysical industry, reviews the latest refinements in the revitalised use of airborne magnetic gradiometers for the mining industry. The oil and gas exploration industry spurred the development of the first airborne magnetic gradiometers. The motivation was to use Euler’s equation with measured vertical gradient to calculate depth to magnetic source. Aeroservice introduced both a helicopter system and a fixed-wing towed-bird system in the 1960s. Interest in the Euler theory for magnetic analysis was rekindled in the mining industry 20 years later and continues to be the foundation of many new interpretation methods. In Canada, the GSC developed a vertical magnetic gradiometer in the 1970s. A national mapping programme began in the early 1980s and in response Canadian airborne survey contractors created a variety of fixed-wing and helicopter systems. The mining sector’s interest in airborne gradient measurement was based primarily on the increased spatial resolution and detail: small anomalies on the flanks of large features could be clearly resolved. Calculated vertical gradient maps, produced by simple filtering of total field, were found to provide almost the same benefit as measured gradient, at less cost, and the commercial interest in measured vertical gradient faded. Geometrics introduced a horizontal gradiometer in 1983. This development was of particular significance since, at the same time, they incorporated a technique developed by Nabighian and Hansen to derive a pseudo total field residual from measured horizontal gradients. The demise of the Geometrics survey division took horizontal gradients off the horizon for a while. A decade later, this same concept was used by Nelson of the NRC, and more recently implemented in a variety of forms by De Beers and others. Geodass in Botswana, now Fugro, introduced the first 3-axis gradiometer in the early 1990s. In conjunction with the gradient measurements it provided a variety of compilation and mapping services that made use of the information. In Canada, Terraquest was the first to provide horizontal gradient measurement and Goldak the first to provide a full 3-axis fixed-wing gradiometer. At present, almost all of the airborne contractors offer horizontal gradient systems, and several can now provide full 3-axis configurations for simultaneous vertical and horizontal gradient measurement. Considerable interest in magnetic gradient measurement has arisen over the past few years. Some of the benefits of gradient are well founded, some are overstated, and many are poorly understood. Magnetic gradient measurements can be used to advantage in interpretation. Vertical gradient maps, analytic signal maps, and a host of Euler-based methods all use gradient information. The gradient information for these purposes may be calculated or measured. This review addresses methods that rely specifically on measured, not calculated, gradient. At present there are three such primary applications. The first is the potential to avoid diurnal interference, the second is to correct total field for variations in aircraft altitude, and the third is to make significant improvements in the accuracy and resolution of magnetic maps.
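The Euler depth-to-source idea that motivated the first gradiometers reduces to a least-squares solve of Euler's homogeneity equation, (x - x0)Tx + (y - y0)Ty + (z - z0)Tz = N(B - T), using the measured gradients; a minimal sketch follows, with the structural index N assumed known.

```python
import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
    """Least-squares source position (x0, y0, z0) and background field B from
    total field T and its measured gradients (Tx, Ty, Tz) in a data window."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # x0, y0, z0, B
```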

Journal ArticleDOI
S. Wilson, R. Jones, W. Wason, D. Raymer, P. Jaques 
TL;DR: Wilson et al. as mentioned in this paper describe how passive seismic monitoring technology is making its way into the mainstream as a value proposition for the management of hydrocarbon resources, now that the use of 4D seismic is established.
Abstract: Stephen Wilson, Rob Jones, Will Wason, Daniel Raymer and Paul Jaques of Vetco Gray, Cornwall, UK (formerly part of ABB) describe how passive seismic monitoring technology is making its way into the mainstream as a value proposition for the management of hydrocarbon resources. The use of 4D seismic as a mainstream technology in the management of hydrocarbon reservoirs is now established. In contrast to the traditional perception of seismic technology as an exploration tool, the value of 4D seismic sits securely on the production side of oilfield technology. This shift in emphasis within the seismic industry to encompass both production and exploration work has recently taken on a higher profile as a result of the difficulty of increasing reserves purely by exploration. New production technology now offers an alternative path to increasing booked reserves. The widening scope of seismic applications and the increasing number of reservoir geophysicists is helping to bring forward another seismic technology capable of greatly improving our understanding of reservoir dynamics. That technology is passive seismic monitoring. During the past few years the implementation of passive seismic monitoring as a mainstream technology for the management of hydrocarbon resources has been gathering pace. Recent permanent passive seismic studies in Oman have shown the capabilities of this technology to provide information upon which reservoir management decisions can be made (Jones et al, 2004). Knowledge of the existence and capabilities of the technology within our industry is reaching a critical mass and the technological barriers to its uptake are disappearing. Perhaps the most critical of these barriers concerns the ability to monitor microseismic activity from within active wells during production or injection. Recent developments in downhole tool technology allow the deployment of downhole seismic sensors capable of a 30-40 dB improvement in signal performance when compared with previous technologies (Jaques et al., 2003). In addition to improvements in tool technology, software applications capable of delivering automatic microseismic locations to the client’s desktop in real time are now available (Jones and Wason, 2004). The advent of 4D has improved our ability to observe reservoir performance and make timely decisions about reservoir operations. The deployment of permanent ocean-bottom systems provides scope for improving the speed with which reservoir management decisions can be made by reducing turnaround time. Passive seismic monitoring further supports this improved decision-making capability by delivering real-time information about the reservoir to the desktop within minutes. In terms of instrumentation, permanent downhole seismic sensors represent the cornerstone for the implementation of full-field continuous passive seismic monitoring. The prospect of permanent downhole seismic sensors for use during 4D studies offers the prospect of accurate well ties, wavelet characterisation and VSP on demand. The combined value proposition for passive seismic monitoring and 4D seismic using downhole instrumentation may now be sufficient to drive the deployment of these systems. Figure 1 illustrates this virtuous circle of 4D seismic promoting the take-up of microseismic technology, which in turn helps 4D to better resolve reservoir change. All of this is driven by the value proposition of new technology as a method of improving recovery, increasing NPV and booking reserves.
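A toy version of the automatic microseismic location mentioned above: grid-search a P-wave travel-time misfit from downhole sensors, with the unknown origin time removed by demeaning. The geometry, velocity and grid are illustrative; commercial real-time systems are far more sophisticated.

```python
import numpy as np

def locate(sensors, t_obs, vp, grid):
    """Grid-search hypocentre from P arrival times (origin time removed by demeaning)."""
    best, best_err = None, np.inf
    for pt in grid:
        t_pred = np.linalg.norm(sensors - pt, axis=1) / vp
        err = np.sum(((t_obs - t_obs.mean()) - (t_pred - t_pred.mean())) ** 2)
        if err < best_err:
            best, best_err = pt, err
    return best

sensors = np.array([[0.0, 0.0, 1000.0], [0.0, 0.0, 1250.0],
                    [0.0, 0.0, 1500.0], [200.0, 0.0, 1250.0]])  # downhole array (m)
event = np.array([300.0, 100.0, 1300.0])
t_obs = np.linalg.norm(sensors - event, axis=1) / 3000.0        # vp = 3000 m/s
grid = [np.array([x, y, z]) for x in range(0, 600, 50)
        for y in range(0, 300, 50) for z in range(1000, 1600, 50)]
print(locate(sensors, t_obs, 3000.0, grid))  # recovers [300, 100, 1300]
```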


Journal ArticleDOI
G. Larson, David Gray, S. Cheadle, G. Soule, Y. Zheng 
TL;DR: Todorovic-Marinic et al. as discussed by the authors report progress with a new seismic attribute for identifying productive vertically aligned fractures, cracks or micro-cracks in gas reservoirs using surface seismic data.
Abstract: In this Canadian case study Dragana Todorovic-Marinic, Glenn Larson, David Gray, Scott Cheadle, Greg Soule and Ye Zheng report on their progress with a new seismic attribute in identifying productive vertically aligned fractures, cracks or micro-cracks in gas reservoirs using surface seismic data. Fractures are of great interest for hydrocarbon production. They can either hurt or help production depending on the nature of the reservoir being explored, so knowledge of their distribution and orientation can be critical to exploration success. Vertically aligned fractures, cracks or micro-cracks are known causes of Horizontal Transverse Isotropy (HTI). This type of anisotropy often has a horizontal axis aligned with open vertical fracturing that trends parallel to the maximum horizontal stress and normal to the minimum horizontal stress. It is widely recognised (e.g. Hall et al, 2000; Gray et al, 2002) that HTI anisotropy has a strong effect on the seismic amplitude. This can be measured by fitting the parameters of the P-wave Amplitude Versus Angle and Azimuth (AVAZ) equation of Ruger (1996) to surface seismic data. The outputs are seismic attributes that contain information that may be relevant to the fracturing. The P-wave reflectivity is the response of the rock to compression by the seismic wave and provides information on the rock's lithology and fluid content. The S-wave reflectivity is the response of the rock to shearing by the seismic wave and consists primarily of information about the lithology. The anisotropic gradient describes the variations of the AVO gradient with azimuth and is related to the crack density, i.e. to the magnitude of the differential horizontal permeability (Lynn et al, 1996). The azimuth of the anisotropic gradient is the orientation of the symmetry axis of an HTI medium. To the extent that a reservoir with vertical open fractures represents an HTI medium, the azimuth of the anisotropic gradient indicates the orientation of the fractures.
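In the spirit of the Ruger-style equation cited above, R(theta, phi) ≈ A + [B_iso + B_ani cos²(phi - phi_sym)] sin²(theta), the azimuthal fit can be linearised with cos²x = (1 + cos 2x)/2 and solved by least squares. The synthetic amplitudes below stand in for migrated-gather picks; real workflows fit noisy data per bin.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.radians(rng.uniform(5.0, 35.0, 400))  # incidence angles
phi = np.radians(rng.uniform(0.0, 180.0, 400))   # source-receiver azimuths
A_t, Biso_t, Bani_t, phis_t = 0.10, -0.20, 0.08, np.radians(60.0)
R = A_t + (Biso_t + Bani_t * np.cos(phi - phis_t) ** 2) * np.sin(theta) ** 2

s2 = np.sin(theta) ** 2  # linearised model: A + C*s2 + 0.5*s2*(u cos2phi + v sin2phi)
G = np.column_stack([np.ones_like(R), s2,
                     0.5 * s2 * np.cos(2 * phi), 0.5 * s2 * np.sin(2 * phi)])
A, C, u, v = np.linalg.lstsq(G, R, rcond=None)[0]
B_ani = np.hypot(u, v)            # anisotropic gradient magnitude
phi_sym = 0.5 * np.arctan2(v, u)  # symmetry-axis azimuth (90-degree ambiguity)
B_iso = C - 0.5 * B_ani
print(B_iso, B_ani, np.degrees(phi_sym))  # ~ -0.20, 0.08, 60
```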


Journal ArticleDOI
TL;DR: Bland et al. as mentioned in this paper presented a technique that optimises the use of seismic data through manipulation of the image using a geological rule base; the approach can be readily used in routine interpretation and saves time by quickly focusing effort on fruitful interpretational models and by increasing confidence in picking in poor-data areas and in complex structure.
Abstract: Stuart Bland, Paul Griffiths and Dan Hodge of Midland Valley Exploration, Glasgow, Scotland discuss a new conceptual model for understanding the development of structures. This paper presents a technique that optimises the use of seismic data through manipulation of the image using a geological rule base. The approach can be readily used in routine interpretation and saves time by quickly focusing effort on fruitful interpretational models and by increasing confidence in picking in poor data areas and in complex structure. Seismic imaging is a primary source of information used in the exploration of hydrocarbons. Analogies have been drawn between the uses of seismic in exploration and production (E&P) and that of medical imaging in healthcare. There are similarities in the core functions of the seismic interpreter and the radiologist: both rely on high resolution images as 2D sections or 3D models to reveal what can’t be observed directly. In both disciplines the key aims are to note salient features in order to produce an accurate diagnosis of the situation and to advise others of their conclusions. Neither driller nor surgeon will appreciate surprises and will expect to have been apprised of critical factors and potential risks. The surgeon is interested in the location and most efficient route. Likewise the drilling engineer needs to define the target in the hydrocarbon accumulation. However, in both scenarios, whatever leading-edge technology is applied, the outcome is dependent on the interpretation of the data that, until drilling or surgery, remains an estimation of reality. At a fundamental level in hydrocarbon exploration, the interpretation of data is the product of a continual stream of decisions - ‘What does the horizon look like?’, ‘Can it be correlated across faults?’, ‘Where do the faults terminate?’, ‘Are the faults linked?’, ‘Is the horizon folded?’ One technique available to help the geophysicist is to flatten the seismic on key marker horizons. This is the digital version of the interpreter taking a folded paper section and overlaying one part on another to check character and correlation. Taking this technique a stage further, we can use it to mimic simple deformations where flat-layered rocks become folded or faulted. Since horizons are both spatial and temporal objects - they are defined by geometry and age - horizon flattening can reveal significant features present at a particular time. Unfortunately this process has a number of drawbacks that require the interpreter to overlook distortions in the image, artefacts of the flattening process. These artefacts can arise where the horizon is interpolated across a fault or, more generally, because the flattening does not replicate the deformation observed in the section. These artefacts can significantly mislead the interpreter if not recognised. Where the medical doctor can refer to records to gain an insight into the patient’s medical history, the geologist can restore the section to understand its evolution. By using structural restoration to sequentially remove the effects of sediment compaction, isostatic adjustment, faulting and fault-related folding that have altered the present-day section since deposition, we have a geologically valid way of looking at the history of the development of our structure while referencing the seismic image of the present day.
Structural validation aids the decision process between alternative interpretations by testing the results within the framework of our understanding of geological history and evolution. Inclusion of the seismic enables validation of the geohistory within the context of the data. Three case studies are presented to illustrate the techniques involved in restoring the seismic image and the benefits of adopting this approach. Each case study has a distinctive setting, characteristics, key issues and associated risks. The first example is set within an extensional fault system of the Gullfaks, northern North Sea, and depicts an untested interpretation. The second, an inverted series of half-grabens in the southern North Sea, typifies the problem of degrading seismic quality at depth. The final case study is taken from a foreland thrust basin in the Alberta Foothills, Canada. Each example demonstrates an enhanced level of detail and reduced risk of error in the final interpretation from apparently simple structures.
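As a footnote to the flattening discussion above, the 'digital paper-folding' step reduces to a per-trace bulk shift; the minimal sketch below makes that concrete, and its crude wrap-around shifting is exactly the kind of artefact the authors contrast with geologically valid restoration.

```python
import numpy as np

def flatten(section, horizon_samples):
    """Shift each trace so the picked horizon becomes flat at its median time.
    np.roll wraps samples around the trace ends - a flattening artefact."""
    datum = int(np.median(horizon_samples))
    out = np.empty_like(section)
    for ix in range(section.shape[1]):
        out[:, ix] = np.roll(section[:, ix], datum - int(horizon_samples[ix]))
    return out

section = np.random.default_rng(5).normal(size=(100, 40))       # nt x nx image
horizon = 50 + (10 * np.sin(np.linspace(0, 3, 40))).astype(int)  # picked horizon
flat = flatten(section, horizon)
```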

Journal ArticleDOI
TL;DR: In this article, a 3D seismic survey was carried out in the study area with one of the objectives being to map the lateral extent of thin pay sands exactly, and the seismic attributes extracted from the 3D data volume corresponding to unit 12 were utilized to generate a net thickness sand map of this unit using an artificial neural network technique.
Abstract: Multilayered Hazad sands of Middle Eocene age deposited in a deltaic environment are the main hydrocarbon producers in the south Cambay basin, India. These sands are broadly divided into 12 units (1 to 12) from bottom to top. These individual sand units are further subdivided into smaller subunits which are selectively charged and produce hydrocarbons in different parts of the south Cambay Basin. The study area covers a part of the Gandhar oilfield situated in the Broach-Jambusar block of the basin where subunits 3A and 12A have produced oil and gas in commercial quantities. Due to its widespread deposition and greater thickness, unit 3A was delineated and developed, based on conventional interpretation of 2D seismic data. The development strategy for unit 12A could not be completely resolved in the absence of a precise sand geometry map. A 3D seismic survey was carried out in the study area with one of the objectives being to map the lateral extent of thin pay sands exactly. According to the well data, the thickness of unit 12, consisting of subunits 12A, B and C, varies from 0 to 18 m, and wherever unit 12 is thicker, subunit 12A is present. Unit 12 is not resolved in the available seismic data but its seismic response is detectable. Synthetic seismic modelling has been carried out for a better understanding of the seismic response and for precise calibration. The seismic attributes extracted from the 3D data volume corresponding to unit 12 are utilized to generate a net thickness sand map of this unit using an artificial neural network technique. The sand geometry map of subunit 12A was prepared with the help of a net thickness map of unit 12 and well data. The thickness map of unit 12 helped in mapping the precise sand geometry of unit 12A and also an additional area for cost-effective exploration and development of this pay, which in turn improves the in-place reserves.
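The attribute-to-net-sand mapping can be sketched as a small regression network trained at wells and applied map-wide, assuming scikit-learn; the attributes and thickness values are synthetic stand-ins, with only the 0-18 m range borrowed from the text.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
X_wells = rng.normal(size=(30, 5))  # 5 seismic attributes extracted at 30 wells
thickness = np.clip(9.0 + 4.0 * X_wells[:, 0] - 2.0 * X_wells[:, 1]
                    + rng.normal(0.0, 0.5, 30), 0.0, 18.0)  # net sand, 0-18 m

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X_wells, thickness)
X_map = rng.normal(size=(1000, 5))  # the same attributes at map locations
sand_map = net.predict(X_map)       # predicted net-sand thickness map
```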