
Showing papers in "First Break in 2018"


Journal ArticleDOI
TL;DR: In this article, the authors provide a detailed overview of the history of nodal systems and the implications of their use for seismic survey acquisition logistics and make some suggestions about where future developments should lie.
Abstract: Within the last two years six new land seismic nodal acquisition systems have been launched, a pace unmatched even during the oil boom of the late 2000s/early 2010s. Any acquisition system whose recording instrumentation does not incorporate cables is often referred to as a nodal system. Some instruments, however, are beginning to blur what initially appears to be a clear boundary. For example, the U-Node system from Seismic Instruments utilizes a node that records data from up to 24 different channels that can then be stored locally or transmitted via Wi-Fi to a central system. The reduction in cabling, which is usually cited as the core advantage of nodal recording, is therefore limited to the backbone connecting the central recording system to the field recording units. In this paper we concentrate on nodes that are designed to record data from a single point and are thus typically limited to six or fewer channels. Over the history of nodes, six major categories have been developed (listed roughly in the order of their introduction):

1. Delayed data – data is stored on the node and then transmitted after each shot, or stacked series of shots, is completed.
2. Remote-controlled – data is recorded internally but recording is initiated via radio messages.
3. Remote-synchronized – data is recorded continuously but the timing signal is issued via radio.
4. Real-time data – data is transmitted in real time.
5. Real-time QC – status and/or quality control data is transmitted in real time.
6. Blind – the node records data internally and does not provide status or QC information in real time.

In this article we look at how nodal systems have evolved over the last 50 years. We begin with a historical overview starting from the early 1970s and finishing in 2015. We then introduce the latest nodal systems and look at the implications of their use for seismic survey acquisition logistics.
Finally, we will discuss the implications of these new systems and make some suggestions about where future developments should lie.

29 citations


Journal ArticleDOI
TL;DR: The first encounter of the Pre-Salt Aptian carbonate reservoirs (the Barra Velha Fm) in 2005, in the Parati field, Santos Basin, was followed by additional discoveries such as the multi-billion barrel Lula (Tupi) field, as discussed by the authors.
Abstract: The first encounter of the Pre-Salt Aptian ‘Microbialite’ carbonate reservoirs (the Barra Velha Fm) in 2005, in the Parati field, Santos Basin, was followed by additional discoveries such as the multi-billion barrel Lula (Tupi) field. Now, nearly 30 more discoveries have been made in the basin such as Libra and Sapinhoa, with recoverable reserves estimated as >30 BBOE, according to ANP. In addition, discoveries have been made in the adjacent Campos Basin, including the Pao de Acucar field (Viera de Luca et al., 2017), where the same unit is known as the Macabu Fm., and also in the Kwanza Basin, West Africa (Saller et al., 2016). After deposition in a late rift setting, the Barra Velha Fm and its equivalents were buried by more than 1 km of marine origin evaporites of the Ariri Fm and its equivalents, as the Albian Ocean seeped and poured into the basin.

27 citations


Journal ArticleDOI
TL;DR: In this article, the Maupasacq experiment, a large passive seismic survey, has been launched in the Mauleon basin, SW France, where a dense seismic network of 417 three-component (3C) sensors was deployed in an area of approximately 1500 km2.
Abstract: Passive Seismic is a broad term, incorporating various techniques and methodologies, which all exploit some part of the seismic signal that naturally exists or occurs in the Earth’s subsurface. This signal may differ significantly in the form and/or the provenance (e.g. earthquakes, ambient seismic noise, etc.), as well as the frequency content and, subsequently, the part of the subsurface on which it may carry useful information. People involved in Passive Seismic often encounter the question: ‘What type of instrument is suitable for a passive seismic survey?’. Passive Seismic instrumentation usually consists of three-component seismic sensors, which mainly differ in the frequency range they are able to record (broad-band, short-period or geophone nodes). Having its roots in seismology, where traditionally broadband stations have been used for decades, but heading towards exploration, where instrumentation has to be cost-efficient and easy to handle in order to permit adoption at reservoir scale, Passive Seismic instrumentation still tries to strike a balance between cost and bandwidth. With the variability of Passive Seismic methodologies and instruments in mind, the Maupasacq experiment, a large passive seismic survey, has been launched in the Mauleon basin, SW France. The scope of the experiment was to image the area of interest by jointly applying a number of passive seismic methodologies, each one contributing to the final image with a different piece of useful information. The area of interest was carefully selected as, on the one hand, the Mauleon basin corresponds to a former Cretaceous hyper-extended rift, inverted during the Pyrenean orogeny, and, on the other hand, it provides a means of evaluating the results acquired, as an abundance of geological and geophysical data already exist in the area. In this context, a dense seismic network of 417 three-component (3C) sensors was deployed in an area of approximately 1500 km2.
In addition to those, 24 peripheral stations were also installed in an outer ring, extending the survey area to 3500 km2. The network operated continuously for a recording period of six months (from April to October 2017) and consisted of three different types of seismic stations: 190 geophone nodes (SG-10 3C SERCEL), 197 3C Seismotech short-period stations and 54 broadband stations (Guralp CMG40, Trillium Compact and Trillium 120). This heterogeneity, beyond imposing the difficulty of jointly processing data recorded by different types of instruments, with different instrument responses and data formats, also permitted an evaluation of the suitability of each instrument type for each of the passive seismic methodologies applied. This evaluation was performed in the course of an initial quality control (QC) procedure on the acquired dataset, permitting the extraction of valuable conclusions on the performance of different types of instruments operating in the same area during the same period of time. The main aspects evaluated were the acquired signal itself, as well as the frequency content of the recordings, in various circumstances (i.e. the occurrence of a local earthquake or a teleseismic event). In addition, the energy of a passive seismic source appears to play a major role in defining the ‘real’ recording limits of each type of instrument.
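A first-order instrument QC of the kind described above can be sketched as a band-limited spectral-energy comparison. The snippet below is an illustrative numpy sketch on synthetic data, not the survey's actual QC code; the helper name and the instrument responses are assumptions:

```python
import numpy as np

def band_energy_fraction(trace, dt, f_lo, f_hi):
    """Fraction of a recording's spectral energy inside [f_lo, f_hi] Hz,
    a simple QC metric for comparing instrument bandwidths."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum() / spec.sum()

# Synthetic comparison: both instruments record the same 0.05 Hz
# teleseismic-style signal plus 5 Hz local energy, but the geophone-like
# recording strongly attenuates the long-period component.
dt = 0.01
t = np.arange(0, 600, dt)
low_f = np.sin(2 * np.pi * 0.05 * t)       # long-period teleseismic energy
high_f = 0.2 * np.sin(2 * np.pi * 5.0 * t) # local/ambient energy
broadband = low_f + high_f
geophone = 0.01 * low_f + high_f           # crude model of a high corner frequency

print(band_energy_fraction(broadband, dt, 0.0, 0.1))  # most energy below 0.1 Hz
print(band_energy_fraction(geophone, dt, 0.0, 0.1))   # almost none
```

Comparing such fractions per station and per frequency band is one simple, automatable way to flag which instrument types actually capture a given class of passive source.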

20 citations


Journal ArticleDOI
TL;DR: In this paper, a joint 3D inversion workflow incorporating the production field model as a structural reference is presented to derive mutually consistent subsurface resistivity, density and velocity distributions, as well as relocated MEQ events.
Abstract: Darajat is a vapor-dominated, producing geothermal field in West Java, Indonesia. Located along a range of Quaternary volcanic centres, it is associated with an eroded andesitic stratovolcano, and its reservoir is predominantly comprised of thick lava flows and intrusions in a stratovolcano central facies (Rejeki et al., 2010). First production from the field was started in 1994 with the installation of a 55 MW plant, and capacity was added in 2000 and 2007 to bring the total to 271 MW. Several ground geophysics data sets have been acquired during successive surveys – including gravity and magnetotelluric (MT) surveys, and continuing micro-earthquake (MEQ) monitoring (Soyer et al., 2017). While each survey was independently modelled and interpreted, a quantitatively integrated 3D inversion modelling study had not been undertaken. We present a joint 3D inversion workflow, incorporating the production field model as a structural reference in order to derive mutually consistent subsurface resistivity, density and velocity distributions, as well as relocated MEQ events.

18 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the ground-roll noise with low apparent velocity was partially attenuated by field arrays, and reflection events with high apparent velocity are strong and can be reliably identified.
Abstract: Modern land seismic data acquisition is moving from sparse grids of large source/receiver arrays to denser grids of smaller arrays or point-source, point-receiver systems. Large arrays were designed to attenuate ground-roll and backscattered noise and to increase overall signal-to-noise ratio (SNR). An example of a typical raw common-shot gather acquired using a legacy acquisition design with 72 geophones in a group and five vibrators per sweep is shown in Figure 1b. We can clearly see that the ground-roll noise with low apparent velocity was partially attenuated by field arrays, and reflection events with high apparent velocity are strong and can be reliably identified. Decreasing the size of field arrays during acquisition in arid environments leads to a dramatic decrease in data SNR. An example of a raw common-shot gather from a 2D line acquired using a single-sensor survey is shown in Figure 1a. In contrast to legacy data, the single-sensor data is dominated by noise caused by severe multiple scattering in complex near-surface layers and shows no apparent evidence of reflection signal. The sources and receivers were spaced at 10 m intervals in this recent 2D single-sensor survey. This sampling involves much denser acquisition compared to the conventional data using intervals of 30 m or more. Theoretically, high-density seismic acquisition better samples the entire wavefield (signal and noise) and is expected to result in improved imaging. Achieving this in practice with huge amounts of low SNR data proves to be very challenging. Conventional time processing tools, such as surface-consistent scaling, deconvolution and static corrections, require reliable prestack signal in the data. Their application to modern seismic datasets acquired with small arrays often leads to unreliable results because the derived operators are based on noise and not on the expected signal.
To extract the maximum value from dense high-channel acquisition, we need to enhance signal in the prestack data. Fortunately, densely sampled data gives us more flexibility than grouping geophones directly in the field.
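Digital group forming — summing adjacent single-sensor traces in processing rather than hard-wiring geophone arrays in the field — is the simplest illustration of that flexibility. A minimal numpy sketch on synthetic data (the function name and numbers are illustrative, not from the paper):

```python
import numpy as np

def form_groups(traces, group_size):
    """Digitally emulate a field array by averaging adjacent
    single-sensor traces (simple, unweighted group forming)."""
    n_traces, n_samples = traces.shape
    n_groups = n_traces // group_size
    grouped = traces[:n_groups * group_size].reshape(n_groups, group_size, n_samples)
    return grouped.mean(axis=1)

# Synthetic gather: identical reflection signal plus incoherent noise on
# 72 single sensors (mimicking the 72-geophone legacy group size).
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 30 * np.linspace(0, 0.2, 200))   # 30 Hz wavelet
traces = signal + rng.normal(0.0, 1.0, size=(72, 200))       # SNR well below 1
stacked = form_groups(traces, group_size=72)                 # one digital "array"

noise_before = np.std(traces[0] - signal)
noise_after = np.std(stacked[0] - signal)
print(noise_before / noise_after)  # incoherent noise drops by roughly sqrt(72)
```

Unlike a hard-wired field array, the same dense data can be re-grouped with any size, taper or moveout correction during processing, which is exactly the flexibility the authors point to.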

18 citations


Journal ArticleDOI
TL;DR: While the Rio Del Rey (RDR) Basin is mature, the offshore Douala/Kribi-Campo (DKC) Basin remains relatively underexplored, with only 23 wells drilled in an area larger than 10,000 km2, as discussed by the authors.
Abstract: The recently announced Cameroon licence round running until 29 June 2018 provides oil companies with a significant opportunity to acquire large swathes of acreage in the Douala/Kribi-Campo (DKC) and Rio Del Rey (RDR) Basins. The DKC Basin is divided into two sub-basins, the Douala Sub-basin in the north and the Kribi-Campo Sub-basin in the south. The RDR basin, situated at the toe of the Niger Delta (Figure 1), is a mature basin with significant infrastructure and production. In contrast, the DKC Basin, which is the focus of this paper, is relatively underexplored, yet there are marked grounds for optimism in its petroleum potential. The DKC Basin is separated from the RDR Basin by the Cameroon Volcanic Line (Figure 1) and is the northernmost basin formed during rifting and separation of the South Atlantic conjugate margins, a province harbouring prolific hydrocarbon accumulations. Exploration in the offshore Douala Sub-basin began in the 1960s but the first well drilled, Nyong Marine-1, was dry. It was not until the late 1970s, with the drilling of Sanaga Sud A-1 and the discovery of gas-condensate within the Aptian-Albian-aged Mundeck Formation, that exploration accelerated with the drilling of 11 discoveries in the Cretaceous, two with secondary Tertiary reservoirs. This exploration phase was primarily focused in the shallow water Kribi-Campo Sub-basin, targeting Cretaceous tilted fault-block plays, which also extend onshore, where oil is currently produced from the overlying Cretaceous Logbadjeck Formation in the Mvia Field (Figure 2). Despite the early success, production was not established until 1997 in the Ebome Field. Today, more than 1 million bbls of oil is produced annually from the Kribi-Campo Sub-basin (SNH Production Figures, 2016). The rest of the offshore DKC Basin is relatively underexplored with only 23 wells drilled in an area larger than 10,000 km2. 
Wells have targeted a range of reservoir intervals from the Miocene to Upper Cretaceous across a variety of stratigraphic, structural and combination traps (Figure 3). Upper Cretaceous reservoir-quality sands were penetrated by the hydrocarbon-bearing Sapele-1 well in the Etinde Exploration Block and Cheetah-1 in the Tilapia Block, although sands in the latter well were thinner than initially postulated. Strong oil shows were encountered in turbidite sands in well CM-1A in the Elombo Block. Beyond the shelf, deepwater wells Eboni-1 and Bamboo-1 (Elombo and Ntem Blocks) also targeted Upper Cretaceous sands but were water wet. Tertiary sands have been penetrated by a number of wells in the Tilapia Block, including Coco Marine-1, which flowed 34° API oil at a rate of 3000 bopd from the Paleocene. Oil, gas and condensate have all been recovered from various Miocene channel and fan sands across the basin, most notably from Noble’s successful YoYo discovery. This is an extension of the play behind the Yolanda-1 discovery and the producing Alen and Aseng Fields in Equatorial Guinea (Figure 2). Despite evidence for more than one working petroleum system (Tertiary and Cretaceous), hydrocarbons flowed to the surface in only a handful of wells outside the Kribi-Campo Sub-basin, and the YoYo discovery looks on course to be put into production. The next phase of exploration in the Douala Sub-basin undoubtedly requires an understanding of the reasons behind this limited exploration drilling success. Previous operators have all suspected that there are challenges with source rock, timing of migration, and presence of reservoir, seal and trap, but never consistently across all wells. Deciphering reasons for the apparent low well success requires a review of all exploration data in a regional geological context beyond the limits of individual blocks.

16 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss new data processing routines and methods to unravel the conductivity-depth-distribution, induced polarization and magnetic susceptibility, and joint interpretation with geochemistry as key elements to map and evaluate massive sulfide (SMS) deposits.
Abstract: Since the early discovery of a black-smoker complex in 1978 on the East Pacific Rise at 21°N, speculation and expectation have grown about the potential and perspectives of mining seafloor massive sulfide (SMS) deposits in the deep ocean. With worldwide accelerating industrialization, emerging markets, increased commodity prices and metal demand, and advancements in deep-water mining and extraction technologies, mining of SMS may become economically feasible in the near future (Kowalczyk, 2008). However, we still know little about the resource potential of SMS deposits, and the development of geophysical methods for an assessment of their spatial extent, composition, and inner structure is crucial to derive a proper assessment of their economic value. Novel geophysical mapping techniques and exploration strategies are required to locate extinct and buried clusters of SMS deposits, which lie away from the active vent fields and may hold larger economic potential, but are difficult to find and sample by conventional methods. In 2015 the International Seabed Authority (ISA) assigned an exploration license for polymetallic sulfide deposits to the German Federal Institute for Geosciences and Natural Resources (BGR) in a specified area comprising 100 patches, each 10 × 10 km in size, distributed along the Central and Southeastern Indian Ridge. The challenge of acquiring high-resolution near-surface electromagnetic (EM) data in such geologically and morphologically complex mid-ocean ridge environments has been addressed by our recent development of the deep-sea profiler Golden Eye, which utilizes a frequency-domain electromagnetic (FDEM) central loop sensor of 3.3 m diameter (Muller et al., 2016). This system has been used in 2015 and 2017 to map active and relict hydrothermal vent fields in the SMS licensing areas.
Aside from technological developments, this paper discusses new data processing routines and methods to unravel the conductivity-depth-distribution, induced polarization and magnetic susceptibility, and joint interpretation with geochemistry as key elements to map and evaluate SMS deposits.
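The depth sensitivity of a frequency-domain EM system like this is governed, to first order, by the electromagnetic skin depth in a conductive half-space — a standard relation, not specific to the Golden Eye profiler. A quick numerical illustration (the conductivity values are rough assumptions):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space [H/m]

def skin_depth(sigma, freq):
    """EM skin depth [m] in a uniform half-space of conductivity sigma [S/m]
    at frequency freq [Hz]: sqrt(2 / (omega * mu0 * sigma))."""
    return np.sqrt(2.0 / (2.0 * np.pi * freq * MU0 * sigma))

# Rough contrast at 1 kHz: seawater-saturated sediments (~1 S/m)
# versus conductive massive sulfides (~10 S/m).
print(round(skin_depth(1.0, 1000.0), 1))   # ≈ 15.9 m
print(round(skin_depth(10.0, 1000.0), 1))  # ≈ 5.0 m
```

Lower frequencies see deeper but with coarser resolution, which is why multi-frequency central-loop soundings are inverted for a conductivity-depth distribution rather than interpreted at a single frequency.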

13 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe how real-time microseismic detection at offshore hydrocarbon fields is becoming a standard monitoring tool, driven by increased focus on injection and overburden surveillance for improved health, safety and environment (HSE) and cost savings.
Abstract: Real-time microseismic detection at offshore hydrocarbon fields is on its way to becoming a standard monitoring tool. Recently, increased focus on injection and overburden surveillance for improved health, safety and environment (HSE) and cost savings has led to this development. Several hydrocarbon fields are already equipped with permanent reservoir monitoring (PRM) systems with seismic sensors permanently installed at the seafloor (Caldwell et al., 2015), and similar installations are planned or under consideration for some other offshore fields. PRM systems are in principle designed for acquiring active time-lapse seismic data 1-2 times per year, and as such, they are not used during most of their lifetime. But apart from active seismic, PRM systems can also be used for recording passive seismic data. With appropriate processing and analysis methods, such as microseismic event detection, the continuous stream of passive data can be converted into useful real-time subsurface information. This results in improved HSE, and therefore a more valuable PRM system. In 2014, a mini-PRM system with 172 multi-component sensors was installed at the Oseberg field, offshore Norway. The mini-PRM system indeed demonstrated the feasibility of injection and overburden surveillance using real-time passive seismic (Bussat et al., 2016). Despite high installation costs, a cost/benefit evaluation shows net benefits for the Oseberg system owing to better control on waste injection rates. The next step, and topic of the current paper, is to scale up and transfer the learnings from the small Oseberg system to the large Grane PRM system, so as to enable processing and analysis of large amounts of passive data and allow for real-time injection monitoring at Grane. Compared to the Oseberg case, the microseismic data processing is distributed and optimized to be able to use many more sensors and monitor larger subsurface volumes with increased resolution.
Moreover, the detection method is improved from an originally strict semblance-based method to also include signal-to-noise (S/N) analysis of the semblance-weighted stack. Crucial noise filtering is integrated into the real-time processing flow, enhancing the sensitivity of the system and ensuring the best possible detection/localization at any time. Key personnel receive an alert immediately after an event detection. This makes the monitoring fully integrated into the follow-up of the daily operation.
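The semblance measure that such detection builds on can be sketched in a few lines. The following is illustrative numpy code on a synthetic, already moveout-aligned gather — not the authors' production implementation:

```python
import numpy as np

def semblance(window):
    """Semblance of an aligned (n_traces, n_samples) data window:
    energy of the stack divided by n_traces times the total trace energy.
    Ranges from about 1/n_traces for incoherent noise to 1 for identical traces."""
    stack = window.sum(axis=0)
    den = window.shape[0] * (window ** 2).sum()
    return (stack ** 2).sum() / den if den > 0 else 0.0

rng = np.random.default_rng(1)
n_traces, n_samples = 20, 64
wavelet = np.sin(2 * np.pi * np.arange(n_samples) / 16)

# A coherent event (same wavelet on every trace, mild noise) vs pure noise.
coherent = np.tile(wavelet, (n_traces, 1)) + 0.1 * rng.normal(size=(n_traces, n_samples))
noise = rng.normal(size=(n_traces, n_samples))

print(semblance(coherent))  # close to 1
print(semblance(noise))     # close to 1/20
```

Weighting the stack by this semblance value, and then checking the S/N of the resulting stack, is the kind of two-stage criterion the text describes: coherence alone can trigger on correlated noise, so an additional amplitude-based check improves robustness.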

9 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose microseismic monitoring of directivity as the most promising way to determine the orientation of fault planes and associated slip vectors, which would lead to a more precise estimation of hydrocarbon production and give greater value to microseismic data.
Abstract: Currently, there are four widely discussed theories used to describe how microseismicity interacts with hydraulic fracturing. Each theory has a different implication for the interpretation of microseismicity used for reservoir modelling. Therefore, better understanding of the relationship between microseismicity and hydraulic fracture stimulation is needed before further reservoir models are developed and applied. This would lead to a more precise estimation of hydrocarbon production and give greater value to microseismic data. We may use either seismic or non-seismic methods. While non-seismic methods provide an independent view of hydraulic fracturing they only provide a limited amount of information on the relationship between hydraulic fracturing and microseismicity. We propose microseismic monitoring of directivity as the most promising way to determine the orientation of fault planes and associated slip vectors. Although this is a suitable method it requires sensors in multiple azimuths that are well coupled to obtain reliable high frequency signals. We suggest using Distributed Acoustic Sensing (DAS) sensors which are capable of sampling high frequencies and may provide continuous data along long offsets at reasonable costs. Hydraulic fracturing stimulation is accompanied by induced microseismic events resulting from the reactivation of pre-existing fractures or the creation of new fractures (e.g., Grechka and Heigl, 2017). Locations of microseismic events are then used to map the fracture geometry: the direction of fracture propagation, fracture length and height. Numerous authors and companies try to convert the measured microseismic information into estimations of reservoir production.
These approaches use microseismicity to constrain linear and non-linear diffusion (e.g., Grechka et al., 2010), discrete fracture networks (Williams-Stroud et al., 2013), tensile opening of hydraulic fracture (Baig and Urbancic, 2010), or bedding plane slip (Rutledge et al., 2013; Stanek and Eisner, 2013). Many of these approaches aim to directly map microseismicity to production prediction and other highly valuable information but the reality of the current state of the art is that we do not know the exact nature of microseismicity and hydraulic fracture interaction with microseismicity. Therefore, many of the reservoir simulators based on microseismicity are subject to significant uncertainty.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate the advanced capabilities of electromagnetic techniques (magnetotelluric sounding, direct current, time-domain EM (TDEM), induced polarization (IP), controlled source EM, etc.) in solving a wide range of problems, especially those which seismic surveys are not effective in solving.
Abstract: Exploration and prospecting for hydrocarbons (HC) has traditionally been carried out using seismic techniques. At the same time, it is well known that seismic techniques are inefficient in the presence of high-velocity layers (which reduce resolution at great depth), igneous rocks, thrusts within the crystalline basement, and tight limestone. Being sensitive to geological structure, seismic techniques are characterized by low resolution at the level of micro-parameters such as fluid type, porosity/fracturing, and degree of pore HC-saturation. Moreover, technical complications, e.g., highly rugged topography, dense vegetation and object remoteness may make seismic surveys difficult, expensive, or even impossible. Therefore, non-seismic methods are increasingly used in HC exploration and prospecting. In particular, electrical and electromagnetic (EM) methods (magnetotelluric sounding, direct current, time-domain EM (TDEM), induced polarization (IP), controlled source EM, etc.) complement seismic techniques and increasingly replace them (Johansen, 2008; Key, 2012; Zhang et al., 2014; Barsukov and Fainberg, 2015; Berdichevsky et al., 2015, among others). In parallel with EM, efficient technologies for 3D modelling and inversion (see, for instance, a review paper by Siripunvaraporn (2012) and references therein) and for integrated analysis of EM and other geophysical data (see, for instance, a review paper by Bedrosian (2007)) have been created. Application of these methods in solving problems of exploration geophysics has enabled progress in exploration, prospecting, and development of HC deposits (see, for instance, a review paper by Strack (2014) and references therein).
Meanwhile, very recent advances in indirect estimation of the geophysical properties of lithologic reservoirs from electromagnetic sounding data (Spichak and Goidina, 2016; Spichak and Zakharova, 2015, 2016; Spichak, 2017) open up new possibilities for more sophisticated approaches to estimating reservoir properties and assessing reservoir potential. The purpose of this paper is to demonstrate the advanced capabilities of electromagnetic techniques in solving a wide range of problems, especially those which seismic surveys are not effective in solving.

7 citations


Journal ArticleDOI
TL;DR: Spectrum has developed a systematic approach for identifying working hydrocarbon systems in undrilled or frontier basins as mentioned in this paper, which is currently being applied to all the basins in which Spectrum operates.
Abstract: Exploring in frontier basins carries with it the challenge of identifying and derisking hydrocarbon play elements where well data, and consequently lithological and stratigraphic information, are often sparse to absent. In this setting, seismic data will typically be the only source of information available to identify potential play fairways and derisk the corresponding petroleum system elements. Until recently, the exploration focus of seismic interpretation has predominantly been on developing methodologies to identify structure, traps and reservoir rather than interrogate source rocks. Geophysical deconstruction of the data again focuses on categorizing hydrocarbon-bearing reservoirs, including AVO/AVA (Amplitude vs Offset/Angle) analysis, bright spot/dim spot and flat spot identification. Yet the lack of focus on source is curious as, particularly in frontier basins, the ability to derisk presence and effectiveness (total organic carbon percentage (TOC%) and maturity) of a viable source is key. Therefore, a study has been undertaken to evaluate a number of basins and to characterize the hydrocarbon system and source rocks therein. In conducting this study we have developed a workflow and characterization criteria that provide a significant advance in the ability to derisk unproven hydrocarbon plays. Spectrum has developed a systematic approach for identifying working hydrocarbon systems in undrilled or frontier basins. This workflow is currently being applied to all frontier basins in which Spectrum operates, and in this study the results from a sample of four such basins are presented, as well as how lessons learnt in a given basin can provide insight into on-trend and conjugate margins.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the general ability to monitor microseismic events in an offshore setting and present results from a real-time monitoring pilot in Norway, validating the concept of continuous real-time monitoring from a fibre-optic deep-water installation by comparing automatic detections with data from a regional seismograph network.
Abstract: To achieve high recovery rates, modern-day production management can benefit from not only snapshot images of the state of the reservoir at regular time intervals, but also continuous monitoring of the dynamic processes induced by pressure changes and fluid movement during production. Production management using time-lapse 4D snapshots is reactive, i.e., adjustments addressing the sweep efficiency or reservoir integrity can only be instigated once the next snapshot image is available after acquisition, processing and interpretation, often years later. For a more proactive reservoir management, it is important to have dynamic reservoir information in real time between the seismic time-lapse snapshots. Such information is contained in microseismic monitoring data and in surface or borehole deformation measurements. If sensors are permanently installed, this information comes at a negligible additional cost, provided that the data can be transferred to shore in real-time and processed automatically. Time-lapse 4D snapshot images are typically obtained over a period of years and are inadequately sampled for capturing dynamic reservoir changes taking place over much shorter time intervals, from hours to days. Such changes can include variations in the permeability caused by scaling or compaction (Barkved and Kristiansen, 2005), changes of the fluid phase owing to pressure variations (e.g., Osdal et al., 2006), unintended alterations of the flow paths owing to out-of-zone injections and fault reactivation (e.g., Schinelli et al., 2015), or movement in the overburden, potentially compromising the integrity of infrastructure in the form of casing failures or seafloor subsidence (e.g., Yudovich et al., 1989; Hatchell et al., 2017). 
While 4D seismic data can capture the cumulative effect of such processes by evaluating differences in still images every few years, they provide little information about when exactly the associated dynamic changes occurred and how they relate to changes in flow rate and pressure that may have been captured through continuous measurements in the wellbores accessing the reservoir. Microseismic events from within or around a producing reservoir can be indicative of reservoir fluid pathways and sub-seismic reservoir compartmentalization (e.g., Maxwell and Urbancic, 2001), or stress changes and associated production- related deformations in the vicinity (e.g., Teanby et al., 2004; Zoback and Zinke, 2002; Wuestefeld et al., 2011, 2013). Continuous monitoring of seismicity can also help in assessing deformation-related risks to infrastructure over the life of a field. Combined with pressure and flow rate, such data can provide the necessary information to capture dynamic processes in the reservoir right when they happen. In conjunction with geomechanical flow modelling, production optimization strategies can thus be validated and adjustments can be properly planned at an early stage. The result will be an improved sweep efficiency with a further increased recovery factor, as well as better risk assessment with the avoidance of potentially costly mitigation actions. A better understanding of, and continuous information about, the reservoir dynamics may even help to plan a 4D seismic strategy better. This can include better definition of suitable intervals for the acquisition of time-lapse images. These intervals could be irregular, depending on the state of reservoir development and type of recovery method. Continuous monitoring may also provide a means to high-grade areas of the reservoir for partial 4D imaging at lower cost and faster turnaround in between ‘full’ time-lapse surveys (Hatchell et al., 2013). 
In this paper we discuss the general ability to monitor microseismic events in an offshore setting and present results from a real-time monitoring pilot in Norway. We validate the concept of continuous real-time monitoring from a fibre-optic deep-water installation by comparing our automatic detections with data from a regional seismograph network.
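Automatic detection on continuous records of this kind is commonly built on an STA/LTA characteristic function (one standard choice, not necessarily this pilot's exact algorithm). A compact numpy sketch on a synthetic trace with one injected event:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Ratio of a short-term average (just ahead of sample i) to a long-term
    average (behind sample i) of absolute amplitudes; peaks at event onsets."""
    e = np.abs(trace)
    ratio = np.zeros(len(e))
    for i in range(nlta, len(e) - nsta):
        sta = e[i:i + nsta].mean()       # short window: incoming energy
        lta = e[i - nlta:i].mean()       # long window: background level
        ratio[i] = sta / (lta + 1e-12)
    return ratio

rng = np.random.default_rng(2)
trace = rng.normal(0.0, 0.1, 10000)          # continuous background noise
onset = 5000
trace[onset:onset + 200] += 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(200))

ratio = sta_lta(trace, nsta=50, nlta=500)
picks = np.where(ratio > 5.0)[0]             # simple threshold trigger
print(picks.min())                           # first trigger near the injected onset
```

In a real-time system the same logic runs on a rolling buffer per channel, and single-channel triggers are only promoted to event detections when they coincide across many sensors — which is where the semblance-weighted stacking described for the PRM systems comes in.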

Journal ArticleDOI
TL;DR: In this paper, Baeten et al. proposed to add low-frequency content to seismic data to enable waveform inversion for velocity model determination or refinement, improving the resolution of seismic data and obtaining structural information of deeper reservoirs.
Abstract: Extending the Vibroseis acquisition bandwidth towards low frequencies has become an increasing trend in land seismic exploration. It is clear that adding low-frequency content to seismic data can be beneficial for enabling waveform inversion for velocity model determination or refinement, improving the resolution of seismic data and obtaining structural information about deeper reservoirs. These geophysical benefits have been highlighted by many authors, for example Baeten et al. (2013) and Mahrooqi et al. (2012). However, using the Vibroseis method to acquire low-frequency seismic data is very challenging. Owing to mechanical and hydraulic limitations, most conventional seismic vibrators cannot produce sufficient force below 10 Hz to transmit seismic energy deep into the ground. For this reason, many sweep design techniques aimed at enhancing the low-frequency content of Vibroseis data have been developed (Bagaini 2006, 2008; Sallas 2010; Baeten 2011). These low-frequency sweeps enable conventional vibrators to shake as close as possible to their low-frequency mechanical limits. Unfortunately, this approach requires a lower drive level and hence a slower sweep rate, resulting in a longer sweep. The generation of extra low-frequency bandwidth with low-frequency sweep methods therefore usually comes at the cost of productivity.
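The trade-off described above, where reduced low-frequency drive forces a slower sweep rate and hence a longer sweep, can be made concrete with a toy dwell calculation. This is a sketch only, not any vendor's sweep design: the displacement-limited force model (~f² below a corner frequency) and the flat energy-per-Hz target are illustrative assumptions:

```python
import numpy as np

def dwell_profile(f0=2.0, f1=80.0, f_full=10.0, n=4000):
    """Relative time spent at each frequency for a 'low-dwell' sweep.
    Below f_full the usable fundamental force is assumed to be
    displacement-limited (~f^2); the sweep rate df/dt is scaled by
    force^2 so that the radiated energy per Hz stays flat."""
    f = np.linspace(f0, f1, n)
    force = np.minimum(1.0, (f / f_full) ** 2)   # normalized force limit
    rate = force ** 2                            # df/dt proportional to force^2
    t = np.concatenate(([0.0], np.cumsum(np.diff(f) / rate[:-1])))
    return f, t / t[-1]                          # normalized elapsed time

f, t = dwell_profile()
frac_below_10 = t[np.searchsorted(f, 10.0)]      # share of sweep time below 10 Hz
frac_linear = (10.0 - 2.0) / (80.0 - 2.0)        # same share for a linear sweep
```

For this toy model most of the sweep duration ends up below 10 Hz, compared with roughly a tenth for a linear sweep over the same band, which is why pushing the bandwidth downward lengthens the sweep and reduces productivity.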

Journal ArticleDOI
TL;DR: However, in some cases it is not possible to repeat the survey geometries between vintages. When the geometry differences are small, corrections can be made during data processing by including steps such as 4D binning, which aim to preserve those seismic traces that are associated with the smallest variation in source and receiver positions.
Abstract: Successful time-lapse (or 4D seismic) studies require special care when it comes to the removal of undesirable artifacts caused by the differences in acquisition geometries. By attempting to repeat the source and receiver geometries between surveys as precisely as possible, any subsequent 4D noise is minimized so that subtle seismic signal variation induced by reservoir production can be detected. It is commonly accepted that the required repeatability accuracy is directly linked to the desired sensitivity and resolution of the 4D signal. Illumination studies prior to any 4D experiments ensure that the reservoir is illuminated in as identical a fashion as possible between base and monitor survey so that the desired 4D effects can be recovered. In fact, it is common practice to plan and design 4D surveys with optimal acquisition repeatability in mind. However, in some cases it is not possible to repeat the survey geometries between vintages. When the geometry differences are small, corrections can be made during data processing by including steps such as 4D binning, which aim to preserve those seismic traces that are associated with the smallest variation in source and receiver positions. The process of 4D binning is particularly effective when the acquisition for both the base and monitor survey are very similar, such as streamer on streamer or OBS (Ocean Bottom Seismic) on OBS surveys. Nevertheless, 4D binning does not perform well when both acquisition vintages comprise significant differences in their respective source and receiver positions. This is for example the case when different streamer acquisition azimuths are involved, when large cable feathering differences at long offsets are observed, or indeed when a towed streamer survey is to be compared with an OBS acquisition.
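The 4D binning step described above can be sketched as a nearest-pair selection per subsurface bin: keep the base/monitor trace pair whose combined source-plus-receiver position difference is smallest. The coordinates below are hypothetical and the brute-force search is a toy, not a production algorithm:

```python
import numpy as np

def pick_4d_pair(base, monitor):
    """For one subsurface bin, return the (base, monitor) trace indices
    whose source and receiver positions differ least (minimum dS + dR)."""
    best, best_cost = None, np.inf
    for i, (src_b, rcv_b) in enumerate(base):
        for j, (src_m, rcv_m) in enumerate(monitor):
            cost = (np.hypot(*np.subtract(src_b, src_m)) +
                    np.hypot(*np.subtract(rcv_b, rcv_m)))
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best, best_cost

# traces falling in one bin: (source_xy, receiver_xy) per trace, in metres
base = [((0.0, 0.0), (100.0, 0.0)), ((5.0, 0.0), (105.0, 0.0))]
monitor = [((6.0, 0.0), (104.0, 0.0)), ((50.0, 0.0), (160.0, 0.0))]
pair, cost = pick_4d_pair(base, monitor)
```

Here the second base trace pairs with the first monitor trace (1 m source shift plus 1 m receiver shift), while the poorly repeated monitor trace is discarded — exactly the trace-selection behaviour the abstract describes.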

Journal ArticleDOI
TL;DR: In this paper, the authors describe a new comprehensive high-density experimental project to readdress these ever-challenging seismic issues by imaging the reservoir from both above and within existing boreholes.
Abstract: Imaging through a heterogeneous shallow gas-charged overburden, such as a gas cloud, presents several imaging challenges and is a demanding problem to solve. Our preferred technical solution for imaging beneath gas clouds is to utilize converted wave imaging (Radzi et al., 2015), but this is not always available or cost effective and velocity model building is still difficult. Many previous case studies have been produced from Malaysia which demonstrate subsurface imaging techniques and improvements for fields affected by gas clouds, e.g., Akalin et al. (2010); El Kady et al. (2012); Abd Rahim et al. (2013); Ghazali et al. (2016) and Gudipati et al. (2018). In this paper, we describe a new comprehensive high-density experimental project to readdress these ever-challenging seismic issues by imaging the reservoir from both above and within existing boreholes. The integration of multiple technologies has significantly improved the subsurface images of the field including better-quality velocity models below gas clouds. The new data reveal a larger scale of near-surface heterogeneities than previously expected and future studies will selectively reprocess subsets of the acquired data in order to optimize the images; and, by extension to other similar fields, address a cost-effective imaging strategy.

Journal ArticleDOI
TL;DR: For example, this article showed that high rates of felt induced events in previously quiet areas are now considered critical for public safety and social licence to operate (Petersen et al., 2016).
Abstract: General awareness of induced seismicity and gas leakage related to energy reservoir exploitation has been on the rise for several years (McGarr, 2002; Davies et al., 2013). This includes events from short-term fluid injection for reservoir stimulation (Giardini, 2009; Atkinson, 2016), long-term hydrocarbon extraction (Van Thienen-Visser and Breunese, 2015), underground storage of natural gas (Cesca et al., 2014), waste water (Ellsworth, 2013), and carbon dioxide (Zoback and Gorelick, 2012). High rates of felt induced events in previously quiet areas are now considered critical for public safety and social licence to operate (Petersen et al., 2016). Concerns about contamination of ground water and climate effects have followed suit (Darrah et al., 2014; Davies et al., 2014).

Journal ArticleDOI
TL;DR: An unmanned aircraft system (UAS) for aeromagnetic surveying has been developed on the platform of Geoscan-401 rotary-wing unmanned aerial vehicle as mentioned in this paper.
Abstract: An unmanned aircraft system (UAS) for aeromagnetic surveying has been developed on the platform of the Geoscan-401 rotary-wing unmanned aerial vehicle. The UAS includes a light rubidium vapor magnetometer (RVM) and an additional differential GPS placed on a loop attached to the copter’s body by a 50-m cable. In operation mode, the aerodynamic design of the loop keeps it in a horizontal position. To define the metrological characteristics of the RVM, a series of tests was conducted using commercially available magnetometers and a calibration test station. In September 2016, two experimental aeromagnetic surveys with the UAS were conducted over an area of 0.7 sq. km located 30 km north of Saint Petersburg, Russia. Approximately 32 km of survey lines were flown in two flights of 40 minutes each. During the first flight (Survey 1), the UAS was kept 30 m above the ground surface (altitude 150–165 m), while the second (Survey 2) was flown at a constant altitude of 160 m (25–40 m above the surface). The variations from the nominal altitude were approximately ±2 m. The deviation from the line path reached 10 m for southward flights because of a southwest wind during the survey.

Journal ArticleDOI
TL;DR: The applications of airborne ground-penetrating radar (GPR) antennas or systems are not as widespread and well-developed as their counterparts, the conventional ground coupled antennas as mentioned in this paper.
Abstract: The applications of airborne ground-penetrating radar (GPR) antennas or systems are not as widespread and well-developed as those of their counterparts, the conventional ground-coupled antennas and systems. While airborne GPR systems are not a novelty, they are normally used in niche industries with very strict guidelines and expectations, such as the road surveying industry (Saarenketo and Scullion, 2000). Airborne antennas are, regrettably, often operated in very inefficient ways, sometimes disregarding basic physical laws or plain, fundamental principles of GPR technology. When the antenna is lifted from the ground a whole new set of problems and complexities arises and, if these are not taken into consideration, they can lead to very unreliable data and wrong interpretation of the obtained results.

Journal ArticleDOI
Anders Dræge1
TL;DR: In this article, a new and patented concept for fluid substitution that can be integrated with machine learning to provide robust and simple fluid substitution with approximately the same or better accuracy as Gassmann theory is presented.
Abstract: This study presents a new and patented concept for fluid substitution that can be integrated with machine learning to provide robust and simple fluid substitution with approximately the same or better accuracy than Gassmann theory. The method is called ‘ROck physics Fluid Substitution’ (ROFS) and integrates machine learning and rock physics. ROFS allows for rapid and simple fluid substitution that in many cases gives more physically consistent results than applied Gassmann theory. A stepwise workflow for the method is given. Comparison with Gassmann theory shows that the ROFS approach better predicts velocities in core plugs that are substituted from dry to brine-filled. When applying the method to well logs, it is also demonstrated that for highly porous rocks where the Gassmann assumptions are met, the methods give very similar results. But for intermediate-to-low porosity rocks, Gassmann theory seems to overpredict the fluid effect while the new model is more realistic. The method can be applied to both siliciclastic rocks and carbonates. By using a rock physics model for carbonates, the new method can account for the effect of microstructure variations such as pore shape variations and cracks when performing fluid substitution.
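Since ROFS itself is patented and only its workflow is outlined here, the benchmark it is compared against can be stated explicitly. The standard Gassmann relation predicts the saturated-rock bulk modulus from the dry-frame, mineral, and fluid moduli; the moduli values in the example below are illustrative, not taken from the paper:

```python
def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Saturated-rock bulk modulus from Gassmann's relation (all moduli
    in GPa); the shear modulus is unchanged by the fluid in Gassmann theory."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# illustrative sandstone: dry frame 15 GPa, quartz mineral 37 GPa, 25% porosity
k_brine = gassmann_k_sat(15.0, 37.0, 2.25, 0.25)   # brine-filled (K_fl = 2.25 GPa)
k_gas = gassmann_k_sat(15.0, 37.0, 0.02, 0.25)     # gas-filled (K_fl = 0.02 GPa)
```

As expected, brine stiffens the rock far more than gas, and it is this predicted fluid effect that the paper argues Gassmann overestimates at intermediate-to-low porosities.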

Journal ArticleDOI
TL;DR: The EAGE workshop as mentioned in this paper addressed the problem that underperformance in petroleum exploration is directly related to over-optimistic evaluations of the size of undrilled prospects.
Abstract: The age-old problem of underperformance in petroleum exploration is directly related to over-optimistic evaluations of the size of undrilled prospects. Although the assessment of geological risk in proven plays is usually not a problem, the chance of economic success is generally much lower than predicted by companies as a consequence of the deficit in expected volumes. An EAGE workshop held in Copenhagen in June 2018 addressed this issue and one of the outcomes was the recommendation that prospect evaluations are more closely linked to historical data, particularly on the downside size of discoveries.

Journal ArticleDOI
TL;DR: A group of scientists from six countries (France, Netherlands, Norway, Saudi Arabia, UK and the US) met over three days in September 2017 in Houston, Texas, to brainstorm and debate the most promising research directions needed to make breakthroughs in the areas of injectivity and capacity that currently pose challenges to carrying out large-scale (gigatonnes CO2 per year) geologic carbon sequestration.
Abstract: A group of scientists from six countries (France, Netherlands, Norway, Saudi Arabia, UK and the US) met over three days in September 2017 in Houston, Texas, to brainstorm and debate the most promising research directions needed to make breakthroughs in the areas of injectivity and capacity that currently pose challenges to carrying out large-scale (gigatonnes CO2 per year) geologic carbon sequestration. Several CO2 storage projects around the world have demonstrated the feasibility of injecting and storing CO2 at the mega-tonne per year scale. These include the long-running Sleipner project (Norway) which started in 1996 and which has stored ~17 Mt of CO2 to date, and the Illinois Basin Decatur Project (USA) which has stored approximately 1 Mt of CO2. New projects have started over the last few years, including the QUEST project in Canada, the Gorgon project in Australia, and the Industrial Carbon Capture and Storage (ICCS) project at Decatur, Illinois, which will inject 1 Mt CO2/yr. These projects along with a wealth of injection experience from the oil and gas industry over decades, supported by an extensive literature of theory and modelling analyses, provide confidence in the subsurface storage concept intrinsic to CCUS. The challenge ahead is to ramp up CCUS technology to be able to safely store CO2 at the gigatonne (Gt) per year scale to meet global CO2 emissions reductions targets. Although sufficient capacity exists in theory to store CO2 at the Gt/year scale in the continental and offshore sedimentary basins of North America, Europe, and worldwide, there are many technical challenges that need to be addressed. First, more accurate estimates of storage capacity are needed over large areas (~10³–10⁴ km²) that have been targeted for storage, with associated challenges for site characterization, monitoring and storage verification.
Second, whereas the few current projects are isolated in the given storage reservoirs and often within entire sedimentary basins, injections at the Gt/year scale must involve multiple large-scale projects potentially within tens of kilometres of one another and accessing similar stratigraphic intervals and probably similar reservoir units. To achieve this degree of scale-up, a better understanding of the permissible pressure increase in these large regions is needed. Pressurization from injection projects is known to extend from 10s to 100s of km from the injection wells, and interference among neighbouring projects is inevitable. Thus, there is the need for detailed understanding of the tolerance for pressure rise and potentially the need for pressure management. Furthermore, large-scale projects will require smart methods for controlling and optimizing CO2 injection, which will involve developing better understanding of the links between small (e.g., sub-pore and pore scale) and large-scale physical processes in the reservoir. The key technical issues, questions, and areas in need of better understanding include:
• CO2 migration and trapping processes;
• Understanding when and how caprocks fail;
• Physics- and chemistry-based understanding of CO2 flow at all scales in the reservoir and storage complex;
• Impact of flow processes on storage at multiple scales within heterogeneous rock media.
In addition to laboratory and field studies, there are many challenges that will require developments in the theory, modelling, and simulation of CO2 storage processes.
The research challenges identified by the group on Storage Injectivity and Capacity aim to exploit recent advances in the understanding of flow processes and in the use of high-performance computing, using large data sets to improve the forecasting of CO2 migration and trapping processes, the nature of pressurization and dynamic pressure limits, reservoir fracturing and dynamic geomechanical behaviour of rock units. After brainstorming the issues, the Expert Panel developed three ‘Principal Research Directions’ (PRDs) considered to be essential to the future ramp-up of CCUS to the Gt scale:
• Advancing multi-physics and multi-scale fluid flow to achieve Gt/yr capacity;
• Dynamic pressure limits for gigatonne-scale CO2 injection;
• Optimal injection of CO2 by control of the near-well environment.
These global research propositions are outlined in the report (Mission Innovation, 2017). Here we summarize the research ambitions involved. The expert group focused primarily on storage in saline aquifers and depleted oil and gas fields, as they are expected to have the largest potential for Gt-scale storage, although the concepts will be relevant to all storage options (including CO2-EOR). A key part of the learning process for globally significant scale-up of CO2 storage has been the insights gained from early-mover projects. This experience has been summarized in various monographs [e.g., Chadwick et al., 2008; Hitchon, 2012] and review papers [e.g., Jenkins et al., 2015; Pawar et al., 2015].
Some key achievements in the development of CO2 storage include:
• How seismic monitoring can be used to monitor saturation and pressure changes associated with the growth of the CO2 plume;
• How downhole pressure monitoring can be used to understand the pressure distribution and evolution at storage sites;
• Understanding the rock mechanical response to injection;
• Insights into the complexity of storage reservoirs and the impact of heterogeneity on CO2 flow paths;
• Development of optimal monitoring and risk management procedures.
These early-mover CO2 storage projects and the associated research studies demonstrate both the technical viability of CO2 storage and its challenges, while also pointing to the key technologies involved in project execution. This gives us an excellent basis for the research directions identified in this report, focused on the theme of significant scale-up via improved insights from multi-physics analysis of CO2 storage (Figure 1).


Journal ArticleDOI
TL;DR: Esestime et al. as discussed by the authors showed that after 50 years of exploration the basin is mature, by revealing a new generation of prospectivity with modern 3D acquisition and processing.
Abstract: Global exploration has seen a dramatic upturn in 2018, with close to 4 billion barrels of oil equivalent discovered in the first half of the year alone, and many exciting wildcat wells still to come. Although the oil price slide of 2014 changed the industry, with the focus now on commerciality and risk reduction, it is of interest that most of the new discoveries this year have been made in commercially challenging deep water. The reason for this is that our industry has been exploring shallow-water oil-prone basins since the invention of marine seismic methods in the 1950s, and they appear now to be ‘mature’ and depleted in material low-risk opportunities. Indeed the future of shallow-water exploration has been portrayed as binary: either mopping up around what we know or exploring new frontiers. However, in South Gabon we challenge the established idea that after 50 years of exploration the basin is mature, by revealing a new generation of prospectivity with modern 3D acquisition and processing (Figures 1 and 2). The play primarily pursued to date has been the Gamba Sandstone play, the seismic imaging of which has always been a significant challenge, owing to the complex velocities in the post-salt geology, the heterogeneity and the halokinesis of the Ezanga Salt. Yet with the exception of the Muruba-2 discovery, few wells have deliberately targeted the syn-rift section below the Gamba, leaving the pre-salt section completely unexplored (Figure 1). Diligent planning and acquisition (Esestime et al., 2017, 2018) of 3D data in South Gabon covering an area eight times the size of Greater London (11,500 km²) has enabled an imaging veil to be lifted. The intra-syn-rift is now visible and is revealing large-scale hydrocarbon prospects, capable of attracting interest from global exploration players (Figure 3).

Journal ArticleDOI
TL;DR: eSeismic as discussed by the authors is a seismic methodology based on the emission and recording of continuous source and receiver wavefields, developed in part to reduce the potential environmental impact of marine seismic acquisition.
Abstract: eSeismic is a novel seismic methodology based on the emission and recording of continuous source and receiver wavefields. One of the motivations behind developing the methodology has been the increased focus on the potential environmental impact of marine seismic acquisition, which the new methodology seeks to reduce. A particular focus has been placed on the peak sound pressure levels emitted from seismic sources and their potential impact on marine mammals and fish with swim bladders. Consequently, authorities across the world have started to introduce stronger regulations concerning the use of seismic sources. The industry has responded by engaging in the development of marine vibrator systems that emit lower-amplitude transient signals and hence are expected to comply with stricter environmental regulations. Different marine vibrator systems are currently being developed or tested but none have reached full-scale commercial readiness. The methodology described in this paper has not been developed with any specific marine source technology in mind. The desired signal for the outlined methodology is that of white noise, as this enables deconvolution of the data with the total source wavefield. Indeed any type of mechanical device that produces a source signal that approaches the properties of white noise can be utilized in the eSeismic method. As described later, existing air gun equipment used on board modern seismic vessels has been used to generate the results discussed here and is equally suited for this method as are any marine vibrator systems of the future.
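The property that makes a white-noise-like source attractive here is that the continuous record can be deconvolved by the total emitted source wavefield. A minimal frequency-domain sketch follows (a toy illustration with a water-level stabilization term; it is not the authors' production implementation, and the spike "earth response" is synthetic):

```python
import numpy as np

def source_deconvolve(recorded, source, eps=1e-3):
    """Frequency-domain deconvolution of a continuous record by the
    total source wavefield; eps is a water-level term stabilizing
    frequencies where the source spectrum is weak."""
    R, S = np.fft.rfft(recorded), np.fft.rfft(source)
    G = R * np.conj(S) / (np.abs(S) ** 2 + eps * np.max(np.abs(S)) ** 2)
    return np.fft.irfft(G, n=len(recorded))

# white-noise source, earth response = single reflector at lag 50 samples
rng = np.random.default_rng(0)
n = 1024
source = rng.standard_normal(n)
earth = np.zeros(n)
earth[50] = 1.0
recorded = np.fft.irfft(np.fft.rfft(source) * np.fft.rfft(earth), n=n)
estimate = source_deconvolve(recorded, source)
```

Because the white-noise spectrum is flat on average, the division is well conditioned across the whole band and the reflector is recovered at the correct lag; for a coloured or sparse-spectrum source the same division would blow up wherever the source carries no energy.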

Journal ArticleDOI
TL;DR: In this article, the authors look at the different techniques applied to fluvial reservoir characterization and modelling, review both the algorithms and some of the limitations faced during the modelling steps, and finally introduce a new algorithm that can incorporate different landforms into the reservoir model for improved representation of fluvial depositional environments.
Abstract: Fluvial depositional environments play a major role as hydrocarbon reservoirs around the world and have therefore received considerable attention in the domain of reservoir modelling (Keogh et al., 2007). Modelling of fluvial reservoirs represents a vast research field. The wide range of scales, the heterogeneity of deposits and the complex geometries have made them highly challenging to incorporate into subsurface models that replicate reservoir behaviour in 3D. Multiple facies modelling techniques have been used to mimic these deposits and their geometries in the most realistic way. However, algorithmic limitations may sometimes produce oversimplified models, reducing their predictive power. Furthermore, more detailed and abundant well information as well as seismic data are now often available, and honouring this information is crucial to ensure models will support long-term decision making. In this article, we look at the different techniques applied to fluvial reservoir characterization and modelling, reviewing both the algorithms and some of the limitations faced during the modelling steps, and we finally introduce a new algorithm that can incorporate different landforms into the reservoir model for improved representation of fluvial depositional environments. We will also investigate how this next-generation object-based modelling method can handle data from a real reservoir case.

Journal ArticleDOI
TL;DR: In this article, a case-history demonstrates how the adoption of this strategy resulted in a step change in the applicability of seismic measurements from being used only as a structural interpretation volume, to becoming a multi-purpose dataset suitable for both structural interpretation and advanced reservoir characterization.
Abstract: In the current challenging oil and gas environment it is now more important than ever to maximize return on investment. At a time when projects can face time and budget constraints, and the number of wells required to access both unconventional and conventional resources increases, improvements in the accuracy of seismic information can create significant value during the decision-making process. Seismic inversion is used as a tool to de-risk exploration targets and to better define the extent and the composition of existing hydrocarbon reservoirs. When considering the ‘value of information’ delivered by any new measurements (e.g. new seismic acquisition, processing, and inversion) we must evaluate whether the new information is relevant and economic to obtain. The use of seismic inversion necessitates applied acquisition and processing technologies that can optimize both the data signal-to-noise ratio and spatial reflection wavelet stability, providing the best possible vertical and spatial resolution in the area of interest. Through achieving these technical outcomes, new information will be both relevant and material to the project objectives. The use of acquisition and processing technology can make obtaining the new information economic. In this paper, a case-history demonstrates how the adoption of this strategy resulted in a step change in the applicability of seismic measurements from being used only as a structural interpretation volume, to becoming a multi-purpose dataset suitable for both structural interpretation and advanced reservoir characterization.

Journal ArticleDOI
TL;DR: The South Sumatra Basin is one of the most prolific hydrocarbon provinces in Indonesia as mentioned in this paper, with estimated known petroleum resources of 4.3 billion barrels of oil equivalent (bboe).
Abstract: The South Sumatra Basin is one of the most prolific hydrocarbon provinces in Indonesia (Figure 1), with estimated known petroleum resources of 4.3 billion barrels of oil equivalent (bboe) (Klett et al., 1997). The Talang Akar Formation (TAF) is the main source and reservoir rock in the basin (Sarjono and Sardjito, 1989; Sosrowidjojo et al., 1994). It accounts for more than 75% of the cumulative oil production in the province and has approximately 2 bboe of ultimate recoverable reserves (Tamtomo et al., 1997).

Journal ArticleDOI
Laryssa Oliveira1, Francis Pimentel2, Manuel Peiro1, Pedro J. Amaral2, João Christovan2 
TL;DR: In this paper, the authors present a quantitative seismic interpretation approach to reservoir characterization, building on the improved reliability of seismic data delivered by recent advances in acquisition and processing.
Abstract: Quantitative seismic interpretation plays an important and growing role for reservoir characterization, as seismic data become increasingly reliable as a result of the latest advances in acquisition and processing techniques.

Journal ArticleDOI
TL;DR: In this article, the authors used the full waveform inversion velocity model as a constraint during acoustic impedance inversion to further delineate small-scale velocity anomalies associated with the highly compartmentalized reservoir units.
Abstract: Seismic imaging and reservoir characterization in the Fortuna region, offshore Equatorial Guinea, is beset with various geophysical challenges related to the presence of extensive, but small-scale low-velocity gas pockets, which give rise to significant and cumulative image distortion at target level. This distortion had not been resolved in a vintage 2013 broadband pre-stack depth migration project, as the velocity model was not sufficiently well resolved, but was subsequently addressed successfully in a project conducted using high-resolution non-parametric tomography with improved broadband deghosted data. The primary objective of that subsequent project was to improve the understanding of the internal structure of the Viscata and Fortuna reservoirs, and this objective was met via clearer internal imaging of these reservoir intervals and the overlying gas-charged sediments. The follow-on work considered here deals with the use of full waveform inversion to further delineate small-scale velocity anomalies associated with the highly compartmentalized reservoir units, and also to use the full waveform inversion velocity model as a constraint during acoustic impedance inversion. We compare the results of impedance inversion using both a conventional approach (with the well-log velocities to build the background trend), with a new approach using a trend derived from high-resolution waveform inversion velocities.
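Using a velocity model as the background trend for impedance inversion, as described above, can be sketched in a few lines. This toy version makes two stated substitutions: Gardner's relation stands in for a calibrated density model, and a running mean stands in for a proper low-pass filter:

```python
import numpy as np

def impedance_trend(vel, a=0.31, b=0.25, smooth=21):
    """Low-frequency acoustic-impedance trend from a velocity trace:
    Gardner density (vel in m/s -> rho in g/cc), then a crude low-pass."""
    rho = a * vel ** b            # Gardner's relation
    ai = rho * vel                # acoustic impedance rho * v
    kernel = np.ones(smooth) / smooth
    return np.convolve(ai, kernel, mode="same")

# constant 3000 m/s trace -> constant trend away from the edges
trend = impedance_trend(np.full(200, 3000.0))
```

In the conventional workflow the same trend would instead be interpolated from well-log velocities between sparse well locations; the appeal of an FWI-derived trend is that it carries laterally varying low-frequency information, such as the small-scale gas-pocket anomalies discussed here, into the inversion background.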

Journal ArticleDOI
TL;DR: Roden et al. as discussed by the authors used self-organizing maps (SOMs) to identify natural patterns or clusters in seismic data, where the scale of the patterns identified by this machine learning process is on a sample basis, unlike conventional amplitude data where resolution is limited by the wavelet.
Abstract: Over the past eight years the evolution of machine learning in the form of unsupervised neural networks has been applied to improve, and gain more insight into, the seismic interpretation process (Smith and Taner, 2010; Roden et al., 2015; Santogrossi, 2016; Roden and Chen, 2017; Roden et al., 2017). Today’s interpretation environment involves an enormous amount of seismic data, including regional 3D surveys with numerous processing versions and dozens if not hundreds of seismic attributes. This ‘Big Data’ issue poses problems for geoscientists attempting to make accurate and efficient interpretations. Multi-attribute machine learning approaches such as self-organizing maps (SOMs), an unsupervised learning approach, not only incorporate numerous seismic attributes, but often reveal details in the data not previously identified. The reason for this improved interpretation process is that a SOM analyses data at each sample (sample interval × bin) across the multiple seismic attributes that are simultaneously analysed for natural patterns or clusters. The scale of the patterns identified by this machine learning process is on a sample basis, unlike conventional amplitude data where resolution is limited by the associated wavelet (Roden et al., 2017).
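The sample-based SOM clustering described above can be illustrated with a minimal from-scratch sketch (not the authors' commercial implementation): each multi-attribute sample vector is matched to its best unit on a small 2D grid, and the winner and its grid neighbours are nudged toward the sample, so that units self-organize into the natural clusters present in attribute space:

```python
import numpy as np

def train_som(samples, grid=(4, 4), iters=3000, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny self-organizing map over multi-attribute samples."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    gy, gx = np.divmod(np.arange(n_units), grid[1])   # unit grid coordinates
    w = rng.standard_normal((n_units, samples.shape[1]))
    for t in range(iters):
        x = samples[rng.integers(len(samples))]       # one random sample
        bmu = ((w - x) ** 2).sum(axis=1).argmin()     # best-matching unit
        lr = lr0 * (1.0 - t / iters)                  # decaying learning rate
        sigma = sigma0 * (1.0 - t / iters) + 0.5      # shrinking neighbourhood
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
        h = np.exp(-d2 / (2.0 * sigma ** 2))          # neighbourhood weights
        w += lr * h[:, None] * (x - w)
    return w

# two synthetic 'facies' clusters in a 3-attribute space (illustrative data)
rng = np.random.default_rng(42)
a = rng.normal(0.0, 0.5, (200, 3))
b = rng.normal(10.0, 0.5, (200, 3))
w = train_som(np.vstack([a, b]))
```

In a seismic application each row of `samples` would be the attribute vector at one (sample interval × bin) location, and the trained unit index assigned to each sample becomes the classification volume that is interpreted for natural clusters.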