
Showing papers in "Software - Practice and Experience in 1995"


Journal ArticleDOI
TL;DR: ANTLR, a component of PCCTS, is introduced: a public‐domain parser generator that combines the flexibility of hand‐coded parsing with the convenience of a parser generator.
Abstract: Despite the parsing power of LR/LALR algorithms, e.g. YACC, programmers often choose to write recursive-descent parsers by hand to obtain increased flexibility, better error handling, and ease of debugging. We introduce ANTLR, a component of PCCTS: a public-domain parser generator that combines the flexibility of hand-coded parsing with the convenience of a parser generator. ANTLR has many features that make it easier to use than other language tools. Most important, ANTLR provides predicates which let the programmer systematically direct the parse via arbitrary expressions using semantic and syntactic context; in practice, the use of predicates eliminates the need to hand-tweak the ANTLR output, even for difficult parsing problems. ANTLR also integrates the description of lexical and syntactic analysis, accepts LL(k) grammars for k > 1 with extended BNF notation, and can automatically generate…
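
To make the role of predicates concrete, here is a minimal hand-written sketch in Python (not ANTLR's generated code; the grammar, token layout and names are hypothetical): a recursive-descent parser in which a semantic predicate consults a symbol table to choose between two alternatives that lookahead alone cannot separate.

    # Hypothetical sketch: a semantic predicate directing a recursive-descent parse.
    # 'ID ID ;' is a declaration only if the first ID names a known type.
    def parse_statement(tokens, pos, typedefs):
        if tokens[pos] in typedefs:                  # semantic predicate
            type_name, var_name = tokens[pos], tokens[pos + 1]
            assert tokens[pos + 2] == ';'
            return ('decl', type_name, var_name), pos + 3
        else:                                        # otherwise an expression statement
            expr = tokens[pos]
            assert tokens[pos + 1] == ';'
            return ('expr', expr), pos + 2

    decl_tokens = ['T', 'x', ';']
    expr_tokens = ['y', ';']
    print(parse_statement(decl_tokens, 0, typedefs={'T'}))   # (('decl', 'T', 'x'), 3)
    print(parse_statement(expr_tokens, 0, typedefs={'T'}))   # (('expr', 'y'), 2)

In ANTLR such a predicate is attached to a grammar alternative and evaluated during the parse, so the generated parser, rather than hand-written code, performs this kind of context-dependent choice.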

679 citations


Proceedings ArticleDOI
A. Brie, F. Pampuri, A.F. Marsala, O. Meazza
TL;DR: In this article, a new empirical mixture law that better fits laboratory measurements and field observations was proposed to evaluate gas volume from compressional and shear slownesses, and the effect of shaliness can be accounted for.
Abstract: The introduction, a few years ago, of shear dipole sonic logs gave the industry the possibility to record high-quality shear and compressional slownesses in soft formations. Data sets were acquired and analyzed on Vp/Vs versus Δtc crossplots. Trends were identified in sands and shales and were matched with semi-empirical correlations based on the Gassmann formalism. These trends can be used to quality control shear logs and for quicklook lithology interpretation. The presence of gas in soft formations makes the interpretation more complicated as it can affect the sonic slownesses significantly, in particular the compressional. On the Vp/Vs crossplot, gas-bearing formations clearly differentiate from liquid-filled formations. However, quantitative interpretation of the gas effect with the Gassmann equation gives deceptive results, although this model is successfully used in geophysics interpretation at a lower frequency. We indicate that the Gassmann model itself is not at fault. The responsibility is with the pore fluids mixture law used to compute the average fluid properties. We therefore propose a new empirical mixture law that better fits laboratory measurements and field observations. Using this revised model, realistic gas trends can be identified on the Vp/Vs crossplot. The model can be solved to evaluate gas volume from compressional and shear slownesses. Additionally, the effect of shaliness can be accounted for. The results agree well, in most instances, with flushed-zone saturation obtained from resistivity measurements and provide another opinion on gas volume. An additional product of the interpretation is to provide reliable values of dry-frame dynamic elastic constants of the rock for possible subsequent use in a rock mechanics evaluation.
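
The abstract does not reproduce the proposed mixture law, but the standard relations it builds on can be stated for context (this is background rock physics, not the paper's new model). Gassmann's equation and the conventional isostress (Wood) fluid average are

    \frac{K_{\mathrm{sat}}}{K_0 - K_{\mathrm{sat}}} = \frac{K_{\mathrm{dry}}}{K_0 - K_{\mathrm{dry}}} + \frac{K_{\mathrm{fl}}}{\phi\,(K_0 - K_{\mathrm{fl}})}, \qquad \frac{1}{K_{\mathrm{fl}}} = \frac{S_g}{K_g} + \frac{1 - S_g}{K_{\mathrm{liq}}},

with $V_p = \sqrt{(K_{\mathrm{sat}} + \tfrac{4}{3}\mu)/\rho}$ and $V_s = \sqrt{\mu/\rho}$ giving the slownesses plotted on the crossplots. The paper's new empirical mixture law replaces the fluid-averaging step (conventionally a Wood-type isostress average), which the abstract identifies as responsible for the deceptive Gassmann predictions.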

271 citations


Journal ArticleDOI
TL;DR: The design and implementation of the Modula‐3 network objects system is described, including a thorough description of realistic marshaling algorithms for network objects, precise informal specifications of the major system interfaces, lessons learned from using the system, and performance results.
Abstract: The charter of SRC is to advance both the state of knowledge and the state of the art in computer systems. From our establishment in 1984, we have performed basic and applied research to support Digital’s business objectives. Our current work includes exploring distributed personal computing on multiple platforms, networking, programming technology, system modelling and management techniques, and selected applications. Our strategy is to test the technical and practical value of our ideas by building hardware and software prototypes and using them as daily tools. Interesting systems are too complex to be evaluated solely in the abstract; extended use allows us to investigate their properties in depth. This experience is useful in the short term in refining our designs, and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this strategy, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical flavor. Some of it is in established fields of theoretical computer science, such as the analysis of algorithms, computational geometry, and logics of programming. Other work explores new ground motivated by problems that arise in our systems research. We have a strong commitment to communicating our results; exposing and testing our ideas in the research and development communities leads to improved understanding. Our research report series supplements publication in professional journals and conferences. We seek users for our prototype systems among those with whom we have common interests, and we encourage collaboration with university researchers.

238 citations


Journal ArticleDOI
TL;DR: The structure of a decompiler is presented, along with a thorough description of the different modules that form part of a decompiler, and the type of analyses that are performed on the machine code to regenerate high‐level language code.
Abstract: The structure of a decompiler is presented, along with a thorough description of the different modules that form part of a decompiler, and the type of analyses that are performed on the machine code to regenerate high-level language code. The phases of the decompiler have been grouped into three main modules: front-end, universal decompiling machine, and back-end. The front-end is a machine-dependent module that performs the loading, parsing and semantic analysis of the input program, as well as generating an intermediate representation of the program. The universal decompiling machine is a machine- and language-independent module that performs data and control flow analysis of the program based on the intermediate representation and the program's control flow graph. The back-end is a language-dependent module that deals with the details of the target high-level language.

223 citations


Journal ArticleDOI
TL;DR: In statistical testing, a model is developed to characterize the population of uses of the software, and the model is used to generate a statistically correct sample of all uses of the software.
Abstract: In statistical testing, a model is developed to characterize the population of uses of the software, and the model is used to generate a statistically correct sample of all uses of the software. A software ‘usage model’ characterizes the population of intended uses of the software in the intended environment. Statistical testing based on a software usage model ensures that the failures that will occur most frequently in operational use will be found early in the testing cycle. The usage model is based on the software specification. The model can be developed in parallel with the software, thus shortening the elapsed time required to develop and deliver the software. Usage modeling has been demonstrated to be an activity that improves the specification, gives an analytical description of the specification, quantifies the testing costs and, with statistical testing, provides a basis from which inferences of software reliability may be made. This paper describes the justification for statistical testing of software using a usage model, describes procedures for developing and using a usage model and discusses several usage modeling issues and recent advances in usage model applications.
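
As a concrete illustration (a toy model, not one from the paper), a usage model can be represented as a Markov chain over user states and sampled to produce statistically representative test cases; the state names and transition probabilities below are invented.

    import random

    # Hypothetical usage model: each state maps to (next_state, probability) pairs.
    usage_model = {
        'Start':  [('Login', 1.0)],
        'Login':  [('Browse', 0.7), ('Search', 0.3)],
        'Browse': [('Search', 0.4), ('Logout', 0.6)],
        'Search': [('Browse', 0.5), ('Logout', 0.5)],
        'Logout': [],                               # terminal state
    }

    def generate_test_case(model, start='Start'):
        """One random walk through the usage model = one statistically drawn test case."""
        state, path = start, [start]
        while model[state]:
            nxt, = random.choices([s for s, _ in model[state]],
                                  weights=[p for _, p in model[state]])
            state = nxt
            path.append(state)
        return path

    random.seed(0)
    for _ in range(3):
        print(generate_test_case(usage_model))

Because frequently used paths are sampled more often, failures on those paths tend to surface early in testing, which is the property the paper emphasizes.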

177 citations


Journal ArticleDOI
TL;DR: This paper shows how to use string matching techniques in conjunction with lexicon indexes to find approximate matches in a large lexicon, and proposes methods for combining these techniques, and shows experimentally that these combinations yield good retrieval effectiveness while keeping index size and retrieval time low.
Abstract: Approximate string matching is used for spelling correction and personal name matching. In this paper we show how to use string matching techniques in conjunction with lexicon indexes to find approximate matches in a large lexicon. We test several lexicon indexing techniques, including n-grams and permuted lexicons, and several string matching techniques, including string similarity measures and phonetic coding. We propose methods for combining these techniques, and show experimentally that these combinations yield good retrieval effectiveness while keeping index size and retrieval time low. Our experiments also suggest that, in contrast to previous claims, phonetic codings are markedly inferior to string distance measures, which are demonstrated to be suitable for both spelling correction and personal name matching.
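
A minimal sketch of the kind of combination the paper evaluates (an n-gram index to shortlist candidates, then a string distance to verify); the lexicon, query and threshold are made up, and this is not the paper's exact indexing scheme.

    from collections import defaultdict

    def ngrams(word, n=2):
        padded = f'#{word}#'
        return {padded[i:i + n] for i in range(len(padded) - n + 1)}

    def build_index(lexicon, n=2):
        index = defaultdict(set)
        for word in lexicon:
            for g in ngrams(word, n):
                index[g].add(word)
        return index

    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def approx_lookup(query, index, max_dist=2, n=2):
        candidates = set().union(*(index.get(g, set()) for g in ngrams(query, n)))
        return sorted(w for w in candidates if edit_distance(query, w) <= max_dist)

    lexicon = ['receive', 'believe', 'retrieve', 'reprieve', 'deceive']
    index = build_index(lexicon)
    print(approx_lookup('recieve', index))    # words within edit distance 2 of the query

Replacing the verification step with a phonetic code (e.g. Soundex) gives the alternative family of methods that the experiments found markedly inferior.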

173 citations


Proceedings ArticleDOI
B.L. Beckner1, X. Song1
TL;DR: In this paper, a method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented, where the authors frame the well placement and scheduling problem as a classic travelling salesman problem.
Abstract: A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic travelling salesman problem is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, non-uniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
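
A hedged sketch of the optimization engine only (plain simulated annealing over a drilling sequence, analogous to a travelling-salesman tour); the objective below is a toy discounted-value proxy, not Mobil's economics package or the PEGASUS simulator.

    import math, random

    def npv_proxy(sequence, well_values, discount=0.10):
        """Discount each well's (made-up) value by its position in the schedule."""
        return sum(well_values[w] / (1 + discount) ** t for t, w in enumerate(sequence))

    def anneal(wells, well_values, steps=20000, t0=10.0):
        current, best = wells[:], wells[:]
        for step in range(steps):
            temp = t0 * (1 - step / steps) + 1e-6                     # cooling schedule
            i, j = random.sample(range(len(wells)), 2)
            candidate = current[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]   # swap move
            delta = npv_proxy(candidate, well_values) - npv_proxy(current, well_values)
            if delta > 0 or random.random() < math.exp(delta / temp):
                current = candidate
                if npv_proxy(current, well_values) > npv_proxy(best, well_values):
                    best = current[:]
        return best

    random.seed(1)
    wells = list('ABCDE')
    values = {'A': 5.0, 'B': 12.0, 'C': 3.0, 'D': 9.0, 'E': 7.0}
    print(anneal(wells, values))     # higher-value wells tend to be scheduled first

In the paper the same annealing framework also varies well placement, and the objective is evaluated with a full reservoir simulation rather than a closed-form proxy.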

148 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors evaluated the historical frequency and severity of productivity impairment due to near-wellbore condensate buildup and identified reservoir parameters associated with severe productivity and recovery reduction.
Abstract: The depletion of gas condensate reservoirs to pressures below the dew point has been studied by reservoir engineers for many years. Pressure decline below the dew point pressure causes condensation to occur which creates a hydrocarbon liquid saturation in the reservoir. This process reduces liquid recovery and may reduce gas productivity and gas recovery. Exxon experience, particularly in low-productivity, high-yield gas condensate fields, suggests that liquid condensate formation can result in severe loss of well deliverability and therefore of gas recovery. This study was undertaken to evaluate the historical frequency and severity of productivity impairment due to near-wellbore condensate buildup and to identify reservoir parameters associated with severe productivity and recovery reduction. This study of gas condensate reservoirs included a survey of Exxon and published industry experience, a review of published laboratory data, and simulations with single well flow models. Data from 17 fields are included in this paper to demonstrate that severe loss of gas recovery occurs primarily in low productivity reservoirs. Production data from two wells were history matched with simple radial models to evaluate the potential range of the critical condensate saturation (the minimum mobile condensate saturation) and its impact on gas recovery. Published laboratory data for gas-condensate relative permeability were used as a starting point for these simulations. The primary conclusion from this study is that productivity impairment results in reductions in gas recovery for wells with a permeability-thickness below 1000 md-ft. The history matched simulations support a range of critical condensate saturations from 10% to 30%, in good agreement with published laboratory values.

138 citations


Proceedings ArticleDOI
TL;DR: In this paper, a new technique to determine excessive water and gas production mechanisms as seen in petroleum production wells has been developed and verified; based on systematic numerical simulation studies of reservoir water coning and channeling, it was discovered that log-log plots of WOR (Water/Oil Ratio) vs time or GOR (Gas/Oil Ratio) vs time show different characteristic trends for different mechanisms.
Abstract: A new technique to determine excessive water and gas production mechanisms as seen in petroleum production wells has been developed and verified. Based on systematic numerical simulation studies on reservoir water coning and channeling, it was discovered that log-log plots of WOR (Water/Oil Ratio) vs time or GOR (Gas/Oil Ratio) vs time show different characteristic trends for different mechanisms. The time derivatives of WOR and GOR were found to be capable of differentiating whether the well is experiencing water and gas coning, high-permeability layer breakthrough or near wellbore channeling. This technique was applied on wells in several fields in Texas, California, the Gulf Coast and Alaska. Plots using the actual production history data determined the production problem mechanisms. Together with well tests and logs, the technique was used to select well treatment candidates and to optimize treatments to enhance the return on investment.
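
The calculation behind the diagnostic plots is simple to sketch (the rates below are synthetic and invented for illustration; the characteristic-trend interpretation of coning versus channeling comes from comparing such log-log plots against the paper's type curves, not from this code).

    import numpy as np

    t = np.linspace(1.0, 1000.0, 200)          # days on production
    oil = 500.0 * np.exp(-0.002 * t)           # made-up oil rate, STB/d
    water = 50.0 + 0.8 * t                     # made-up water rate, STB/d

    wor = water / oil                          # water/oil ratio
    dwor_dt = np.gradient(wor, t)              # time derivative of WOR

    # Inspect WOR and dWOR/dt on log-log axes; different production mechanisms
    # produce different slope behavior in the methodology described above.
    for ti, w, dw in list(zip(t, wor, dwor_dt))[::50]:
        print(f't={ti:7.1f} d   WOR={w:8.3f}   dWOR/dt={dw:9.6f}')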

132 citations


Journal ArticleDOI
TL;DR: Fundamental issues in building useful performance‐tuning tools are addressed and the experience with the AIMS toolkit for tuning parallel and distributed programs on a variety of platforms is described.
Abstract: Writing large-scale parallel and distributed scientific applications that make optimum use of the multiprocessor is a challenging problem. Typically, computational resources are underused due to performance failures in the application being executed. Performance-tuning tools are essential for exposing these performance failures and for suggesting ways to improve program performance. In this paper, we first address fundamental issues in building useful performance-tuning tools and then describe our experience with the AIMS toolkit for tuning parallel and distributed programs on a variety of platforms. AIMS supports source-code instrumentation, run-time monitoring, graphical execution profiles, performance indices and automated modeling techniques as ways to expose performance problems of programs. Using several examples representing a broad range of scientific applications, we illustrate AIMS' effectiveness in exposing performance problems in parallel and distributed programs.

123 citations


Journal ArticleDOI
TL;DR: Experiments with 500 Mb of newspaper articles show that in full‐text retrieval environments compression not only saves space, it can also yield faster query processing ‐ a win‐win situation.
Abstract: We describe the implementation of a data compression scheme as an integral and transparent layer within a full-text retrieval system. Using a semi-static word-based compression model, the space needed to store the text is under 30 per cent of the original requirement. The model is used in conjunction with canonical Huffman coding and together these two paradigms provide fast decompression. Experiments with 500 Mb of newspaper articles show that in full-text retrieval environments compression not only saves space, it can also yield faster query processing - a win-win situation.
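
A minimal sketch of the modelling idea (a semi-static, word-based model with Huffman codes); this is ordinary tree-based Huffman coding in Python, not the canonical-Huffman implementation the paper describes, and the sample text is invented.

    import heapq
    from collections import Counter
    from itertools import count

    def huffman_codes(freqs):
        """Return {symbol: bitstring} for a frequency table."""
        tiebreak = count()                     # avoids comparing dicts on frequency ties
        heap = [(f, next(tiebreak), {sym: ''}) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: '0' + code for s, code in c1.items()}
            merged.update({s: '1' + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
        return heap[0][2]

    words = 'the cat sat on the mat the cat ran'.split()
    codes = huffman_codes(Counter(words))      # first pass builds the semi-static word model
    compressed = ''.join(codes[w] for w in words)
    print(codes)
    print(f'{len(compressed)} bits for {len(words)} words')

Treating whole words as symbols is what makes the model effective for text, since word frequencies are highly skewed; canonical code assignment then makes decoding fast, which is the property the paper exploits during query processing.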

Proceedings ArticleDOI
TL;DR: In this paper, the influence of temperature and pressure on the wettability of reservoir rocks was investigated, in which a Pendant Drop Interfacial Tension Cell was modified to measure contact angle and interfacial tension for two different crude oil-brine-quartz/calcite and mineral oil-distilled water systems.
Abstract: Wettability is a key parameter that affects the petrophysical properties of reservoir rocks. The objective of the present work is to investigate the influence of temperature and pressure on the wettability of reservoir rocks. An experimental method for the measurement of contact angle at elevated temperature and pressure has been developed, in which a Pendant Drop Interfacial Tension Cell was modified. Experimental results of contact angle and interfacial tension for two different crude oil-brine-quartz/calcite and mineral oil-distilled water systems over a range of temperatures and pressures are reported. Contact angle for the systems studied increased with pressure, increased with temperature for the sandstone system, and decreased with temperature for the carbonate system.

Proceedings ArticleDOI
TL;DR: In this paper, the authors examine Biot's two-phase (fluid and rock), isothermal, linear poroelastic theory from the conventional porous fluid-flow modeling point of view.
Abstract: The purpose of this study is to examine Biot's two-phase (fluid and rock), isothermal, linear poroelastic theory from the conventional porous fluid-flow modeling point of view. Biot's theory and the published applications are oriented more toward rock mechanics than fluid flow. Our goal is to preserve the commonly used systematic porous fluid-flow modeling and include geomechanics as an additional module. By developing such an approach, complex reservoir situations involving geomechanical issues (e.g., naturally fractured reservoirs, stress-sensitive reservoirs) can be pursued more systematically and easily. We show how the conventional fluid-flow formulation is extended to a coupled fluid-flow-geomechanics model. Consistent interpretation of various rock compressibilities and the effective stress law are shown to be critical in achieving the coupling. The 'total (or system) compressibility' commonly used in reservoir engineering is shown to be a function of boundary conditions. Under the simplest case (isotropic homogeneous material properties), the fluid pressure satisfies a fourth-order equation instead of the conventional second-order diffusion equation. Limiting cases, including a nondeformable medium, incompressible fluid and solid, and constant mean normal stress, are analyzed.
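
For context (standard poroelasticity, not reproduced from the paper): the effective stress law referred to is usually written in Biot's form as $\sigma'_{ij} = \sigma_{ij} - \alpha\, p\, \delta_{ij}$, where $p$ is the pore pressure and $\alpha = 1 - K_{\mathrm{dry}}/K_s$ is the Biot coefficient; using $\alpha$ consistently in both the flow and stress equations is what couples the two modules.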

Journal ArticleDOI
TL;DR: The methods of path compression, level compression and data compression are combined to build a simple, compact and efficient implementation of a suffix tree that is superior to previous methods in many practical situations.
Abstract: We study the problem of string searching using the traditional approach of storing all unique substrings of the text in a suffix tree. The methods of path compression, level compression and data compression are combined to build a simple, compact and efficient implementation of a suffix tree. Based on a comparative discussion and extensive experiments, we argue that our new data structure is superior to previous methods in many practical situations.
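
A naive illustrative sketch of the underlying structure (quadratic-time construction with edge labels stored as substrings; the paper's contributions, level compression and a compact layout, are not reproduced here):

    class Node:
        def __init__(self):
            self.children = {}        # first character -> (edge_label, child Node)

    def build_suffix_tree(text):
        text += '$'                   # unique terminator
        root = Node()
        for i in range(len(text)):
            node, suffix = root, text[i:]
            while suffix:
                first = suffix[0]
                if first not in node.children:
                    node.children[first] = (suffix, Node())     # new path-compressed edge
                    break
                label, child = node.children[first]
                k = 0                 # length of common prefix of edge label and suffix
                while k < len(label) and k < len(suffix) and label[k] == suffix[k]:
                    k += 1
                if k == len(label):                              # edge fully matched
                    node, suffix = child, suffix[k:]
                else:                                            # split the edge at k
                    mid = Node()
                    mid.children[label[k]] = (label[k:], child)
                    node.children[first] = (label[:k], mid)
                    node, suffix = mid, suffix[k:]
        return root

    def contains(root, pattern):
        node, rest = root, pattern
        while rest:
            entry = node.children.get(rest[0])
            if entry is None:
                return False
            label, child = entry
            if rest[:len(label)] != label[:len(rest)]:
                return False
            node, rest = child, rest[len(label):]
        return True

    tree = build_suffix_tree('mississippi')
    print(contains(tree, 'issip'), contains(tree, 'issix'))      # True False

A production implementation would store (start, end) offsets instead of substring copies and use a linear-time construction; the point here is only the path-compressed tree shape on which those refinements are built.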

Proceedings ArticleDOI
TL;DR: In this paper, the production impact of fracture reorientation on both primary and secondary recovery schemes is addressed and strategies are presented which utilize the recent findings for both enhancing primary recovery and mitigating some common problems with secondary recovery.
Abstract: Hydraulic fracture orientation is critical to both primary and secondary oil recovery in low-permeability reservoirs. Incomplete and often overlapping drainage patterns under primary recovery, as well as inefficient sweep and premature water (or steam) breakthrough under secondary recovery, are some of the common production problems that often result from hydraulic fracture reorientation. Often, hydraulic fracture orientation is measured on a few wells, and then generalized across the entire field under development. This characterization of regional fracture (stress) orientation is then assumed constant over the development life of the field. A wealth of recent observations has definitively shown that fracture (stress) orientation in low-permeability reservoirs can be profoundly affected by production activities. Hydraulic fracture reorientation has been observed on dozens of staged fracture treatments (in several fields) under both primary and secondary recovery. A summary of collected field data from three extensive field studies is presented. The production impact of fracture reorientation on both primary and secondary recovery schemes is addressed, and strategies are presented which utilize the recent findings for both enhancing primary recovery and mitigating some common problems with secondary recovery. The discussion of reorientation mechanisms is greatly enlightened by recent data which reveal a startling correlation between observed fracture reorientation and indirect measurements of reservoir compaction.

Journal ArticleDOI
TL;DR: The type inference algorithm for the SELF language can guarantee the safety and disambiguity of message sends, and provide useful information for browsers and optimizing compilers.
Abstract: We have designed and implemented a type inference algorithm for the Self language. The algorithm can guarantee the safety and disambiguity of message sends, and provide useful information for browsers and optimizing compilers. Self features objects with dynamic inheritance. This construct has until now been considered incompatible with type inference because it allows the inheritance graph to change dynamically. Our algorithm handles this by deriving and solving type constraints that simultaneously define supersets of both the possible values of expressions and of the possible inheritance graphs. The apparent circularity is resolved by computing a global fixed point, in polynomial time. The algorithm has been implemented and can successfully handle the Self benchmark programs, which exist in the 'standard Self world' of more than 40,000 lines of code.

Journal ArticleDOI
TL;DR: Experimental results for string matching algorithms which are known to be fast in practice show that for large alphabets and small patterns the Quick Search algorithm of Sunday is the most efficient and that for small alphabets and large patterns it is the Reverse Factor algorithm of Crochemore et al. which is the most efficient.
Abstract: We present experimental results for string matching algorithms which are known to be fast in practice. We compare these algorithms through two aspects: the number of text character inspections and the running time. These experiments show that for large alphabets and small patterns the Quick Search algorithm of Sunday is the most efficient and that for small alphabets and large patterns it is the Reverse Factor algorithm of Crochemore et al. which is the most efficient.
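
For reference, Sunday's Quick Search is short enough to sketch directly (the shift is driven by the text character just to the right of the current window, allowing shifts of up to m + 1):

    def quick_search(pattern, text):
        m, n = len(pattern), len(text)
        shift = {c: m - i for i, c in enumerate(pattern)}   # distance from last occurrence
        occurrences, j = [], 0
        while j <= n - m:
            if text[j:j + m] == pattern:
                occurrences.append(j)
            if j + m >= n:
                break
            j += shift.get(text[j + m], m + 1)              # char after the window decides the shift
        return occurrences

    print(quick_search('issi', 'mississippi'))              # [1, 4]

The Reverse Factor algorithm favored for small alphabets and long patterns scans each window right to left using a suffix automaton of the reversed pattern; it is considerably longer, so it is not sketched here.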

Journal ArticleDOI
TL;DR: In many scientific applications, arrays containing data are indirectly indexed through indirection arrays; such irregular programs are a distinct class of applications that require special techniques for parallelization.
Abstract: In many scientific applications, arrays containing data are indirectly indexed through indirection arrays. Such scientific applications are called irregular programs and are a distinct class of applications that require special techniques for parallelization. This paper presents a library called CHAOS, which helps users implement irregular programs on distributed-memory message-passing machines, such as the Paragon, Delta, CM-5 and SP-1. The CHAOS library provides efficient runtime primitives for distributing data and computation over processors; it supports efficient index translation mechanisms and provides users high-level mechanisms for optimizing communication. CHAOS subsumes the previous PARTI library and supports a larger class of applications. In particular, it provides efficient support for parallelization of adaptive irregular programs where indirection arrays are modified during the course of computation. To demonstrate the efficacy of CHAOS, two challenging real-life adaptive applications were parallelized using CHAOS primitives: a molecular dynamics code, CHARMM, and a particle-in-cell code, DSMC. Besides providing runtime support to users, CHAOS can also be used by compilers to automatically parallelize irregular applications. This paper demonstrates how CHAOS can be effectively used in such a framework. By embedding CHAOS primitives in the Syracuse Fortran 90D/HPF compiler, kernels taken from the CHARMM and DSMC codes have been automatically parallelized.
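
A conceptual sketch of the runtime pattern CHAOS supports (plain Python/NumPy, not the CHAOS or PARTI API; the arrays and ownership split are invented): an 'inspector' examines the indirection arrays once to plan which off-processor elements are needed, and an 'executor' then performs the communication plan and the irregular computation.

    import numpy as np

    x = np.zeros(8)
    y = np.arange(8, dtype=float)
    ia = np.array([0, 3, 3, 7, 2])       # indirection array for writes
    ib = np.array([1, 4, 2, 6, 5])       # indirection array for reads

    def inspector(read_index, owned):
        """Indices that must be fetched from other processors before executing."""
        return sorted(set(read_index.tolist()) - set(owned))

    def executor(x, y, ia, ib):
        """The irregular update x[ia[i]] += y[ib[i]] once the needed data is local."""
        np.add.at(x, ia, y[ib])          # handles repeated indices in ia correctly
        return x

    owned = range(0, 4)                  # pretend this processor owns y[0:4]
    print('fetch:', inspector(ib, owned))
    print(executor(x, y, ia, ib))

When the indirection arrays change during the computation, as in the adaptive applications mentioned above, the inspection step must be repeated or incrementally updated, which is the harder case CHAOS is designed to support efficiently.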


Journal ArticleDOI
TL;DR: This work claims that the traditional implementations of strings, and often the supported functionality, are not well suited to general‐purpose use and presents ‘ropes’ or ‘heavyweight’ strings as an alternative that leads to systems that are more robust, both in functionality and in performance.
Abstract: Programming languages generally provide a ‘string’ or ‘text’ type to allow manipulation of sequences of characters. This type is usually of crucial importance, since it is normally mentioned in most interfaces between system components. We claim that the traditional implementations of strings, and often the supported functionality, are not well suited to such general-purpose use. They should be confined to applications with specific, and unusual, performance requirements. We present ‘ropes’ or ‘heavyweight’ strings as an alternative that, in our experience, leads to systems that are more robust, both in functionality and in performance. Ropes have been in use in the Cedar environment almost since its inception, but this appears to be neither well-known nor discussed in the literature. The algorithms have been gradually refined. We have also recently built a second similar, but somewhat lighter weight, C-language implementation, which is included in our publicly released garbage collector distribution. We describe the algorithms used in both, and give some performance measurements for the C version.
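
An illustrative sketch of the rope idea (a concatenation tree over immutable leaves), not the Cedar or C implementations described in the paper: concatenation allocates one node without copying characters, and indexing walks the tree using stored lengths.

    class Leaf:
        def __init__(self, s):
            self.s, self.length = s, len(s)
        def index(self, i):
            return self.s[i]

    class Concat:
        def __init__(self, left, right):
            self.left, self.right = left, right
            self.length = left.length + right.length
        def index(self, i):
            if i < self.left.length:
                return self.left.index(i)
            return self.right.index(i - self.left.length)

    def concat(a, b):
        return Concat(a, b)              # O(1): no copying of the underlying characters

    rope = concat(concat(Leaf('The quick '), Leaf('brown fox ')),
                  Leaf('jumps over the lazy dog'))
    print(rope.length, rope.index(4), rope.index(16))   # 43 q f

The real implementations additionally rebalance the tree and flatten short subtrees, which is what keeps indexing and iteration competitive with flat strings.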


Journal ArticleDOI
TL;DR: This model is generic because it is not based on any language or design method; it can be applied on the basis of the ‘problem’ to be solved and also captures four life-cycle phases: requirement, specification, design and programming.
Abstract: The maintenance of a software system requires a tool for impact analysis and the propagation of change. This paper presents a knowledge-based model for both. This model is generic because it is not based on any language or design method. Therefore, it can be applied on the basis of a ‘problem’ to be solved. It also captures four life-cycle phases: requirement, specification, design and programming. We also provide a domain-specific view that allows the dependency analysis of fine-grain objects. Two kinds of dependencies are identified: inter-phase dependencies, which are dependency relations between the objects of one phase and another; and intra-phase dependencies, which are dependency relations between the objects of the same phase. In order to validate this model, we also present a prototype based on two life-cycle phases: design and programming.

Proceedings ArticleDOI
TL;DR: A new approach in fractured reservoir characterization and simulation that integrates geomechanics, geology, and reservoir engineering is proposed and illustrated with actual oil reservoirs, using a neural network to find the relationship between reservoir structure, bed thickness and the well performance used as an indicator of fracture intensity.
Abstract: A new approach in fractured reservoir characterization and simulation that integrates geomechanics, geology, and reservoir engineering is proposed and illustrated with actual oil reservoirs. This approach uses a neural network to find the relationship between reservoir structure, bed thickness and the well performance used as an indicator of fracture intensity. Once the relation is established, the neural network can be used to forecast primary production or for mapping the reservoir fracture intensity. The resulting fracture intensity distribution can be used to represent the subsurface fracture network. Using the fracture intensity map and fracture network, directional fracture permeabilities and fracture pore volume can be estimated via a history matching process where only two parameters are adjusted.

Proceedings ArticleDOI
TL;DR: In this paper, the authors present the first field-scale measurements of in situ stress effects on the permeability of coal seams and demonstrate the importance of these effects on a highly compressible reservoir such as coal by relating permeability and production to stress.
Abstract: This paper presents the first field-scale measurements of in situ stress effects on the permeability of coal seams. The importance of these effects on a highly compressible reservoir such as coal is demonstrated by relating permeability and production to stress. Well testing complications and the implications of stress toward exploitation of existing reserves and exploration for new reserves are also discussed. Additionally, comparisons of this paper's findings to prior theoretical work, core testing, and limited field data are presented.

Proceedings ArticleDOI
TL;DR: In this paper, coupled osmotic flow with optimized shale-fluid membrane efficiencies in water-based environments is presented as a new strategy for improving wellbore stability in shales.
Abstract: Coupled osmotic flows have been studied as a means of stabilising shales exposed to water-based muds. The prime factor that governs the magnitude of chemical osmotic flow, i.e. the shale-fluid membrane efficiency, was investigated in detail. Its dependence on shale parameters, fluid parameters and external conditions was quantified. Membrane efficiency was found to increase with an increase in (hydrated) solute-to-pore-size ratio, with an increase in the shale's high-surface-area clay content, and with a decrease in shale permeability at increasing effective confining stress. Moreover, new drilling fluid chemistries for improving the efficiencies of low- and non-selective shale-fluid systems were identified. Induced osmotic flow with optimised shale-fluid membrane efficiencies in water-based environments is presented as a new strategy for improving wellbore stability in shales.

Proceedings ArticleDOI
W. Boom, K. Wit, A.M. Schulte, S. Oedai, J.P.W. Zeelenberg, J.G. Maas
TL;DR: In this paper, a wide range of relative permeability and of saturation were probed, and several orders of magnitude of capillary number, interfacial tension and absolute permeability were covered.
Abstract: The inflow performance of gas wells in gas/condensate fields may be impaired when condensate banks form near the wellbore as a result of the pressure dropping below the dewpoint. This impairment may be alleviated to some extent, however, by the increase in condensate mobility at the prevailing conditions, which are characterised by high gas flow rates and relatively low interfacial tension. Scouting reservoir simulations indeed identified this mobility improvement to be a key uncertainty in well-deliverability forecasting for gas/condensate reservoirs. Model experiments with core material from various gas/condensate reservoirs in Europe and the Middle East were therefore conducted at ambient conditions to assess the degree to which the mobility of condensate could be improved by the applied flow conditions. A wide range of relative permeability and of saturation was probed, and several orders of magnitude of capillary number, interfacial tension and absolute permeability were covered. In all cases, the results reveal a truly significant mobility increase. Moreover, it was experimentally demonstrated that the key parameter controlling this effect is the capillary number and not the interfacial tension alone. The dynamic relative permeabilities that were obtained from our experiments are currently being used for history-matching well tests with equation-of-state reservoir simulations.
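
For context, the capillary number referred to is the standard dimensionless ratio of viscous to capillary forces, commonly written $N_c = \mu v / \sigma$, with $\mu$ the displacing-phase viscosity, $v$ the flow velocity and $\sigma$ the gas-condensate interfacial tension; the abstract's point is that the relative permeability improvement correlates with $N_c$ as a whole rather than with $\sigma$ alone.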


Proceedings ArticleDOI
TL;DR: Three possible mechanisms of near-wellbore effects - perforation phasing misalignment-induced rock pinching, perforation pressure drop, and fracture reorientation (deviation tortuosity) - and their implementation in a numerical fracture simulator are described, and a method is proposed by which to distinguish them.
Abstract: The high near-wellbore pressure drop which has frequently been reported in fracture treatments is indicative of ineffective communication between the wellbore and the fracture. Although numerous observations of such effects have been published, few attempts have been made to understand them. This paper describes three possible mechanisms of near-wellbore effects: perforation phasing misalignment-induced rock pinching, perforation pressure drop, and fracture reorientation (deviation tortuosity) - and their implementation in a numerical fracture simulator. Typical signatures of all three effects in fracture treatment records are shown, and a method is proposed by which to distinguish them. Perforation phasing misalignment has been identified as a cause of near-wellbore restriction. Because the fracture does not always initiate at the perforation, the fluid must communicate with the fracture through a narrow channel (micro-annulus) around the casing. The paper describes this pinching effect and shows that it is related to the contact stress between the cement and the formation. Perforation pressure drop and deviation tortuosity, which have previously been proposed by other authors, have also been modeled. They have been incorporated in the simulator, together with the phasing misalignment effect, to allow investigation of the differences in response between the different mechanisms. Results of simulating the effects of erosion of near-well effects on treating pressure are also shown.

Proceedings ArticleDOI
TL;DR: In this paper, the authors describe the Dual-Burst* PNC measurement scheme for the RST* Reservoir Saturation Tool, as well as the algorithm methodology for determining corrected sigma, porosity, borehole fluid salinity, log quality and other associated outputs.
Abstract: Recently, two new slim through-tubing carbon-oxygen tools were introduced and described in the literature. In addition to carbon-oxygen water-saturation capability, these new instruments can provide, during the same trip, pulsed neutron capture (PNC) measurements (formation sigma, porosity, borehole fluid salinity, etc.). The answers have improved accuracy and precision compared to prior logging instruments. This paper describes the Dual-Burst* PNC measurement scheme for the RST* Reservoir Saturation Tool, as well as the algorithm methodology for determining corrected sigma, porosity, borehole fluid salinity, log quality and other associated outputs. Also documented are database measurements on which the algorithms are based, accuracy benchmarks in industry-standard calibration facilities, and precision (repeatability) comparisons. Log examples are presented. Diffusion, borehole and lithology effects must be considered when transforming observed quantities such as decay times or near-to-far ratios to actual physical quantities. However, these effects are difficult to account for in direct analytical approaches over the entire range of oilfield conditions. Therefore, a multidimensional dynamic parameterization technique, based on an extensive set of laboratory measurements, has been developed and refined. This technique keeps the order of parameters low, resulting in a well-behaved response both inside and outside the range spanned by the database. The supporting database for each tool includes over 1000 measured points augmented with approximately 400 modeled points spanning different lithologies, porosities, borehole sizes, casing sizes and weights, formation fluid salinities, and borehole fluid salinities typically encountered in the oilfield. Also reported are the sigma and porosity accuracy benchmark measurements made in the industry-standard calibration facilities (EUROPA facility, Aberdeen, Scotland and API test pits, University of Houston, Houston, Texas, USA). Precision (repeatability) is also compared to prior PNC logging instruments, demonstrating logging speeds typically several times faster for comparable precision.

Journal ArticleDOI
TL;DR: The software engineering principles and the project management techniques used in developing Myriad and the lessons learned are presented; these lessons should be useful for practitioners who wish to develop a similar system.
Abstract: A key problem in providing ‘enterprise-wide’ information is the integration of databases that have been independently developed. An important requirement is to accommodate heterogeneity and maintain the autonomy of component databases. Myriad is a federated database prototype developed at the University of Minnesota, to provide a testbed for investigating alternatives in architecture and algorithms for database integration, query processing and optimization, and concurrency control and recovery. The system incorporates our group’s research results in these areas. This paper describes our experiences in the design and implementation of Myriad, and in the project management. Special emphasis is given to discussing design alternatives and their impact on Myriad. This paper also presents the software engineering principles and the project management techniques we used in developing Myriad and the lessons we learned. We believe these lessons would be useful for practitioners who wish to develop a similar system. Handling heterogeneity and autonomy were prime objectives throughout the prototyping effort. We are convinced that a prototype federated database is an important infrastructural requirement for the overall goal of ‘enterprise-integration’, and believe Myriad to be a significant contribution towards this.