
Showing papers in "Software - Practice and Experience in 1982"


Journal ArticleDOI
TL;DR: This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place and its use in the problem of finding the maximal common subgraph of two graphs is described.
Abstract: Backtrack algorithms are applicable to a wide variety of problems. An efficient but readable version of such an algorithm is presented and its use in the problem of finding the maximal common subgraph of two graphs is described. Techniques available in this application area for ordering and pruning the backtrack search are discussed. This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place. Graph matching algorithms have been extensively studied for use in the analysis of chemical molecules and chemical reactions [1-3]. The methods discussed in the present paper were developed for use in a program for classifying chemical reactions. Given structural descriptions of the molecules present before and after the reaction, the program identifies the chemical bonds changed by the reaction. The present paper is not concerned with chemical details, which are given elsewhere [4, 5], and instead concentrates on the underlying abstract problem of determining the maximal common subgraph of two graphs. Relatively efficient solutions to this fundamental problem may find practical applications in many areas besides chemistry. For example, various police forces have attempted to match descriptions of crimes: when the descriptions, which may be in the form of graphs, are sufficiently similar, then the same gang may have committed the crimes. This and other such applications are not yet clearly established because the maximal common subgraph problem has hitherto appeared to be combinatorially prohibitive. As well as having the prospect of usefulness in many areas, the present work provides a case-study in combinatorial computing. Levi [6] describes an algorithm for deriving the maximal common subgraph of two graphs, and the use of this algorithm in comparing molecular structures has been discussed by Cone [7]. Levi's definition of the term 'maximal common subgraph' is inappropriate for the reaction classification problem. The present paper uses a more appropriate and generally useful measure of the similarity between two graphs, for which Levi's methods do not work. We will introduce a maximal common subgraph algorithm that uses a backtrack search [8] as its basic component. Backtrack search, if used alone in a problem of this nature, is grotesquely inefficient. If, however, other techniques can be found for
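To make the backtrack component concrete, here is a minimal Python sketch of a backtrack search for an edge-maximal common subgraph with a crude upper-bound prune. It illustrates the technique only; the paper's ordering and pruning refinements, and its precise definition of 'maximal common subgraph', are not reproduced here.

```python
from itertools import combinations

def common_subgraph(g1, g2):
    """Backtrack search for an injective vertex mapping from g1 into g2
    that preserves as many edges as possible.  g1, g2 are adjacency-set
    dicts for undirected graphs, e.g. {0: {1, 2}, 1: {0}, 2: {0}}."""
    v1, v2 = list(g1), list(g2)
    best = {"score": -1, "map": {}}

    def preserved(mapping):
        # Number of g1 edges whose endpoints map onto a g2 edge.
        return sum(1 for a, b in combinations(mapping, 2)
                   if b in g1[a] and mapping[b] in g2[mapping[a]])

    def extend(i, mapping, used):
        score = preserved(mapping)
        # Crude bound: every vertex not yet considered could still
        # contribute all of its g1 edges.  Prune whenever even that
        # cannot beat the best complete mapping found so far.
        if score + sum(len(g1[v]) for v in v1[i:]) <= best["score"]:
            return
        if i == len(v1):
            best["score"], best["map"] = score, dict(mapping)
            return
        v = v1[i]
        for w in v2:                      # try mapping v to each free w
            if w not in used:
                mapping[v] = w
                extend(i + 1, mapping, used | {w})
                del mapping[v]
        extend(i + 1, mapping, used)      # or leave v unmatched

    extend(0, {}, set())
    return best["score"], best["map"]

# Usage: a triangle and a square share at best two edges.
g1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
g2 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(common_subgraph(g1, g2))
```

Even this toy version shows where ordering and pruning matter: the bound is consulted at every node of the search tree, so tighter bounds and a good vertex order shrink the search dramatically.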

338 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system.
Abstract: In this paper we describe a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system. The techniques used are applicable to a variety and multiplicity of both local and wide area networks, and enable all issues of inter-processor communication, network protocols, etc., to be hidden. A brief account is given of experience with such a distributed system, which is currently operational on a set of PDP-11s connected by a Cambridge Ring. The final sections compare our scheme to various precursor schemes and discuss its potential relevance to other operating systems.

195 citations




Journal ArticleDOI
TL;DR: An overview of the language mechanisms is given, some of the major design decisions are discussed and one implementation of SR is described.
Abstract: SR is a new language for programming software containing many processes that execute in parallel. The language allows an entire software system that controls a potentially large collection of processors to be programmed as an integrated set of software modules. The key language mechanisms are resources, operations and input statements. The language supports separate compilation, type abstraction, and dynamic communication links; it also contains novel treatments of arrays and procedures. This paper gives an overview of the language mechanisms, discusses some of the major design decisions and describes one implementation.

80 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors show that vertical conformance in a WAG flood can be greatly improved by use of injection rates that are high relative to well spacing, and that the resultant oil recovery is primarily a function of a single dimensionless parameter, depending on the viscosity density ratio.
Abstract: Oil recovery by the injection of water alternately with gas (WAG flooding) is often limited by gravity segregation, which causes the injected gas to rise to the top of the formation and water to migrate to the bottom. This segregation results in a miscible flood in only a thin layer at the top of the formation, whereas the remainder is waterflooded. However, vertical conformance in a WAG flood can be greatly improved by use of injection rates that are high relative to well spacing. The degree of improvement and, hence, the resultant oil recovery is primarily a function of a single dimensionless parameter, depending on the viscosity density ratio.

79 citations


Proceedings ArticleDOI
Kishore K. Mohanty
TL;DR: In this article, a pore-level, three-dimensional network model is presented, which incorporates the structure of porous rock, porelevel fluid displacement mechanisms and saturation history.
Abstract: The pores a phase occupies and the way those pores are interconnected are shown here to determine the macroscopic transport properties of that phase. Such transport properties are dispersivity, capacitance, relative permeability and capillary pressure. A pore-level, three-dimensional network model is presented in this paper which incorporates the structure of porous rock, pore-level fluid displacement mechanisms and saturation history. The model calculates pore-level distribution of fluids, and then computes steady-state transport properties of such distributions. The results are compared with experimentally obtained values and illustrate the nature of fluid flow and the mechanisms of mixing in strongly wetting rocks.

72 citations


Journal ArticleDOI
Brian W. Kernighan
TL;DR: PIC is a language for specifying pictures so that they can be typeset as an integral part of a document preparation system.
Abstract: PIC is a language for specifying pictures so that they can be typeset as an integral part of a document preparation system. The basic objects in PIC are boxes, lines, arrows, circles, ellipses, arcs and splines, which may be placed at positions specified absolutely or in terms of previous objects, and labeled with arbitrary text. The example below illustrates the general capabilities of the language. Most of the effort in designing PIC has gone into making it possible to specify the sizes and positions of objects with minimal use of absolute coordinates. This paper describes PIC, with primary emphasis on those aspects of the language that make it easy to use. The paper was typeset using PIC.

71 citations


Proceedings ArticleDOI
J.C. Melrose
TL;DR: In this article, the limits under which stable thin aqueous wetting films can exist are examined in more detail and it is shown that there exists a lower limit to the pore size in which thin wetting films will occur and that this limiting size depends on the salinity of the brine.
Abstract: It has recently been noted that the diffuse electrical double layers which exist at the oil/ brine and the mineral/brine interfaces in sandstone reservoirs will in many cases be quite similar with respect to electric charge and potential. Extremely thin aqueous wetting films separating such interfaces are thus stabilized by the electrostatic repulsive force acting between the double layers. In the present paper the limits under which stable thin films can exist are examined in more detail. It is shown that there exists a lower limit to the pore size in which thin wetting films will occur and that this limiting size depends on the salinity of the brine. It is also shown that whether this limiting pore size becomes a factor in determining the wettability of the rock/brine/oil system depends on the pore size distribution curve and on the initial brine saturation of the rock. Geological factors which may come into play in establishing the initial water saturation in a given case are discussed.

59 citations


Proceedings ArticleDOI
TL;DR: In this article, tracer displacements were run during steady-state, two-phase flow, where tracers were added to both the wetting and nonwetting phases, and the results of all these experiments were used to formulate a mechanistic picture of steady, two-phase flow in strongly wetted porous media.
Abstract: Tracer displacements were run during steady-state, two-phase flow, where tracers were added to both the wetting and nonwetting phases. Such experiments were conducted in the primary drainage, imbibition, and secondary drainage cycles over a wide range of fractional flows. The effluent tracer profiles from these experiments were fit with a four-parameter, capacitance-dispersion model, which divides each phase into three fractions: flowing, dendritic (branching), and isolated. Mixing in the flowing fraction is described by a dispersion coefficient, and communication between the flowing and dendritic fraction is represented by a single mass transfer coefficient. This model is shown to adequately fit tracer effluent data over a wide range of conditions. In some areas the model is nonunique; thus, more than one solution fits the experimental data equally well. Additional experiments were conducted to differentiate mechanisms in these regions. The effects of core wettability were investigated and shown to be important. The results of all of these experiments were used to formulate a mechanistic picture of steady, two-phase flow in strongly wetted porous media.
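For orientation, the classic two-fraction capacitance-dispersion balance of Coats and Smith, of which such four-parameter flowing/dendritic/isolated models are an extension, can be written as follows (notation assumed here, not quoted from the paper):

$$ f\,\frac{\partial C_f}{\partial t} + (1-f)\,\frac{\partial C_d}{\partial t} = D\,\frac{\partial^2 C_f}{\partial x^2} - v\,\frac{\partial C_f}{\partial x}, \qquad (1-f)\,\frac{\partial C_d}{\partial t} = M\,(C_f - C_d), $$

where $C_f$ and $C_d$ are tracer concentrations in the flowing and dendritic (stagnant) fractions, $f$ is the flowing fraction, $D$ the dispersion coefficient, $v$ the interstitial velocity and $M$ the mass transfer coefficient.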

56 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors presented new ideas for the interpretation of pressure transient tests for wells in naturally fractured reservoirs, based on the transient matrix flow model formulated by de Swaan.
Abstract: New ideas are presented for the interpretation of pressure transient tests for wells in naturally fractured reservoirs. This work is based on the transient matrix flow model formulated by de Swaan. The differences between this model and the Warren and Root model occur during the transition flow period. It is demonstrated that the behavior of a naturally fractured reservoir can be correlated by using three dimensionless parameters. It is established that regardless of matrix geometry the transition period might exhibit a straight line whose slope is equal to half the slope of the classical parallel semilog straight lines, provided the transient matrix linear flow is present. In addition, information is provided on the estimation of fracture area per unit matrix volume or matrix parameters from the transition period semilog straight line. It is shown that matrix geometry might be identified when pressure data are smooth. Field examples are included to illustrate the application and the validity of the theoretical results of this study.
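In standard semilog notation (symbols assumed here, as an illustration of the stated result): if the classical parallel straight lines follow $\Delta p = m \log_{10} t + b_i$, then, while transient matrix linear flow is present, the transition period can exhibit

$$ \Delta p = \frac{m}{2}\,\log_{10} t + b_{\mathrm{tr}}, $$

i.e. a straight line of exactly half the classical slope, which is the signature used to identify the regime and to estimate fracture area per unit matrix volume or matrix parameters.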


Proceedings ArticleDOI
TL;DR: In this article, a method for analyzing interference tests in reservoirs with double porosity behavior is presented; it applies to both naturally fractured reservoirs and multilayered reservoirs with sufficiently high permeability contrast between layers.
Abstract: A systematic approach for analyzing interference tests in reservoirs with double porosity behavior is presented; it applies to both naturally fractured reservoirs and multilayered reservoirs with sufficiently high permeability contrast between layers. Type curves are presented for the pressure response at an observation well, the active well being produced at either constant flow rate or constant pressure. They are derived from two models with the assumptions of pseudo-steady state or transient interporosity flow regime. The distinctive specific features they exhibit are clearly identified and quantitatively related to the model parameters. An interpretation method, based on type curve matching, is proposed: after selection of the most appropriate model, and identification of the successive flow regimes, the double porosity behavior of the reservoir is characterized and pertinent parameters are evaluated: flow conductivity kh, interporosity flow parameter λ, and storativity (φVc_t)h. Actual field examples are discussed to illustrate the method.
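The two dimensionless double-porosity parameters have their standard Warren-Root definitions (reproduced here for the reader; notation assumed):

$$ \lambda = \alpha\,\frac{k_m}{k_f}\,r_w^2, \qquad \omega = \frac{(\phi V c_t)_f}{(\phi V c_t)_f + (\phi V c_t)_m}, $$

where $\alpha$ is a matrix-block shape factor, $k_m$ and $k_f$ are the matrix and fissure permeabilities, $r_w$ is the wellbore radius, and $\omega$ measures the fraction of total storativity held in the fissures.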

Journal ArticleDOI
TL;DR: It is shown that the Boyer-Moore algorithm is extremely efficient in most cases and that, contrary to the impression one might get from the analytical results, the Knuth-Morris-Pratt algorithm is not significantly better on the average than the straightforward algorithm.
Abstract: Three string matching algorithms—straightforward, Knuth-Morris-Pratt and Boyer-Moore—are examined and their time complexities discussed. A comparison of their actual average behaviour is made, based on empirical data presented. It is shown that the Boyer-Moore algorithm is extremely efficient in most cases and that, contrary to the impression one might get from the analytical results, the Knuth-Morris-Pratt algorithm is not significantly better on the average than the straightforward algorithm.
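For readers who want to experiment with the comparison, below is a minimal Python sketch of the straightforward scan next to Boyer-Moore-Horspool, a simplified Boyer-Moore variant that keeps only the bad-character shift (an illustration, not the paper's test programs).

```python
def naive_search(text, pat):
    """Straightforward left-to-right scan: O(n*m) in the worst case."""
    n, m = len(text), len(pat)
    for i in range(n - m + 1):
        if text[i:i + m] == pat:
            return i
    return -1

def horspool_search(text, pat):
    """Boyer-Moore-Horspool: compare the pattern right to left, then
    skip ahead by the shift-table entry for the text character aligned
    with the pattern's last position."""
    n, m = len(text), len(pat)
    if m == 0:
        return 0
    # Default shift is the full pattern length; characters occurring in
    # pat (except its final position) get smaller shifts.
    shift = {c: m - k - 1 for k, c in enumerate(pat[:-1])}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pat[j]:
            j -= 1
        if j < 0:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

# Usage: both return the index of the first occurrence, or -1.
print(naive_search("here is a simple example", "example"),
      horspool_search("here is a simple example", "example"))
```

The shift table lets the search advance by up to the full pattern length on a mismatch, which is the source of Boyer-Moore's sublinear average behaviour.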

Proceedings ArticleDOI
John W. Minear
TL;DR: In this paper, two-phase porous media models are used to estimate the effects of shale and clay on clean sedimentary formations and a particular model is used to calculate the effect of laminated and structural shale and dispersed clay on compressional and shear wave slowness.
Abstract: Acoustic velocity, or porosity derived from the velocity, is often corrected for shale or clay effects. These corrections are largely empirical. Two-phase porous media models are applicable to sedimentary formations and provide a new approach to estimating the effects of shale and clay on clean formations. One particular model is used to calculate the effects of laminated and structural shale and dispersed clay on compressional and shear wave slowness. The calculations suggest that laminated and structural shale have about the same effect. Dispersed clay may have negligible effect. Shale and clay increase shear wave slowness more than compressional wave slowness.


Proceedings ArticleDOI
TL;DR: In this article, a series of pore structures were prepared by controlled sintering of model sediments of three different grain shapes and two grain sizes, and the morphology of each core was found by serial sectioning and recording all the relevant dimensions and connections.
Abstract: Fluid distributions and flow through the pore space of reservoir rock depend not only on pore sizes and shapes but also local pore connectivity. This can be characterized in terms of the network concept of coordination but is more appropriately described by the topological invariant called genus, which is the basic measure of the multiplicity of paths through the pore network. The larger the genus per unit volume, the larger the number of alternative paths available for fluid connection and flow. Genus is the number of distinct holes in a holey structure such as porous rock. Model rocks of a series of pore structures were prepared by controlled sintering of model sediments of three different grain shapes and two grain sizes. The morphology of each core was found by serial sectioning and recording all the relevant dimensions and connections. From features on representative two-dimensional sections the statistics of genus and other relevant properties of the three-dimensional structures were calculated by means of mathematical relations known for random structures. The measured residual saturations of nonwetting liquid in model rocks synthesized from grains of the same size but different shapes correlate remarkably well with genus per unit volume. Pore topology, heretofore unstudied, is as crucial as pore geometry in determining residual nonwetting saturations.
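The genus of a connected network equals its cycle rank, E − V + 1 (more generally E − V + C for C connected components), so the count of independent flow loops is cheap to compute once a pore network has been extracted. A small Python sketch with the union-find bookkeeping written out (illustrative, not the authors' serial-sectioning statistics):

```python
def cycle_rank(num_vertices, edges):
    """First Betti number of a network: E - V + C, where C is the
    number of connected components.  For a pore network this counts
    the independent closed loops, i.e. the genus-like measure of
    alternative flow paths."""
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - num_vertices + components

# Usage: two loops sharing a node -> cycle rank 2.
print(cycle_rank(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]))
```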

Proceedings ArticleDOI
TL;DR: This paper deals with the development of a general algorithm for phase envelope construction, which extends the pressure-temperature diagram construction method of Michelsen to the generation of phase envelopes on pressure-composition, temperature-composition and composition-composition diagrams.
Abstract: This paper deals with the development of a general algorithm for phase envelope construction. It extends the pressure-temperature diagram construction method of Michelsen to the generation of phase envelopes on pressure-composition, temperature-composition and composition-composition diagrams. The bubble point and dew point curves are traced in one pass, and an estimate of the critical point is also given. The loci of mixtures having a constant vapor-liquid split (e.g. 25% vapor mole fraction or volume fraction) can also be generated. The algorithm selects internally the primary variables and the points on the diagram to enhance the convergence. Results are presented for simple mixtures, typical reservoir oil-CO2 mixtures and a gas condensate. Problems associated with a discontinuity in the phase envelope because of a three-phase region are also presented, and it is shown how an analysis of the Gibbs energy surface can help in discarding unrealistic results.


Proceedings ArticleDOI
Dale U. von Rosenberg
TL;DR: In this paper, a method of local mesh refinement was developed in which additional nodes are used only in regions where they are necessary, and the extra mesh lines are not extended across the entire simulator as in the usual current practice.
Abstract: A method of local mesh refinement has been developed in which additional nodes are used only in regions where they are necessary. The extra mesh lines are not extended across the entire simulator as in the usual current practice. Refinement of a two-dimensional element, herein defined as the rectangle formed by mesh lines with nodes at the vertices, consists of bisecting it in each coordinate direction, thus dividing the original element into four smaller, but similar ones, and creating five nodes. The resulting smaller elements may themselves be refined by the same procedure. An element must be refined if two adjacent elements of the same size have been refined or if one smaller adjacent element has been refined. Three configurations of five nodes each which arise in current refinement methods also appear in the new procedure. In addition, a configuration of six nodes also occurs. Finite difference analogs for all configurations were developed. An extensive study of local mesh refinement near a well was made for the repeated five-spot in a homogeneous reservoir with unit mobility ratio. Results of the finite difference solutions were compared with the analytic solution. Generally, it was found there should be at least two consecutive increments of the same size between changes in increment size. When this practice is followed, a local minimum in truncation error occurs at each node where there is a change in increment size. The more accurate solution obtained in the vicinity of the well by use of smaller increments there results in a more accurate solution also in the coarser unrefined mesh.
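A minimal Python sketch of the bisection step under discussion (names are illustrative; the paper's rule that forces refinement of neighbouring elements and its finite difference analogs are omitted):

```python
class Element:
    """A rectangular mesh element; refinement bisects it in each
    coordinate direction into four smaller, similar children."""
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self):
        if self.children:
            return
        half = self.size / 2
        self.children = [Element(self.x + dx, self.y + dy, half)
                         for dx in (0, half) for dy in (0, half)]

def refine_toward(root, px, py, target_size):
    """Refine only the chain of elements containing (px, py) until the
    local element size reaches target_size; the extra mesh lines never
    extend across the whole domain."""
    elem = root
    while elem.size > target_size:
        elem.refine()
        elem = next(c for c in elem.children
                    if c.x <= px < c.x + c.size and c.y <= py < c.y + c.size)
    return elem

# Usage: refine a unit square toward a well at (0.1, 0.1).
root = Element(0.0, 0.0, 1.0)
refine_toward(root, 0.1, 0.1, 1.0 / 16)
```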

Journal ArticleDOI
Narain H. Gehani
TL;DR: A comparison of the two specifications shows that although informal specifications are easier to read, the formal specifications are clearer: they specify operation domains precisely, define the interaction between the operations, and expose the incompleteness of the informal specifications.
Abstract: Formal specifications (algebraic) are given for an informally specified small subsystem of the Change Management Automatic Build System. A comparison of the two specifications shows that although informal specifications are easier to read, the formal specifications are clearer, specify operation domains precisely, define the interaction between the operations, show the incompleteness of the informal specifications and are devoid of implementation details. The formal specifications pointed to the need for a function not in the subsystem whose inclusion would improve the system design. This inclusion is now being considered. However, the use of algebraic specifications requires practice and experience. Although the formal specification of large systems is somewhat impractical at the moment, experience in using formal specifications can lead to better informal specifications.
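For readers unfamiliar with the style, algebraic specifications define abstract data types by equational axioms over their operations; a textbook stack (a generic example, not the paper's build subsystem) looks like:

$$ \mathit{pop}(\mathit{push}(s, x)) = s, \qquad \mathit{top}(\mathit{push}(s, x)) = x, \qquad \mathit{isempty}(\mathit{new}) = \mathit{true}, \qquad \mathit{isempty}(\mathit{push}(s, x)) = \mathit{false}. $$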

Proceedings ArticleDOI
TL;DR: In this paper, the probability of a model is estimated for the models suggested as most likely by the reservoir geology; lithological variables are estimated from response equations for each model and combined in accordance with the probability of each respective model.
Abstract: Well log data has long been used to provide lithological descriptions of complex formations. Historically, most of the approaches used have been restrictive because they assumed fixed, known, and distinct lithologies for specified zones. The approach described in this paper attempts to alleviate this restriction by estimating the ''probability of a model'' for the models suggested as most likely by the reservoir geology. Lithological variables are simultaneously estimated from response equations for each model and combined in accordance with the probability of each respective model. The initial application of this approach has been the estimation of calcite, quartz, and dolomite in the presence of clays, feldspars, anhydrite, or salt. Estimations were made by using natural gamma ray spectra, photoelectric effect, bulk density, and neutron porosity information. For each model, response equations and parameter selections are obtained from the thorium vs potassium crossplot and the apparent matrix density vs apparent volumetric photoelectric cross section crossplot. The thorium and potassium response equations are used to estimate the volumes of clay and feldspar. The apparent matrix density and volumetric cross section response equations can then be corrected for the presence of clay and feldspar. A test ensures that the clay correction lies within the limits for the assumed lithology model. Results are presented for varying lithologies. For one test well, 6,000 feet were processed in a single pass, without zoning and without adjusting more than one parameter pick. The program recognized sand, limestone, dolomite, clay, feldspar, anhydrite, and salt without analyst intervention.
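The combination step amounts to a probability-weighted blend of the per-model answers; a hypothetical Python sketch (names, structure and numbers assumed, not the program's code):

```python
def combine_lithology(model_probs, model_estimates):
    """Probability-weighted blend of per-model mineral-volume estimates.

    model_probs:     {model_name: probability}, summing to 1.
    model_estimates: {model_name: {mineral: volume_fraction}}.
    """
    combined = {}
    for name, p in model_probs.items():
        for mineral, vol in model_estimates[name].items():
            combined[mineral] = combined.get(mineral, 0.0) + p * vol
    return combined

# Usage with made-up numbers: two candidate lithology models.
print(combine_lithology(
    {"sand-clay": 0.7, "lime-dolomite": 0.3},
    {"sand-clay": {"quartz": 0.6, "clay": 0.4},
     "lime-dolomite": {"calcite": 0.8, "dolomite": 0.2}}))
```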

Proceedings ArticleDOI
TL;DR: Rio Blanco Oil Shale Company (RBOSC) has designed, constructed and processed two Modified In Situ (MIS) retorts at Tract C-a using newly developed techniques for rubbling, ignition and operation through surface drill holes as mentioned in this paper.
Abstract: Rio Blanco Oil Shale Company (RBOSC) has designed, constructed and processed two Modified In Situ (MIS) retorts at Tract C-a using newly developed techniques for rubbling, ignition and operation through surface drill holes. Retort Zero was ignited in October 1980; nominal dimensions were 30' x 30' x 165' (9 x 9 x 50 meters), and oil yield was 1,876 barrels (298,300 liters). Retort One was ignited in June 1981; nominal dimensions were 60' x 60' x 400' (18 x 18 x 122 meters), and oil yield was 24,444 barrels (3,886,000 liters). RBOSC's MIS method proved very successful; ignition was achieved in less than 36 hours, average front advance rates were 2.7 and 3.0 ft/day (0.8 and 0.9 meter/day) and oil yields were 68 percent of Fischer Assay, close to model predictions for both retorts. This paper describes conceptually the MIS method developed by RBOSC and gives the process results. Details of how the results were achieved are proprietary.

Journal ArticleDOI
TL;DR: A semi-incremental algorithm that reduces the delays and is, moreover, more efficient than the mark-scan algorithm from which it is derived; it was used as part of a Lisp implementation on an LSI-11/03 and found to behave well.
Abstract: Lisp is restricted in application because of the unpredictable delays introduced by garbage collection. Incremental garbage collectors eliminate the delays, but are less efficient overall. We present a semi-incremental algorithm that reduces the delays and is, moreover, more efficient than the mark-scan algorithm from which it is derived. The mark-scan algorithm is explained: it consists of a mark phase followed by a sweep phase. The sweep phase can be performed incrementally, but the mark phase cannot. If this modification is made, our semi-incremental algorithm is derived. Using the new algorithm the delay on garbage collection is proportional to the amount of heap actually in use, not the size of the heap. Allocating a cell takes a variable amount of time, depending on the proportion of the heap in use. Comparing the number of operations in the old and new algorithms, we see that the new algorithm is more efficient. The new algorithm was used as part of a Lisp implementation on an LSI-11/03 and found to behave well.
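A toy Python sketch of the semi-incremental idea (illustrative only; the paper's implementation manages Lisp cells on an LSI-11/03): the mark phase still runs to completion, but the sweep is performed lazily by the allocator, one cell per allocation, so the pause is proportional to the live data rather than to the heap size.

```python
class Cell:
    """A heap cell; 'refs' are the cells it points to."""
    def __init__(self):
        self.marked = False
        self.free = True
        self.refs = []

class Heap:
    """Mark runs to completion (the only pause); reclamation is
    deferred to the allocator, which sweeps one cell at a time."""
    def __init__(self, size):
        self.cells = [Cell() for _ in range(size)]
        self.sweep_pos = 0            # where the lazy sweep will resume

    def mark(self, roots):
        stack = list(roots)
        while stack:
            c = stack.pop()
            if not c.marked and not c.free:
                c.marked = True
                stack.extend(c.refs)

    def _lazy_sweep(self):
        while self.sweep_pos < len(self.cells):
            c = self.cells[self.sweep_pos]
            self.sweep_pos += 1
            if c.free or not c.marked:
                c.__init__()          # reclaim garbage (or reuse a free cell)
                return c
            c.marked = False          # live: consume the mark for next cycle
        return None

    def allocate(self, roots):
        c = self._lazy_sweep()
        if c is None:                 # sweep exhausted: mark, then resume
            self.mark(roots)
            self.sweep_pos = 0
            c = self._lazy_sweep()
            if c is None:
                raise MemoryError("heap is entirely live")
        c.free = False
        return c

# Usage: the roots list drives marking whenever the sweep runs dry.
heap = Heap(4)
roots = [heap.allocate([])]
for _ in range(6):                    # churn: garbage is reclaimed lazily
    heap.allocate(roots)
```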

Proceedings ArticleDOI
TL;DR: In this paper, a new practical method has been developed to estimate subsurface fracture orientation based on an analysis of solid earth tidal strains, which is a passive method which has no depth limitation.
Abstract: A new practical method has been developed to estimate subsurface fracture orientation based on an analysis of solid earth tidal strains. The tidal strain fracture orientation technique is a passive method which has no depth limitation. The orientation of either natural or hydraulically stimulated fractures can be measured using either new or old static observation wells. Estimates for total compressibility and areal interconnected porosity can also be developed for reservoirs with matrix permeability using a combination of tidal and barometric strain analysis. Data acquisition requirements are modest and no surface instrumentation beyond pressure monitoring equipment is required. The tidal method has been demonstrated successfully at a naturally fractured geothermal field in S.E. Idaho and at an oil field in W. Canada. Both case studies are described.

Proceedings ArticleDOI
TL;DR: This paper describes the development of a kinetic model of the in-situ combustion process from Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry data; combustion tube results were in good agreement with the values predicted by the model.
Abstract: This paper describes the development of a kinetic model of the in-situ combustion process from data obtained from Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry. A crude oil from Iola County, Kansas, was studied. This oil has a gravity of 19.8° API and a viscosity of 222 cp at 38°C and 89 cp at 54°C. Data from TGA experiments on Iola crude heated in the presence of air were obtained. Analysis of combustion gases from the TGA was done by gas chromatography to determine the nature of the reactions occurring during the TGA runs. Main reactions/transitions identified were distillation, low temperature oxidation, cracking and combustion. A kinetic model of the TGA process was developed from thermograms. The model is based on the weight of crude oil components which undergo main reactions/transitions. The kinetic model was used to predict the percentage of the crude oil deposited as fuel in the combustion process as well as the time for consumption of the fuel within the combustion zone. By estimating the length of the combustion zone from an energy balance across the region where combustion reactions take place in a combustion tube, it was possible to predict the rate at which fuel was consumed during the process. A combustion tube run was made with silica sand containing 8.0% by weight Iola crude oil in order to test the prediction by the kinetic model derived from thermal analysis techniques. Results of the combustion tube run were in good agreement with predicted values of this work and another run from the literature which was performed on the same crude oil.
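Kinetic models of this kind typically fit each identified reaction/transition with an Arrhenius rate law for the weight of the reacting component (a generic form shown here, not the paper's fitted parameters):

$$ \frac{dW}{dt} = -A\,e^{-E/RT}\,W^n, $$

where $W$ is the weight of the component, $A$ the pre-exponential factor, $E$ the activation energy, $R$ the gas constant, $T$ the absolute temperature and $n$ the reaction order.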

Journal ArticleDOI
TL;DR: Pascal programs, written by graduate students and faculty members, have been statically analysed to provide a better understanding of how the language is ‘really’ used.
Abstract: More than 120,000 lines of Pascal programs, written by graduate students and faculty members, have been statically analysed to provide a better understanding of how the language is ‘really’ used. The analysis was done within twelve distinct contexts to discover differences in usage patterns among the various contexts. For example, it was found that 47 per cent of the operands in argument lists were constants. The results are displayed as tables of frequency counts which show how often each construct is used within a context. Also, we have compared our findings to the results from studies of other languages, such as FORTRAN, SAL and XPL.


Journal ArticleDOI
TL;DR: A hashing function which minimizes the number of collisions in a signature is described, and experimental results for two signature lengths with both a text file and a program file are given.
Abstract: This paper studies the use of text signatures in string searching. Text signatures are a coded representation of a unit of text formed by hashing substrings into bit positions which are, in turn, set to one. Then instead of searching an entire line of text exhaustively, the text signature may be examined first to determine if complete processing is warranted. A hashing function which minimizes the number of collisions in a signature is described. Experimental results for two signature lengths with both a text file and a program file are given. Analyses of the results and the utility and application of the method conclude the discussion.
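A minimal Python sketch of the idea (the hash function and parameters here are illustrative, not the paper's collision-minimizing design): every overlapping trigram of a line is hashed to a bit position, and a line need only be searched in full if its signature covers the pattern's.

```python
def signature(text, bits=64):
    """Hash every overlapping 3-character substring into a bit position.
    (Python's str hash is salted per process, so signatures must be
    built and used within the same run.)"""
    sig = 0
    for i in range(len(text) - 2):
        sig |= 1 << (hash(text[i:i + 3]) % bits)
    return sig

def may_contain(line_sig, pat_sig):
    """Signatures give no false negatives: if any pattern bit is
    missing from the line's signature, the pattern cannot occur."""
    return line_sig & pat_sig == pat_sig

# Usage: precompute line signatures once, then screen lines cheaply.
lines = ["the quick brown fox", "lorem ipsum dolor sit amet"]
sigs = [signature(ln) for ln in lines]
pat_sig = signature("brown")
for ln, s in zip(lines, sigs):
    if may_contain(s, pat_sig):
        print("brown" in ln, repr(ln))   # full search only on survivors
```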

Journal ArticleDOI
TL;DR: The scheme discussed here combines the two techniques, allowing comparison trees whose entries are branch tables; the combination adapts well to certain instances of case statements.
Abstract: Pascal case statements can be compiled using a variety of methods, including comparison trees and branch tables. The scheme discussed here combines the two techniques to allow comparison trees with entries that are branch tables. The use of a combination of the two techniques is shown to adapt well to certain instances of case statements. Extensions to the standard case statement also require such a scheme to obtain an efficient implementation.
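The clustering step can be made concrete with a small sketch: a greedy pass that groups sorted case labels into ranges dense enough to justify a branch (jump) table, leaving the sparse gaps to the comparison tree (illustrative Python; the density threshold and the greedy strategy are assumptions, not the paper's heuristic).

```python
def cluster_labels(labels, density=0.5):
    """Greedily group sorted case labels into clusters dense enough to
    justify a branch table; a sparse gap starts a new cluster, which
    the compiler would reach through the comparison tree above."""
    if not labels:
        return []
    labels = sorted(labels)
    clusters, current = [], [labels[0]]
    for v in labels[1:]:
        # Density if v were added: labels held / integer range spanned.
        if (len(current) + 1) / (v - current[0] + 1) >= density:
            current.append(v)
        else:
            clusters.append(current)
            current = [v]
    clusters.append(current)
    return clusters

# Usage: 1..4 form one branch table; 100 and 1000 are handled by
# comparisons in the tree above the tables.
print(cluster_labels([1, 2, 3, 4, 100, 1000]))
```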