
Showing papers in "The Computer Journal in 1982"



Journal ArticleDOI
TL;DR: The belief that the tester is routinely able to determine whether or not the test output is correct is the oracle assumption.
Abstract: It is widely accepted that the fundamental limitation of using program testing techniques to determine the correctness of a program is the inability to extrapolate from the correctness of results for a proper subset of the input domain to the program's correctness for all elements of the domain. In particular, for any proper subset of the domain there are infinitely many programs which produce the correct output on those elements, but produce an incorrect output for some other domain element. None the less we routinely test programs to increase our confidence in their correctness, and a great deal of research is currently being devoted to improving the effectiveness of program testing. These efforts fall into three primary categories: (1) the development of a sound theoretical basis for testing; (2) devising and improving testing methodologies, particularly mechanizable ones; (3) the definition of accurate measures of and criteria for test data adequacy. Almost all of the research on software testing therefore focuses on the development and analysis of input data. In particular there is an underlying assumption that once this phase is complete, the remaining tasks are straightforward. These consist of running the program on the selected data, producing output which is then examined to determine the program's correctness on the test data. The mechanism which checks this correctness is known as an oracle, and the belief that the tester is routinely able to determine whether or not the test output is correct is the oracle assumption.

582 citations


Journal ArticleDOI
TL;DR: Approximation algorithms are given where the solutions are achieved with heuristic search methods and test results are presented to support the feasibility of the methods.
Abstract: The following two-dimensional bin packing problems are considered. (1) Find a way to pack an arbitrary collection of rectangular pieces into an open-ended, rectangular bin so as to minimize the height to which the pieces fill the bin. (2) Given rectangular sheets, i.e. bins of fixed width and height, allocate the rectangular pieces, the object being to obtain an arrangement that minimizes the number of sheets needed. Approximation algorithms are given where the solutions are achieved with heuristic search methods. Test results are presented to support the feasibility of the methods.

132 citations
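The heuristics themselves are not given in the abstract. As a point of reference only, the sketch below implements the standard first-fit decreasing-height (FFDH) level heuristic for problem (1), packing rectangles into an open-ended bin of fixed width; it is a minimal illustration of the problem setting, not the authors' heuristic-search method.

```python
# Illustrative first-fit decreasing-height (FFDH) level heuristic for strip
# packing (problem 1). A standard textbook heuristic, not the paper's method.

def ffdh_pack(pieces, bin_width):
    """pieces: list of (width, height); returns (total_height, placements)."""
    # Sort by decreasing height, then place each piece on the first level
    # (shelf) with enough remaining width; open a new level if none fits.
    pieces = sorted(pieces, key=lambda wh: wh[1], reverse=True)
    levels = []         # each level: [y_offset, level_height, used_width]
    placements = []     # (x, y, width, height) for each placed piece
    for w, h in pieces:
        if w > bin_width:
            raise ValueError("piece wider than the bin")
        for level in levels:
            if level[2] + w <= bin_width:
                placements.append((level[2], level[0], w, h))
                level[2] += w
                break
        else:
            y = levels[-1][0] + levels[-1][1] if levels else 0
            levels.append([y, h, w])
            placements.append((0, y, w, h))
    total_height = levels[-1][0] + levels[-1][1] if levels else 0
    return total_height, placements

height, layout = ffdh_pack([(4, 3), (3, 3), (2, 2), (5, 1)], bin_width=6)
print(height, layout)
```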


Journal ArticleDOI

110 citations


Journal ArticleDOI

107 citations


Journal ArticleDOI
TL;DR: Six major approaches to systems analysis are identified and General Systems Theory is included as an approach because of its important influence on systems thinking in general and because of the contribution it has made to almost all the other identified approaches.
Abstract: The discipline of systems analysis is still very young and in common with most other emerging disciplines it occasionally enters periods of radical self examination and re-thinking. The authors feel that we are in the midst of such a phase at present; new ideas abound, arguments rage, and the development of technology is a powerful impetus to the re-examination of ideas. The reason for the current turmoil in systems analysis is the emergence over the past few years of a number of new approaches or methodologies. These approaches have generally originated as academic ideas and been taken up and modified in the practising world of systems analysis. Thus there exists a confusing array of approaches. It is the purpose of this paper to examine some of the more fundamental approaches and to attempt to classify them. It is the authors' view that the approaches are not simple alternatives, but that they seek to do different things. The authors have identified six major approaches to systems analysis: (i) General Systems Theory Approach; (ii) Human Activity Systems Approach; (iii) Participative (Socio technical) Approach; (iv) Traditional (NCC, etc.) Approach; (v) Data Analysis Approach; (vi) Structured Systems (Functional) Approach. Except for the General Systems Theory Approach they are all used to some extent in the industry today. General Systems Theory is included as an approach because of its important influence on systems thinking in general and because of the contribution it has made to almost all the other identified approaches.

106 citations


Journal ArticleDOI
TL;DR: It is shown that structured schemas always require an increase in space-time requirements, and it is suggested that this increase can be used as a complexity measure for the original schema.
Abstract: A method is presented for converting unstructured program schemas to strictly equivalent structured form. The predicates of the original schema are left intact with structuring being achieved by the duplication of the original decision nodes without the introduction of compound predicate expressions, or, where possible, by function duplication alone. It is shown that structured schemas must have at least as many decision nodes as the original unstructured schema, and must have more when the original schema contains branches out of alternation constructs. The structuring method allows the complete avoidance of function duplication, but only at the expense of decision node duplication. It is shown that structured schemas always require an increase in space-time requirements, and it is suggested that this increase can be used as a complexity measure for the original schema.

66 citations
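The schema transformations cannot be reconstructed from the abstract alone. The toy sketch below only illustrates the kind of trade-off described: an exit from the middle of a construct is removed by duplicating a node, so that the two versions stay strictly equivalent at the cost of extra space. It is a textbook example, not the paper's structuring method.

```python
# Hypothetical illustration of duplication-based restructuring (not the paper's
# algorithm). The unstructured version exits from the middle of the loop; the
# structured version removes the mid-loop exit by duplicating the 'produce'
# node. Both apply the same operations in the same order.

def unstructured(produce, done, consume, state):
    while True:
        state = produce(state)
        if done(state):            # exit from the middle of the loop body
            return state
        state = consume(state)

def structured(produce, done, consume, state):
    state = produce(state)         # duplicated node
    while not done(state):
        state = consume(state)
        state = produce(state)
    return state

produce = lambda s: s + [len(s)]
done = lambda s: len(s) > 4
consume = lambda s: s + ["c"]
assert unstructured(produce, done, consume, []) == structured(produce, done, consume, [])
```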



Journal ArticleDOI

48 citations


Journal ArticleDOI
TL;DR: Any database, no matter how complex, can be represented as a set of binary-relationships, and a structure which can store such binary-relationships is logically sufficient as the storage mechanism for a general purpose database management system.
Abstract: Any database, no matter how complex, can be represented as a set of binary-relationships. Consequently, a structure which can store such binary-relationships is logically sufficient as the storage mechanism for a general purpose database management system. For certain applications, the advantages of using such a structure would appear to outweigh the disadvantages. Surprisingly, however, very few systems have been built which use a binary-relational storage structure. The main reason would appear to be the difficulty of implementing such structures efficiently.

46 citations
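A minimal sketch of the idea, assuming a hypothetical store indexed on (relation, subject) and (relation, object) pairs; it does not reproduce the design discussed in the paper.

```python
# Minimal sketch of a binary-relational storage structure (illustrative only).
# Every fact is a binary relationship relation(subject, object); more complex
# records decompose into several such facts.
from collections import defaultdict

class BinaryRelationalStore:
    def __init__(self):
        self.forward = defaultdict(set)   # (relation, subject) -> {objects}
        self.backward = defaultdict(set)  # (relation, object)  -> {subjects}

    def add(self, subject, relation, obj):
        self.forward[(relation, subject)].add(obj)
        self.backward[(relation, obj)].add(subject)

    def objects(self, subject, relation):
        return self.forward[(relation, subject)]

    def subjects(self, relation, obj):
        return self.backward[(relation, obj)]

db = BinaryRelationalStore()
# A hypothetical 'employee' record flattened into binary relationships:
db.add("emp42", "has-name", "J. Smith")
db.add("emp42", "works-in", "dept7")
db.add("dept7", "has-name", "Accounts")
print(db.objects("emp42", "works-in"))     # {'dept7'}
print(db.subjects("works-in", "dept7"))    # {'emp42'}
```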



Journal ArticleDOI
Jack E. Bresenham
TL;DR: This paper describes a simple algorithm to incorporate both run length and repeated pattern encoding for step sequence compaction, and the similarity in form of the repetitive loop used to generate either runs or single steps and either full lines or periodic patterns.
Abstract: Raster devices, such as digital plotters, CRT or plasma panel displays, and matrix or ink jet printers, represent 'straight' lines in quantized fashion as a sequence of unit axial and unit diagonal steps. Dichotomous run lengths and periodic repetitive patterns in these incremental or digital lines provide a basis by which the step sequences for quantized lines can be treated in compressed form for storage or transmission. Earnshaw recently published a paper describing an investigation of two compaction alternatives, encoding either run lengths or repeated patterns. This paper describes a simple algorithm to incorporate both run length and repeated pattern encoding for step sequence compaction. Also illustrated is the similarity in form of the repetitive loop used to generate either runs or single steps and either full lines or periodic patterns; initial parameter values differ, but the subsequent iterative process is identical.
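For reference, a minimal first-octant step generator using the standard Bresenham recurrence, followed by a plain run-length encoding of the resulting step string. This shows the raw material being compacted; it is not the paper's combined run/pattern algorithm.

```python
# Illustrative sketch: generate the axial ('a') / diagonal ('d') step sequence
# of a first-octant line with the standard Bresenham recurrence, then
# run-length encode the step string.

def bresenham_steps(u, v):
    """Step sequence for a line from (0,0) to (u,v), with 0 <= v <= u."""
    steps = []
    err = 2 * v - u                 # Bresenham decision variable
    for _ in range(u):
        if err > 0:
            steps.append('d')       # diagonal step: x and y advance together
            err += 2 * (v - u)
        else:
            steps.append('a')       # axial (square) step: x only
            err += 2 * v
    return ''.join(steps)

def run_length_encode(s):
    runs, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        runs.append((s[i], j - i))
        i = j
    return runs

seq = bresenham_steps(10, 3)
print(seq)                     # adaaadaada
print(run_length_encode(seq))  # [('a', 1), ('d', 1), ('a', 3), ('d', 1), ('a', 2), ('d', 1), ('a', 1)]
```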

Journal ArticleDOI
TL;DR: In this paper, parallel algorithms suitable for the iterative solution of large sets of linear equations are developed based on the well-known Gauss-Seidel and SOR methods in both synchronous and asynchronous forms.
Abstract: In this paper, parallel algorithms suitable for the iterative solution of large sets of linear equations are developed. The algorithms based on the well-known Gauss-Seidel and SOR methods are presented in both synchronous and asynchronous forms. Results obtained using the M.I.M.D. computer at Loughborough University are given, for the model problem of the solution of the Laplace equation within the unit square.
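A serial SOR sweep for the Laplace model problem on the unit square is sketched below as the baseline from which the parallel variants start; the boundary values and relaxation factor are illustrative choices, and the MIMD decomposition is not shown.

```python
# Sequential SOR iteration for the Laplace equation on the unit square (serial
# counterpart of the parallel variants discussed; omega = 1 gives Gauss-Seidel).
import numpy as np

def sor_laplace(n, omega=1.5, tol=1e-6, max_sweeps=10_000):
    """Relax u_xx + u_yy = 0 on an (n+2) x (n+2) grid with fixed boundary values."""
    u = np.zeros((n + 2, n + 2))
    u[0, :] = 1.0                      # illustrative Dirichlet boundary condition
    for sweep in range(max_sweeps):
        max_change = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
                change = omega * (new - u[i, j])
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return u, sweep + 1
    return u, max_sweeps

u, sweeps = sor_laplace(20)
print(f"converged in {sweeps} sweeps; centre value {u[10, 10]:.4f}")
```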

Journal ArticleDOI
TL;DR: Two new dynamic hashing schemes for primary key retrieval are studied, related to those of Scholl, Litwin and Larson, and have certain performance advantages over earlier schemes.
Abstract: In this paper, we study two new dynamic hashing schemes for primary key retrieval. The schemes are related to those of Scholl, Litwin and Larson. The first scheme is simple and elegant and has certain performance advantages over earlier schemes. We give a detailed mathematical analysis of this scheme and also present simulation results. The second scheme is essentially that of Larson. However, we have made a number of changes which simplify his scheme.
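The two new schemes cannot be reconstructed from the abstract. The sketch below implements plain Litwin-style linear hashing (split pointer, level, round-robin bucket splits) to show the mechanics this family of schemes shares; the bucket capacity and the split-on-overflow policy are assumptions, and overflow chaining is omitted.

```python
# Sketch of a Litwin/Larson-style linear hashing file (illustrative of the
# family of dynamic hashing schemes discussed, not the paper's two new schemes).
class LinearHashFile:
    def __init__(self, initial_buckets=4, bucket_capacity=4):
        self.n0 = initial_buckets        # buckets at level 0
        self.capacity = bucket_capacity
        self.level = 0                   # number of completed doubling rounds
        self.split = 0                   # next bucket to be split
        self.buckets = [[] for _ in range(initial_buckets)]

    def _address(self, key):
        h = hash(key)
        addr = h % (self.n0 * 2 ** self.level)
        if addr < self.split:            # bucket already split this round
            addr = h % (self.n0 * 2 ** (self.level + 1))
        return addr

    def insert(self, key):
        bucket = self.buckets[self._address(key)]
        bucket.append(key)
        if len(bucket) > self.capacity:  # uncontrolled split; no overflow chains here
            self._split_next()

    def _split_next(self):
        # Split the bucket at the split pointer (not necessarily the full one).
        self.buckets.append([])
        old = self.buckets[self.split]
        self.buckets[self.split] = []
        self.split += 1
        if self.split == self.n0 * 2 ** self.level:
            self.level += 1
            self.split = 0
        for key in old:                  # redistribute with the new address map
            self.buckets[self._address(key)].append(key)

    def lookup(self, key):
        return key in self.buckets[self._address(key)]

f = LinearHashFile()
for k in range(100):
    f.insert(k)
print(all(f.lookup(k) for k in range(100)), f.lookup(1234))   # True False
```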

Journal ArticleDOI
TL;DR: The problem of random sampling occurs in many different contexts. Goodman and Hedetniemi give and analyse four sampling algorithms for this case; the most effective, called SELECT, needs O(m) running time for actual sampling, O(n) running time for preprocessing and O(n) storage space.
Abstract: The problem of random sampling occurs in many different contexts. For example, we may wish to study experimentally the behaviour of a new data structure for searching. Then the easiest way is to generate a set of data elements, construct the corresponding structure and then perform searching of some elements belonging (or not belonging) to the data structure. A usual approach is to use a set of data elements where the elements are randomly sampled from a universal set. If we allow multiple occurrences of elements (sampling with replacement) we have no difficulties: we repeat m times a step where each of the possible n elements is chosen with equal probability 1/n (m being the size of the final multiset sample). If we demand in contrast to the above that each element occurs at most once in the sample (random sampling without replacement) the sampling may be very time- and space-consuming. Goodman and Hedetniemi give and analyse four sampling algorithms for this case (Ref. 1). The most effective of those, called SELECT, needs an O(m) running time for actual sampling, O(n) running time for preprocessing and O(n) storage space. The significance of an algorithm of this kind becomes evident if we recall a result of Ref. 1: if we sample elements with replacement and accept only those which have not yet been selected, then for finding m different elements we have on average to sample n ∑_{k=n-m+1}^{n} 1/k elements. The value of this expression may be very large in comparison to m. In addition, the decision whether or not a new element is really accepted demands sorting or O(n) storage. In the following we give an algorithm using the basic idea of SELECT but consuming only O(m) storage. The running time of the algorithm is on average proportional to m and in the worst case proportional to m². In addition, a version demanding O(m log m) time, both on average and in the worst case, is pointed out.
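A hedged sketch of the idea: a dictionary records only the displaced entries of a virtual Fisher-Yates shuffle, giving O(m) expected time and O(m) storage. This is a well-known variant in the spirit of SELECT, not the authors' algorithm as published.

```python
# Sampling m of n elements without replacement in O(m) expected time and O(m)
# space, by keeping a sparse view of a virtually shuffled array 0..n-1.
import random

def sample_without_replacement(n, m):
    """Return m distinct values drawn uniformly from 0..n-1."""
    moved = {}                      # positions whose virtual contents have changed
    sample = []
    for i in range(m):
        j = random.randrange(i, n)
        # swap positions i and j of the virtual array, then take position i
        vi = moved.get(i, i)
        vj = moved.get(j, j)
        moved[i], moved[j] = vj, vi
        sample.append(vj)
    return sample

print(sample_without_replacement(10**9, 5))
```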

Journal ArticleDOI
TL;DR: By making the moving direction of each disc explicit in the representation, a bit-string so constructed can be used to drive the Tower of Hanoi algorithm.
Abstract: By making the moving direction of each disc explicit in the representation, a bit-string so constructed can be used to drive the Tower of Hanoi algorithm. The behaviour of disc moves is further analysed based on the bit-string representation. It has been shown that the bit-string for moving n discs can be used to generate successively the Gray codes of n bits.
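The standard form of the connection referred to above (not necessarily the paper's exact bit-string encoding): the disc moved at step k of the minimal solution is given by the lowest set bit of k, and the same bit is the one that flips between successive reflected Gray codes.

```python
# Sketch of the Hanoi/Gray-code connection (standard folklore form, assumed
# here for illustration): disc moved at step k = index of the lowest set bit
# of k = bit that changes between Gray(k-1) and Gray(k).

def lowest_set_bit(k):
    return (k & -k).bit_length() - 1

def gray(k):
    return k ^ (k >> 1)

n = 3
for k in range(1, 2 ** n):
    disc = lowest_set_bit(k)                         # 0 = smallest disc
    flipped = lowest_set_bit(gray(k) ^ gray(k - 1))  # bit changed in the Gray code
    assert disc == flipped
    print(f"move {k}: disc {disc}, Gray code {gray(k):0{n}b}")
```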

Journal ArticleDOI
TL;DR: The Birmingham and Loughborough Electronic Network Development (BLEND) project as mentioned in this paper was a three-year experimental project to explore and evaluate alternative forms of user communication through an electronic journal and information network.
Abstract: This article describes a three-year experimental programme organised jointly by the two Universities as the Birmingham and Loughborough Electronic Network Development (BLEND). The aims are to assess the cost, efficiency, and subjective impact of such a system, and to explore and evaluate alternative forms of user communication through an electronic journal and information network. Using a host computer at Birmingham University, a community of initially about 50 scientists (the Loughborough Information Network Community—LINC) will be connected through the public telephone network to explore various types of electronic journal. The concept of the electronic journal involves using a computer to aid the normal procedures whereby an article is written, refereed, accepted, and “published.” The subject of this experimental programme will be “Computer Human Factors.” Each member will contribute at least one research article and one shorter note in each year of the project, and will also use other forms of communication such as newsletter, annotated abstracts, workshop conferences, cooperative authorship, etc. Throughout the project relevant data will be gathered to enable the assessment of system and user performance, cost, usefulness, and acceptability.

Journal ArticleDOI
TL;DR: The purpose of this note is to draw attention to an automatic short cut which can be used with Bresenham's algorithm, which operates as Euclid's algorithm to give a short cut through a continued fraction expansion of the gradient of the intended line.
Abstract: Bresenham's algorithm is well known as an efficient and elegant control program for drawing a best fit approximation to straight lines of arbitrary gradient with incremental plotters. More recently attention has been given to run line coding, in which a line of gradient 1/3, for example, is generated through the sequence square move (x-step), diagonal move (step x and y together), square move, repeated as many times as necessary without repeating the tests and additions every time as implied by the basic algorithm. The purpose of this note is to draw attention to an automatic short cut which can be used with Bresenham's algorithm as shown in Fig. 1. The short cut instructions do not affect the output string, but they operate as Euclid's algorithm to give a short cut through a continued fraction expansion of the gradient of the intended line. The initial conditions for the algorithm of Fig. 1 are set up as for Bresenham's algorithm itself. If, for example, we wish to draw a straight line from some origin (0,0) to the point (u, v) in the 'first octant', i.e. 0 < v < u, with a conventional incremental plotter, we start with b = 2v,
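The short cut rests on the continued fraction expansion of the gradient produced by Euclid's algorithm. The fragment below computes that expansion for an arbitrary example gradient; it is an illustration only, not the plotter control program.

```python
# Continued fraction expansion of the gradient v/u via Euclid's algorithm,
# the structure the note's short cut exploits.

def continued_fraction(v, u):
    """Partial quotients of v/u obtained from repeated division."""
    quotients = []
    while u:
        quotients.append(v // u)
        v, u = u, v % u
    return quotients

# Example gradient 3/10 in the first octant (0 < v < u):
print(continued_fraction(3, 10))   # [0, 3, 3], i.e. 3/10 = 0 + 1/(3 + 1/3)
```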

Journal ArticleDOI
TL;DR: The terminology associated with cluster analysis; the fields of study in which the techniques have been practised; the published books; the journals containing the most significant papers; the broad nature of the published papers and a list of the computer packages available for performing a cluster analysis are considered.
Abstract: Classification and clustering has become an increasingly popular method of multivariate analysis over the past two decades, and with it has come a vast amount of published material. Since there is no journal devoted exclusively to cluster analysis as a general topic and since it has been used in many fields of study, the novice user is faced with the daunting prospect of searching through a multitude of journals for appropriate references. In order to organize this diverse and voluminous material the following points will be considered: the terminology associated with cluster analysis; the fields of study in which the techniques have been practised; the published books; the journals containing the most significant papers; the broad nature of the published papers and a list of the computer packages available for performing a cluster analysis. An exhaustive list of books is given but a comprehensive list of papers is not included, mainly because these are provided by the books cited and by Cormack's 1971 review. Finally, a few thoughts on the state of the literature of cluster analysis are given.

Journal ArticleDOI
TL;DR: The main objective of the paper is to prove the applicability of computer science to the early stages of information systems development projects, where the advantages and the impact of such tools matters most.
Abstract: This paper presents a set of tools for analysis, design and implementation of information systems. The tools presented have all been proven practically during several major international information systems development projects. The tools include a set of meta languages or documentation conventions which have been designed for maximal ease of use and understanding. The documentation notations are related to a common reference frame and to the usage of a systems encyclopaedia. The main emphasis of the paper is an attempt to illustrate the application of finite state machine theory to the description of entity lifecycles in an information system environment. A method is provided for extending the finite state machine model from the systems analysis stage through systems design to the final implementation of an event driven information system. The main objective of the paper is to prove the applicability of computer science to the early stages of information systems development projects, where the advantages and the impact of such tools matter most. It is hoped that this will aid the process of eliminating the 'l'art pour l'art' which is too often attached to computer science.
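A minimal sketch of the modelling style, using a hypothetical order lifecycle; the entity, states, events and transition table are invented for illustration and are not taken from the paper.

```python
# Minimal finite state machine for a hypothetical entity lifecycle.
LIFECYCLE = {
    ("raised",   "approve"): "approved",
    ("approved", "ship"):    "shipped",
    ("shipped",  "invoice"): "invoiced",
    ("invoiced", "pay"):     "closed",
    ("raised",   "cancel"):  "cancelled",
    ("approved", "cancel"):  "cancelled",
}

def drive(state, events):
    """Apply a sequence of events, rejecting any event illegal in the current state."""
    for event in events:
        key = (state, event)
        if key not in LIFECYCLE:
            raise ValueError(f"event '{event}' not allowed in state '{state}'")
        state = LIFECYCLE[key]
    return state

print(drive("raised", ["approve", "ship", "invoice", "pay"]))   # closed
```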


Journal ArticleDOI
TL;DR: An efficient algorithm for generating a sequence of all the permutations P(i) on N symbols using parallel processors, all of which perform identical operations, is presented.
Abstract: An efficient algorithm for generating a sequence of all the permutations P(i) on N symbols using parallel processors, all of which perform identical operations, is presented (0 ≤ i < N!); P(i) is given explicitly.
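The paper's explicit formula is not quoted in the abstract. The sketch below uses the standard factorial-number-system construction, which likewise lets each processor compute its own P(i) directly from i with no communication; it is not necessarily the paper's formula.

```python
# Compute the i-th permutation of N symbols directly from i via the factorial
# number system (standard construction, shown for illustration).
from math import factorial

def permutation_from_index(i, symbols):
    """Return the i-th (0 <= i < N!) permutation of symbols, in lexicographic order."""
    symbols = list(symbols)
    result = []
    for k in range(len(symbols), 0, -1):
        d, i = divmod(i, factorial(k - 1))   # next digit of i in the factorial base
        result.append(symbols.pop(d))
    return result

print(["".join(permutation_from_index(i, "ABC")) for i in range(6)])
# ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA']
```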


Journal ArticleDOI
TL;DR: The difficulties of solving complex synchronizing problems by using standard semaphore primitives lead us to propose that special-purpose synchronization techniques should be supported by a judicious combination of hardware/microcode and software routines.
Abstract: A weakness in the reader priority solution proposed by Courtois, Heymans and Parnas for the problem of synchronizing concurrent readers and writers is described and an improvement is explained. The difficulties of solving complex synchronizing problems by using standard semaphore primitives, as illustrated by this example, lead us to propose that special-purpose synchronization techniques should be supported by a judicious combination of hardware/microcode and software routines. We then describe an efficient solution for the reader/writer problem which is easy to understand, to implement and to use.
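For context, the classic reader-priority scheme of Courtois, Heymans and Parnas, expressed here with Python locks. This is the kind of semaphore-based solution whose difficulties the paper discusses; it is not the paper's improved solution.

```python
# Classic reader-priority readers/writers scheme using two mutexes and a
# reader count (shown for reference only).
import threading

class ReadersWriterLock:
    def __init__(self):
        self._readcount = 0
        self._count_mutex = threading.Lock()   # protects _readcount
        self._resource = threading.Lock()      # held by a writer or the reader group

    def acquire_read(self):
        with self._count_mutex:
            self._readcount += 1
            if self._readcount == 1:
                self._resource.acquire()       # first reader locks out writers

    def release_read(self):
        with self._count_mutex:
            self._readcount -= 1
            if self._readcount == 0:
                self._resource.release()       # last reader lets writers in

    def acquire_write(self):
        self._resource.acquire()

    def release_write(self):
        self._resource.release()
```

Because the resource lock is only released when the last reader leaves, a steady stream of readers can hold writers off indefinitely, which is precisely the reader-priority behaviour (and weakness) at issue.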

Journal ArticleDOI
TL;DR: The components and characteristics of Distributed Decision Making systems are stated and justified, and the possible advantages of using production systems to improve the explanations of decisions are considered.
Abstract: Decision support systems are described, together with previous work on how they support organizational and group tasks. A case study illustrates the need for several linked decision support systems in a manufacturing company and the nature of co-operation and conflict in organizations is discussed. The components and characteristics of Distributed Decision Making systems are then stated and justified. The possible advantages of using production systems to improve the explanations of decisions are considered. Plans for the future development of supportive software are outlined.

Journal ArticleDOI
TL;DR: The first successful learning programs were developed in the 1950s and belonged to a general category which was at that time commonly known as 'hill-climbing', which embraces classical adaptive control, along with many studies of machine learning in games and game-like situations.
Abstract: The first successful learning programs were developed in the 1950s and belonged to a general category which was at that time commonly known as 'hill-climbing'. Global mathematical models of system performance were typically constructed in forms permitting multi-dimensional representation in systems of orthogonal co-ordinate axes. Numerical parameters were then automatically tuned at run time in response to sensed deviations from computed criteria of optimality. This scheme embraces classical adaptive control, along with many studies of machine learning in games and game-like situations. How the problem appeared to AI workers of that epoch can be gleaned from the proceedings of a celebrated symposium held in 1957 under the title: Mechanisation of Thought Processes (HMSO, London). The present paper is the second of a series. The first appeared nearly 20 years ago and proposed a new principle for attaining the above purposes, now known as 'rule-based' learning. The idea was to partition the problem-domain into a mosaic of smaller sub-domains and to associate a separate rule of action with each. In the pre-learning state a rule may be stochastic or even vacuous. In the learning mode, entry into a given subdomain invokes a global procedure for collecting data on the sensed consequences of executing the associated rule and for up-dating the rule's content in the light of these. A worked toy example was supplied in the form of a computer simulation of a machine for learning to play Noughts and Crosses (Tic-Tac-Toe). Distinct equivalence classes into which the positions can be grouped were taken as the separate sub-domains. These were represented as separate 'boxes', as in a card-filing system. Successful tests of the 'boxes' principle were subsequently made on a hard dynamical problem, namely automatically controlling the support point of an inverted pendulum within a bounded space. The adaptive pole-balancer was required to deliver 20 left-right decisions per second to a motor-controlled cart on which was balanced a pole free to move in the vertical plane defined by a straight bounded track (Fig. 1). The task of the BOXES program was to acquire by trial and error, or from being shown by a human tutor, or from a combination of both, the ability to control the cart within the bounds set by the ends of the track without permitting the pole's angular deviation from the vertical to exceed a pre-set tolerance. For experimental runs the precise specification was: the system fails if any of four monitored variables (position on track, velocity, pole angle, angular velocity of pole) pass outside fixed bounds. Initially decisions were taken randomly, and fail-free periods were measured in seconds. After learning, the fail-free periods lasted half an hour or more. BOXES was the first system to be driven by a set of independently modifiable production rules, and thus foreshadowed today's 'expert systems'. The mode of learning was primitive, being confined to revising the action-recommendations associated with stored situation-patterns, the latter being fixed. The next series of experiments at Edinburgh, involving computer-coordination of hand-eye robots, focused on the situation-perception component of machine learning. In the FREDDY robot work, the learning module built new patterns in memory as the basis of adaptive perception of situation-categories. The stored structures were descriptions in semantic net form of the visual appearances of various objects such as cup, spectacles, axle, wheel, and so forth. From these the system was required at run time to recognize instances from images sampled from the television camera acting as the robot's eye. In robot vision 'programming by example' becomes a necessity. The infeasibility of programming in the ordinary sense is apparent if one compares the police task of identifying a culprit from photographs with identification purely on the basis of verbal description. The Edinburgh versatile assembly program was thus operating in the foothills of a domain now commonly
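A skeletal BOXES-style learner is sketched below to make the principle concrete: the sensed variables are quantized into boxes, each box accumulates survival statistics for each action, and the statistics are updated after every failure. The thresholds, the credit rule and the toy dynamics are stand-ins chosen for illustration, not Michie and Chambers' original program or the pole-and-cart equations.

```python
# Skeletal BOXES-style learner (illustration of the principle only).
import random

BINS = {                      # hypothetical quantization thresholds
    "position":         [-0.8, 0.0, 0.8],
    "velocity":         [-0.5, 0.5],
    "angle":            [-0.10, 0.0, 0.10],
    "angular_velocity": [-0.5, 0.5],
}

def box_of(state):
    """Map the four sensed variables to a discrete box index."""
    return tuple(sum(value > t for t in thresholds)
                 for thresholds, value in zip(BINS.values(), state))

class Boxes:
    def __init__(self, actions=("left", "right")):
        self.actions = actions
        self.stats = {}       # (box, action) -> [total survival time, uses]

    def decide(self, box):
        if random.random() < 0.1:                     # occasional exploration
            return random.choice(self.actions)
        def merit(action):
            total, uses = self.stats.get((box, action), (1.0, 1.0))
            return total / uses
        return max(self.actions, key=merit)

    def learn_from_failure(self, trace, failure_time):
        # Credit each decision taken in the failed run with the time it survived.
        for t, box, action in trace:
            stat = self.stats.setdefault((box, action), [0.0, 0.0])
            stat[0] += failure_time - t
            stat[1] += 1

def run_once(learner, dynamics, state, max_steps=2000):
    trace = []
    for t in range(max_steps):
        box = box_of(state)
        action = learner.decide(box)
        trace.append((t, box, action))
        state, failed = dynamics(state, action)
        if failed:
            learner.learn_from_failure(trace, t)
            return t
    return max_steps

def toy_dynamics(state, action):
    # Stand-in stub, NOT the pole-and-cart equations: the action nudges the
    # 'angle' variable left or right; the run fails when it drifts out of bounds.
    x, v, a, w = state
    a += (-0.02 if action == "left" else 0.02) + random.gauss(0.0, 0.01)
    return (x, v, a, w), abs(a) > 0.15

learner = Boxes()
lifetimes = [run_once(learner, toy_dynamics, (0.0, 0.0, 0.0, 0.0)) for _ in range(300)]
print(sum(lifetimes[:30]) / 30, sum(lifetimes[-30:]) / 30)   # mean survival, early vs late
```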

Journal ArticleDOI
TL;DR: The following problem is studied: consider a hash file and the longest probe sequence that occurs when retrieving a record, and how long is this probe sequence expected to be?
Abstract: The following problem is studied: consider a hash file and the longest probe sequence that occurs when retrieving a record. How long is this probe sequence expected to be? The approach taken differs from traditional worst-case considerations, which consider only the longest probe sequence of the worst possible file instance. Three overflow handling schemes are analysed: uniform hashing (random probing), linear probing and separate chaining. The numerical results show that the worst-case performance is expected to be quite reasonable. Provided that the hashing functions used are well-behaved, extremely long probe sequences are very unlikely to occur.
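A small Monte Carlo check of the quantity analysed, for linear probing only; the table size, load factors and trial count are arbitrary, and the paper's analytical results are not reproduced here.

```python
# Simulate the longest successful-search probe sequence in a linear probing
# hash file (for linear probing the search path equals the insertion path).
import random

def longest_probe_linear(table_size, load_factor, trials=100):
    n = int(table_size * load_factor)
    worst = []
    for _ in range(trials):
        table = [None] * table_size
        longest = 0
        for _ in range(n):
            key = random.getrandbits(32)
            h = key % table_size
            probes = 1
            while table[h] is not None:
                h = (h + 1) % table_size
                probes += 1
            table[h] = key
            longest = max(longest, probes)
        worst.append(longest)
    return sum(worst) / trials          # expected longest probe sequence

for alpha in (0.5, 0.7, 0.9):
    print(alpha, longest_probe_linear(1009, alpha))
```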

Journal ArticleDOI
TL;DR: It is possible to interpolate in two dimensions from scattered data points using a stochastic process model, giving an interpolating function which is continuous in all derivatives, passes exactly through the points given and does not generate spurious features in regions of no data.
Abstract: It is possible to interpolate in two dimensions from scattered data points using a stochastic process model, giving an interpolating function which is continuous in all derivatives, passes exactly through the points given and does not generate spurious features in regions of no data. Using this interpolating method, contours are produced directly from the data without an intermediate grid. Extensions of the basic model include a two-stage model which allows for a long-range trend. Thus the values y may be calculated once for all. This step involves inverting the n × n correlation matrix for the known points, and if n is large it may be more efficient to partition the points into groups, and calculate the y-values separately for each group, taking into account neighbouring points. The values of y may be considered to be 'uncorrelated' data values, wherein the fact that the data values are spatially correlated has been removed from the data. Thus each interpolating function evaluation requires the calculation of n correlation values, and a multiplication with a constant vector. With little additional work, it is possible to calculate the first and second derivatives of the interpolating function, and these are used in the contour tracing. Details of the theory of this interpolating method are to be found in the other paper (Ref. 1). The two parameters, μ and ρ, may be estimated from the known data points, as described in the earlier paper, or can be chosen to suit the user of the system. The 'correlation distance' ρ may be considered to be the distance over which spatial correlation between two points is appreciable. The 'grand mean' μ is the limiting value to which the interpolating function tends far away from any known data points.
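A compact numpy sketch of the two-step scheme outlined above: solve the n × n correlation system once for the weights y, then each evaluation is n correlation values dotted with y. The Gaussian correlation function and the parameter values are illustrative assumptions, not the paper's choices.

```python
# Sketch of stochastic-process (correlation-based) interpolation from scattered
# points: one linear solve for y, then each evaluation is a dot product.
import numpy as np

def fit(points, values, mu, rho):
    """points: (n, 2) data sites; values: (n,) observations; returns weights y."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    R = np.exp(-(d / rho) ** 2)              # correlation matrix of the known points
    return np.linalg.solve(R, values - mu)   # y, computed once for all

def interpolate(x, points, y, mu, rho):
    d = np.linalg.norm(points - x, axis=-1)
    r = np.exp(-(d / rho) ** 2)              # n correlation values
    return mu + r @ y

pts = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.6, 0.6]])
vals = np.array([1.0, 3.0, 2.0, 2.5])
mu, rho = vals.mean(), 0.5
y = fit(pts, vals, mu, rho)
print(interpolate(pts[2], pts, y, mu, rho))          # reproduces the data value 2.0
print(interpolate(np.array([0.5, 0.5]), pts, y, mu, rho))
```

Evaluating at a known data point returns that point's value exactly, and far from all data the result tends to the grand mean μ, matching the behaviour described in the abstract.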

Journal ArticleDOI
C. R. Symons, P. Tijsma
TL;DR: A methodology for formal definition of data elements using standardized terminology was developed within N. V. Philips Gloeilampenfabrieken, Eindhoven, and can be used to define data elements both in intercompany messages as well as in local systems and databases.
Abstract: In methods and standards for the exchange of information, whether via transmissions over data communications networks or via shared databases, relatively little attention has been given to methods for exactly defining the meaning of the information being exchanged. In complex information systems, it is important to define the unambiguous meaning of data elements out of the context of particular messages, database records, or applications in .which they currently appear. To help solve these problems a methodology for formal definition of data elements using standardized terminology was developed within N. V. Philips Gloeilampenfabrieken, Eindhoven. The method is complementary to present-day naming of data items, is based on sound theory of data analysis and can be used to define data elements both in intercompany messages as well as in local systems and databases.

Journal ArticleDOI
TL;DR: The purpose of this note is to describe a method for computing the chromatic index of STS, and to describe results obtained in testing this method on small STS.
Abstract: A Steiner triple system of order v, denoted STS(v), is a pair (V, B); V is a v-set of elements and B is a collection of 3-subsets of V called blocks. Each unordered pair of elements is contained in precisely one block. There is a substantial body of research on STS, partially because of their wide applicability in the design of experiments, and in the theory of error-correcting codes. In the design of statistical experiments, each block corresponds to a 'test' of the three elements in the block. In this context, we can view disjoint blocks as tests which can be carried out simultaneously. In terms of STS, we define a colour class to be a set of pairwise disjoint blocks. A k-block colouring is a partition of the block set into k colour classes; the chromatic index is the least k for which such a colouring exists. In our example, the chromatic index is precisely the time required for the entire experiment. In other applications of STS, similar reasons exist for studying the chromatic index. The chromatic index of STS is also a problem of some interest in combinatorics as a restriction of certain investigations of set systems. In computer science, further motivation arises from the substantial interest in the chromatic index of graphs, and applications in scheduling (see Ref. 5 and references therein). The purpose of this note is to describe a method for computing the chromatic index of STS, and to describe results obtained in testing this method on small STS.
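As a small illustration (not the paper's method): the unique STS(7), the Fano plane, and a greedy partition of its blocks into colour classes of pairwise disjoint blocks. Greedy colouring only bounds the chromatic index from above; for STS(7) every two blocks share an element, so each class is a single block and the chromatic index is 7.

```python
# The Fano plane STS(7) and a greedy block colouring into classes of pairwise
# disjoint blocks (an upper bound on the chromatic index, not the paper's method).

STS7 = [frozenset(b) for b in
        [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]]

def greedy_block_colouring(blocks):
    classes = []                               # each class: pairwise disjoint blocks
    for block in blocks:
        for cls in classes:
            if all(block.isdisjoint(other) for other in cls):
                cls.append(block)
                break
        else:
            classes.append([block])
    return classes

classes = greedy_block_colouring(STS7)
print(len(classes))                            # 7 for the Fano plane
for cls in classes:
    print([tuple(sorted(b)) for b in cls])
```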