
Showing papers in "IEEE Transactions on Computers in 1971"


Journal ArticleDOI
TL;DR: A family of graph-theoretical algorithms based on the minimal spanning tree is capable of detecting several kinds of cluster structure in arbitrary point sets; description of the detected clusters is possible in some cases by extensions of the method.
Abstract: A family of graph-theoretical algorithms based on the minimal spanning tree is capable of detecting several kinds of cluster structure in arbitrary point sets; description of the detected clusters is possible in some cases by extensions of the method. Development of these clustering algorithms was based on examples from two-dimensional space because we wanted to copy the human perception of gestalts or point groupings. On the other hand, all the methods considered apply to higher dimensional spaces and even to general metric spaces. Advantages of these methods include determinacy, easy interpretation of the resulting clusters, conformity to gestalt principles of perceptual organization, and invariance of results under monotone transformations of interpoint distance. Brief discussion is made of the application of cluster detection to taxonomy and the selection of good feature spaces for pattern recognition. Detailed analyses of several planar cluster detection problems are illustrated by text and figures. The well-known Fisher iris data, in four-dimensional space, have been analyzed by these methods also. PL/1 programs to implement the minimal spanning tree methods have been fully debugged.
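
A minimal sketch of the minimal-spanning-tree clustering idea, in Python rather than the paper's PL/1. The factor-times-mean-edge-length cut below is an illustrative stand-in for Zahn's inconsistent-edge criteria; only the overall structure (build the MST, delete unusually long edges, report connected components) follows the abstract.

```python
# Sketch of MST-based clustering in the spirit of the abstract (not Zahn's exact
# inconsistency criterion): build a minimal spanning tree over the points, cut
# edges much longer than the average tree edge, and report the resulting
# connected components as clusters.
import math

def mst_clusters(points, factor=2.0):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # Prim's algorithm for the minimal spanning tree of the complete graph.
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((dist(u, parent[u]), u, parent[u]))
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u

    # Cut "inconsistent" edges: here simply edges longer than factor * mean length.
    mean_len = sum(w for w, _, _ in edges) / len(edges)
    keep = [(u, v) for w, u, v in edges if w <= factor * mean_len]

    # Connected components of the remaining forest are the clusters.
    label = list(range(n))
    def find(x):
        while label[x] != x:
            label[x] = label[label[x]]
            x = label[x]
        return x
    for u, v in keep:
        label[find(u)] = find(v)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

print(mst_clusters([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]))
```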

1,832 citations


Journal ArticleDOI
TL;DR: Given a vector of N elements, the perfect shuffle of this vector is a permutation of the elements that is identical to a perfect shuffle of a deck of cards.
Abstract: Given a vector of N elements, the perfect shuffle of this vector is a permutation of the elements that is identical to a perfect shuffle of a deck of cards. Elements of the first half of the vector are interlaced with elements of the second half in the perfect shuffle of the vector.
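
A small sketch of the interlacing, assuming an even-length vector as in the card analogy:

```python
# A perfect shuffle interlaces the first half of a vector with the second half,
# exactly as a perfect riffle shuffle of a deck of cards.
def perfect_shuffle(v):
    n = len(v)
    assert n % 2 == 0, "perfect shuffle is defined here for even-length vectors"
    half = n // 2
    out = []
    for a, b in zip(v[:half], v[half:]):
        out.extend([a, b])
    return out

print(perfect_shuffle([0, 1, 2, 3, 4, 5, 6, 7]))  # [0, 4, 1, 5, 2, 6, 3, 7]
```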

1,331 citations


Journal ArticleDOI
TL;DR: Simple sets of parallel operations are described which can be used to detect texture edges, "spots," and "streaks" in digitized pictures and it is shown that a composite output is constructed in which edges between differently textured regions are detected, and isolated objects are also detected, but the objects composing the textures are ignored.
Abstract: Simple sets of parallel operations are described which can be used to detect texture edges, "spots," and "streaks" in digitized pictures. It is shown that, by comparing the outputs of the operations corresponding to (e.g.,) edges of different sizes, one can construct a composite output in which edges between differently textured regions are detected, and isolated objects are also detected, but the objects composing the textures are ignored. Relationships between this class of picture processing operations and the Gestalt psychologists' laws of pictorial pattern organization are also discussed.
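
A rough sketch of the comparison-of-sizes idea, not the paper's operators: average the picture over square neighborhoods, then take differences of the averages on either side of each point, so that the fine texture averages out while boundaries between differently textured regions respond. The neighborhood size, stride of comparison, and test image are all illustrative.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a (2k+1) x (2k+1) neighborhood, via two separable 1-D passes."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, rows)

def texture_edge_response(img, k):
    smooth = box_mean(img, k)
    # Difference between neighborhood averages k pixels to the right and left.
    return np.abs(np.roll(smooth, -k, axis=1) - np.roll(smooth, k, axis=1))

rng = np.random.default_rng(1)
left = rng.random((32, 32)) < 0.2            # sparse random texture
right = rng.random((32, 32)) < 0.8           # dense random texture
img = np.hstack([left, right]).astype(float)
resp = texture_edge_response(img, 4)
print(resp[16, 28:36].round(2))              # large responses near the boundary x = 32
```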

811 citations


Journal ArticleDOI
TL;DR: A direct method of measurement selection is proposed to determine the best subset of d measurements out of a set of D total measurements, using a nonparametric estimate of the probability of error given a finite design sample set.
Abstract: A direct method of measurement selection is proposed to determine the best subset of d measurements out of a set of D total measurements. The measurement subset evaluation procedure directly employs a nonparametric estimate of the probability of error given a finite design sample set. A suboptimum measurement subset search procedure is employed to reduce the number of subsets to be evaluated. The primary advantage of the approach is the direct but nonparametric evaluation of measurement subsets, for the M class problem.

790 citations


Journal ArticleDOI
TL;DR: Partitions of the set of blocks of a computer logic graph, also called a block graph, into subsets called modules demonstrate that a two-region relationship exists between P, the average number of pins per module, and B, the average number of blocks per module.
Abstract: Partitions of the set of blocks of a computer logic graph, also called a block graph, into subsets called modules demonstrate that a two-region relationship exists between P, the average number of pins per module, and B, the average number of blocks per module. In the first region, P = KB^r, where K is the average number of pins per block and 0.57 ≤ r ≤ 0.75. In the second region, that is, where the number of modules is small (i.e., 1-5), P is less than predicted by the above formula and is given by a more complex relationship. These conclusions resulted from controlled partitioning experiments performed using a computer program to partition four logic graphs varying in size from 500 to 13 000 circuits representing three different computers. The size of a block varied from one NOR circuit in one of the block graphs to a 30-circuit chip in one of the other block graphs.
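
Evaluating the first-region relationship is a one-liner; the K and B values below are illustrative, not taken from the partitioning experiments.

```python
# First-region estimate P = K * B**r from the abstract: K is the average number
# of pins per block, B the average number of blocks per module, and r the
# measured exponent (0.57 <= r <= 0.75).
def pins_per_module(K, B, r):
    return K * B ** r

for r in (0.57, 0.75):
    print(f"K=4, B=100, r={r}: P = {pins_per_module(4, 100, r):.1f}")
```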

725 citations


Journal ArticleDOI
Henri Gouraud
TL;DR: The surface is approximated by small polygons in order to solve easily the hidden-parts problem, but the shading of each polygon is computed so that discontinuities of shade are eliminated across the surface and a smooth appearance is obtained.
Abstract: A procedure for computing shaded pictures of curved surfaces is presented. The surface is approximated by small polygons in order to solve easily the hidden-parts problem, but the shading of each polygon is computed so that discontinuities of shade are eliminated across the surface and a smooth appearance is obtained. In order to achieve speed efficiency, the technique developed by Watkins is used which makes possible a hardware implementation of this algorithm.
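
A sketch of the shading interpolation the abstract describes: intensities computed at the polygon vertices are interpolated along the edges and then across each scan line. The triangle, the vertex intensities, and the assumption that the first vertex is the topmost are illustrative simplifications.

```python
# Smooth-shading idea: vertex intensities are linearly interpolated along the
# polygon edges and then across each scan line, so shade varies continuously.
def lerp(a, b, t):
    return a + (b - a) * t

def shade_scanline(y, v0, v1, v2):
    """Return (x, intensity) samples for scan line y crossing edges v0-v1 and v0-v2.
    Each vertex is (x, y, intensity); v0 is assumed to lie above the other two."""
    t01 = (y - v0[1]) / (v1[1] - v0[1])
    t02 = (y - v0[1]) / (v2[1] - v0[1])
    x_left,  i_left  = lerp(v0[0], v1[0], t01), lerp(v0[2], v1[2], t01)
    x_right, i_right = lerp(v0[0], v2[0], t02), lerp(v0[2], v2[2], t02)
    if x_left > x_right:
        (x_left, i_left), (x_right, i_right) = (x_right, i_right), (x_left, i_left)
    samples = []
    for x in range(int(x_left), int(x_right) + 1):
        t = 0.0 if x_right == x_left else (x - x_left) / (x_right - x_left)
        samples.append((x, lerp(i_left, i_right, t)))
    return samples

# Apex at the top with intensity 1.0, base vertices darker.
print(shade_scanline(5, (10, 0, 1.0), (0, 10, 0.2), (20, 10, 0.6)))
```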

661 citations


Journal ArticleDOI
TL;DR: An algorithm for the analysis of multivariate data is presented along with some experimental results and an analysis that demonstrates the feasibility of this approach.
Abstract: An algorithm for the analysis of multivariate data is presented along with some experimental results. The basic idea of the method is to examine the data in many small subregions, and from this determine the number of governing parameters, or intrinsic dimensionality. This intrinsic dimensionality is usually much lower than the dimensionality that is given by the standard Karhunen-Loeve technique. An analysis that demonstrates the feasibility of this approach is presented.
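
A sketch of the local-region idea (assuming NumPy): take a small neighborhood around each sample, look at the eigenvalues of the local covariance, and count how many are significant. The neighborhood size and significance threshold are illustrative choices, not the paper's procedure.

```python
import numpy as np

def local_intrinsic_dim(data, k=10, threshold=0.05):
    dims = []
    for x in data:
        # k nearest neighbours of x (including x itself)
        idx = np.argsort(np.linalg.norm(data - x, axis=1))[:k]
        local = data[idx] - data[idx].mean(axis=0)
        eigvals = np.linalg.eigvalsh(local.T @ local / k)
        eigvals = eigvals / eigvals.sum()
        dims.append(int((eigvals > threshold).sum()))
    return float(np.mean(dims))

# A one-dimensional curve embedded in three dimensions.
t = np.linspace(0, 4 * np.pi, 300)
curve = np.stack([np.cos(t), np.sin(t), t], axis=1)
print(local_intrinsic_dim(curve))   # close to 1, far below the ambient 3
```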

371 citations


Journal ArticleDOI
TL;DR: As computer CPUs get faster, primary memories tend to be organized in parallel banks, and important questions of design and use of such memories are discussed.
Abstract: As computer CPUs get faster, primary memories tend to be organized in parallel banks. The fastest machines now being developed can fetch of the order of 100 words in parallel. Unless memory and compiler designers are careful, serious memory conflicts and resulting performance degradation may result. Some of the important questions of design and use of such memories are discussed.
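
A toy illustration of why conflicts arise, assuming addresses are interleaved across banks by address modulo the number of banks: a stride that shares a factor with the bank count concentrates the references on a few banks and serializes them.

```python
# Interleaved memory: consecutive addresses go to banks (address mod n_banks).
def banks_hit(start, stride, count, n_banks):
    return {(start + i * stride) % n_banks for i in range(count)}

n_banks = 16
for stride in (1, 2, 16, 17):
    used = banks_hit(0, stride, 64, n_banks)
    print(f"stride {stride:2d}: {len(used):2d} of {n_banks} banks used")
```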

306 citations


Journal ArticleDOI
TL;DR: Easily computable binary image characterizations are introduced, with reference to a serial binary image processor (BIP) now being built, and some implications for image computation theory are examined.
Abstract: Aspects of topology and geometry are used in analyzing continuous and discrete binary images in two dimensions. Several numerical properties of these images are derived which are " locally countable." These include the metric properties area and perimeter, and the topological invariant, Euler number. "Differentials" are defined for these properties, and algorithms are given. The Euler differential enables precise examination of connectivity relations on the square and hexagonal lattices. Easily computable binary image characterizations are introduced, with reference to a serial binary image processor (BIP) now being built. A precise definition of "localness" is given, and some implications for image computation theory are examined.
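
One locally countable computation in this spirit is the Euler number obtained by counting 2×2 "bit quad" patterns; the 4-connectivity formula used below is the commonly quoted one and is shown here only as an illustration, not as the paper's exact algorithm.

```python
# Euler number of a binary image from 2x2 "bit quad" counts, using the
# 4-connectivity form E = (Q1 - Q3 + 2*Qd) / 4, where Q1/Q3 count quads with
# one/three foreground pixels and Qd counts the two diagonal patterns.
def euler_number_4conn(img):
    rows, cols = len(img), len(img[0])
    # Pad with a border of zeros so every foreground pixel is covered by quads.
    padded = [[0] * (cols + 2)] + [[0] + list(r) + [0] for r in img] + [[0] * (cols + 2)]
    q1 = q3 = qd = 0
    for i in range(rows + 1):
        for j in range(cols + 1):
            quad = (padded[i][j], padded[i][j + 1],
                    padded[i + 1][j], padded[i + 1][j + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1
    return (q1 - q3 + 2 * qd) // 4

# A 3x3 square with a hole in the middle: one component, one hole -> 0.
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(euler_number_4conn(ring))   # 0
```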

303 citations


Journal ArticleDOI
TL;DR: A tabular method where the essential prime implicants are selected during the process of forming the combination tables, and other essential terms are selected from what have been described in the note as chains of selective prime implicants.
Abstract: The Quine–McCluskey method of minimizing a Boolean function gives all the prime implicants, from which the essential terms are selected by one or more cover tables known as the prime implicant tables. This note describes a tabular method where the essential prime implicants are selected during the process of forming the combination tables, and other essential terms are selected from what have been described in the note as chains of selective prime implicants. Consequently, the need for successive prime implicant tables is eliminated.

292 citations


Journal ArticleDOI
TL;DR: An algorithm for generating Hilbert's space-filling curve in a byte-oriented manner and the algorithm may be modified so that the results are correct for continua rather than for quantized spaces.
Abstract: An algorithm for generating Hilbert's space-filling curve in a byte-oriented manner is presented. In the context of one application of space-filling curves, the algorithm may be modified so that the results are correct for continua rather than for quantized spaces.
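
The paper's byte-oriented algorithm is not reproduced here; the sketch below is a common textbook conversion from the distance d along the curve to (x, y) grid coordinates, included only to make curve generation concrete.

```python
# Convert distance d along a Hilbert curve to (x, y) on a 2^order x 2^order grid.
def hilbert_d2xy(order, d):
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate the quadrant so the sub-curves join up.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# First points of the order-3 (8x8) curve; successive points are unit distance apart.
print([hilbert_d2xy(3, d) for d in range(8)])
```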

Journal ArticleDOI
TL;DR: The following aspects of the STAR system are described: architecture, reliability analysis, software, automatic maintenance of peripheral systems, and adaptation to serve as the central computer of an outer-planet exploration spacecraft.
Abstract: This paper presents the results obtained in a continuing investigation of fault-tolerant computing which is being conducted at the Jet Propulsion Laboratory. Initial studies led to the decision to design and construct an experimental computer with dynamic (standby) redundancy, including replaceable subsystems and a program rollback provision to eliminate transient errors. This system, called the STAR computer, began operation in 1969. The following aspects of the STAR system are described: architecture, reliability analysis, software, automatic maintenance of peripheral systems, and adaptation to serve as the central computer of an outer-planet exploration spacecraft.

Journal ArticleDOI
TL;DR: This paper presents seven techniques for choosing good subsets of properties and compares their performance on a nine-class vectorcardiogram classification problem.
Abstract: The only guaranteed technique for choosing the best subset of N properties from a set of M is to try all (M choose N) possible combinations. This is computationally impractical for sets of even moderate size, so heuristic techniques are required. This paper presents seven techniques for choosing good subsets of properties and compares their performance on a nine-class vectorcardiogram classification problem.
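
A minimal sketch of one such heuristic, sequential forward selection, with a placeholder scoring function standing in for whatever subset evaluation is used; the paper's seven techniques are not reproduced.

```python
# Grow the subset one property at a time, always adding the property that most
# improves a user-supplied scoring function (e.g. classifier accuracy on design data).
def forward_select(n_properties, n_wanted, score):
    chosen = []
    remaining = set(range(n_properties))
    while len(chosen) < n_wanted:
        best = max(remaining, key=lambda p: score(chosen + [p]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy scoring function: pretend properties 2, 5 and 7 carry all the information.
useful = {2: 0.5, 5: 0.3, 7: 0.2}
toy_score = lambda subset: sum(useful.get(p, 0.01) for p in subset)
print(forward_select(10, 3, toy_score))   # [2, 5, 7]
```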

Journal ArticleDOI
TL;DR: Several preprocessing techniques for enhancing selected features and removing irrelevant data are described and compared and a practical image pattern recognition problem is solved using some of the described techniques.
Abstract: Feature extraction is one of the more difficult steps in image pattern recognition. Some sources of difficulty are the presence of irrelevant information and the relativity of a feature set to a particular application. Several preprocessing techniques for enhancing selected features and removing irrelevant data are described and compared. The techniques include gray level distribution linearization, digital spatial filtering, contrast enhancement, and image subtraction. Also, several feature extraction techniques are illustrated. The techniques are divided into spatial and Fourier domain operations. The spatial domain operations of directional signatures and contour tracing are first described. Then, the Fourier domain techniques of frequency signatures and template matching are illustrated. Finally, a practical image pattern recognition problem is solved using some of the described techniques.

Journal ArticleDOI
TL;DR: The bounds on state-set size in the proofs of the equivalence between nondeterministic and deterministic finite automata and between two-way and one-way deterministic finite automata are considered and it is shown that the number of states in the subset machine cannot be reduced for certain cases.
Abstract: The bounds on state-set size in the proofs of the equivalence between nondeterministic and deterministic finite automata and between two-way and one-way deterministic finite automata are considered. It is shown that the number of states in the subset machine in the first construction cannot be reduced for certain cases. It is also shown that the number of states in the one-way automaton constructed in the second proof may be reduced only slightly.

Journal ArticleDOI
TL;DR: The conditions whereby two different faults can produce the same alteration in the circuit behavior are investigated and this relationship between two faults is shown to be an equivalence relation, and three different types of equivalence relations are specified.
Abstract: This paper is a study of the effects of faults on the logical operation of combinational (acyclic) logic circuits. In particular, the conditions whereby two different faults can produce the same alteration in the circuit behavior are investigated. This relationship between two faults is shown to be an equivalence relation, and three different types of equivalence relations are specified. Necessary and sufficient conditions for the existence of these equivalence relations are proved. An algorithm for determining the equivalence classes for one of the types of equivalence is presented. Other types of algebraic properties of faults are discussed.

Journal ArticleDOI
TL;DR: General criteria for cost and effectiveness studies of error codes are developed, and results are presented for arithmetic error codes with the low-cost check modulus 2^a - 1.
Abstract: The application of error-detecting or error-correcting codes in digital computer design requires studies of cost and effectiveness trade-offs to supplement the knowledge of their theoretical properties. General criteria for cost and effectiveness studies of error codes are developed, and results are presented for arithmetic error codes with the low-cost check modulus 2^a - 1. Both separate (residue) and nonseparate (AN) codes are considered. The class of multiple arithmetic error codes is developed as an extension of low-cost single codes.
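
A minimal sketch of a separate (residue) check with the low-cost modulus 2^a - 1: the residue of a word can be formed by folding its a-bit groups together (end-around carry), and an addition is checked by comparing residues. The word sizes and the injected error below are illustrative.

```python
A = 8
MOD = (1 << A) - 1          # 255

def residue(x):
    """Residue mod 2**A - 1, computed by folding A-bit groups together."""
    r = 0
    while x:
        r += x & MOD
        x >>= A
    return r % MOD

def checked_add(x, y):
    s = x + y                                   # main (possibly faulty) adder
    check_ok = residue(s) == (residue(x) + residue(y)) % MOD
    return s, check_ok

s, ok = checked_add(0x1234, 0xBEEF)
print(hex(s), ok)                               # correct sum passes the check

bad = s ^ 0x4                                   # inject a single-bit error
print(residue(bad) == (residue(0x1234) + residue(0xBEEF)) % MOD)   # False: detected
```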

Journal ArticleDOI
I.J. Good
TL;DR: The purpose of this note is to show as clearly as possible the mathematical relationship between the two basic fast methods used for the calculation of discrete Fourier transforms and to generalize one of the methods a little further.
Abstract: The purpose of this note is to show as clearly as possible the mathematical relationship between the two basic fast methods used for the calculation of discrete Fourier transforms and to generalize one of the methods a little further. This method applies to all those linear transformations whose matrices are expressible as direct products.

Journal ArticleDOI
TL;DR: A high-speed array multiplier generating the full 34-bit product of two 17-bit signed (2's complement) numbers in 40 ns is described.
Abstract: A high-speed array multiplier generating the full 34-bit product of two 17-bit signed (2's complement) numbers in 40 ns is described. The multiplier uses a special 2-bit gated adder circuit with anticipated carry. Negative numbers are handled by considering their highest order bit as negative, all other bits as positive, and adding negative partial products directly through appropriate circuits. The propagation of sum and carry signals is such that sum delays do not significantly contribute to the overall multiplier delay.
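
The signed-number convention is easy to state: the highest-order bit of a two's-complement operand carries a negative weight and all other bits are positive. A small sketch re-deriving a signed value from its bit pattern under that convention (the adder circuitry itself is not modeled):

```python
BITS = 17

def signed_value(bits):
    """bits[0] is the least significant bit; the MSB has weight -2**(BITS-1)."""
    value = -bits[BITS - 1] * (1 << (BITS - 1))
    value += sum(b << i for i, b in enumerate(bits[:BITS - 1]))
    return value

def to_bits(x):
    return [(x >> i) & 1 for i in range(BITS)]

a = -12345
bits = to_bits(a & (2**BITS - 1))    # 17-bit two's-complement pattern of a
print(signed_value(bits))            # -12345
```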

Journal ArticleDOI
TL;DR: This paper summarizes the work done over the last four years on mathematical reliability modeling by the authors and discusses the mathematical equations involved for general computer systems organized to be fault tolerant.
Abstract: Reliability modeling and the mathematical equations involved are discussed for general computer systems organized to be fault tolerant. This paper summarizes the work done over the last four years on mathematical reliability modeling by the authors.

Journal ArticleDOI
TL;DR: This note describes a cellular array that computes the product of two arbitrary elements of the Galois field GF(2^m); the regularity of the array should make it attractive for LSI fabrication.
Abstract: This note describes a cellular array that computes the product of two arbitrary elements of the Galois field GF(2^m). The regularity of this array should make it attractive for LSI fabrication.
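
A bit-serial sketch of the arithmetic the array performs: multiply polynomials over GF(2) and reduce modulo an irreducible polynomial. The field size and modulus below (GF(2^4) with x^4 + x + 1) are illustrative choices, and the loop stands in for the array's one-cell-per-bit-product structure.

```python
M = 4
POLY = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    product = 0
    while b:
        if b & 1:
            product ^= a          # add (XOR) the shifted multiplicand
        b >>= 1
        a <<= 1
        if a & (1 << M):          # reduce as soon as the degree reaches m
            a ^= POLY
    return product

# x * x^3 = x^4 = x + 1 in this field.
print(bin(gf_mul(0b0010, 0b1000)))     # 0b11
```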

Journal ArticleDOI
G.R. Putzolu, J.P. Roth
TL;DR: An algorithm for the computation of tests to detect failures in asynchronous sequential logic circuits based upon an extension of the D-algorithm is described.
Abstract: This paper describes an algorithm for the computation of tests to detect failures in asynchronous sequential logic circuits. It is based upon an extension of the D-algorithm [1]. Experience with a program implementing the procedure is discussed.

Journal ArticleDOI
TL;DR: A character recognition experiment is selected for exemplary purposes and the use of features in the rotated spaces results in effective minimum distance classification.
Abstract: An important aspect in mathematical pattern recognition is the usually noninvertible transformation from the pattern space to a reduced dimensionality feature space that allows a classification process to be implemented on a reasonable number of features. Such feature-selecting transformations range from simple coordinate stretching and shrinking to highly complex nonlinear extraction algorithms. A class of feature-selection transformations to which this note addresses itself is that given by multidimensional rotations. Unitary transformations of particular interest are the Karhunen-Loeve, Fourier, Hadamard or Walsh, and the Haar transforms. A character recognition experiment is selected for exemplary purposes and the use of features in the rotated spaces results in effective minimum distance classification.
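
A sketch of the rotated-feature idea using one of the transforms named above (Walsh-Hadamard), followed by minimum distance classification on a few retained coefficients. The data, the number of retained features, and the normalization are illustrative.

```python
import numpy as np

def hadamard(n):
    """Unnormalized Hadamard matrix of size n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def extract_features(patterns, n_features):
    H = hadamard(patterns.shape[1]) / np.sqrt(patterns.shape[1])  # unitary rotation
    return (patterns @ H.T)[:, :n_features]

def min_distance_classify(feature_vec, class_means):
    dists = [np.linalg.norm(feature_vec - m) for m in class_means]
    return int(np.argmin(dists))

# Two toy "classes" of 8-dimensional patterns.
rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 0.3, (20, 8))
class1 = rng.normal(1.0, 0.3, (20, 8))
f0, f1 = extract_features(class0, 3), extract_features(class1, 3)
means = [f0.mean(axis=0), f1.mean(axis=0)]
test = extract_features(rng.normal(1.0, 0.3, (1, 8)), 3)[0]
print(min_distance_classify(test, means))   # 1
```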

Journal ArticleDOI
M.S. Schmookler, A. Weinberger
TL;DR: The direct production of decimal sums offers a significant improvement in addition over methods requiring decimal correction, and these techniques are illustrated in the eight-digit adder which appears in the System/360 Model 195.
Abstract: Parallel decimal arithmetic capability is becoming increasingly attractive with new applications of computers in a multi-programming environment. The direct production of decimal sums offers a significant improvement in addition over methods requiring decimal correction. These techniques are illustrated in the eight-digit adder which appears in the System/360 Model 195.

Journal ArticleDOI
TL;DR: The flow-graph technique for constructing an expression is shown to preserve ambiguities of the graph, and thus, if the graph is that of a deterministic automaton, the expression is unambiguous.
Abstract: A regular expression is called unambiguous if every tape in the event can be generated from the expression in one way only. The flow-graph technique for constructing an expression is shown to preserve ambiguities of the graph, and thus, if the graph is that of a deterministic automaton, the expression is unambiguous. A procedure for generating a nondeterministic automaton which preserves the ambiguities of the given regular expression is described. Finally, a procedure for testing whether a given expression is ambiguous is given.

Journal ArticleDOI
TL;DR: The performance of the method of moments on these low-resolution images was found to be comparable to that of a human photointerpreter and to certain heuristic techniques that would be more difficult to implement than the method of moments.
Abstract: The results of a study undertaken to determine the feasibility of automatic interpretation of ship photographs, using the spatial moments of the image as features to characterize the image, are reported. The photo interpretation consisted of estimating the location, orientation, dimensions, and heading of the ship. The study used simulated ship images in which the outline of the ship was randomly filled with black and white cells to give a low-resolution, high-contrast image of the ship such as might be obtained by a high-resolution radar. The estimates were made using polynomials of invariant moments formed by transformations of the original spatial moments, e.g., density-invariant moments, central moments, rotation-invariant moments, etc. The transformations to invariant moments were chosen using physical reasoning. The best moments for the polynomials were chosen using linear regression. The performance of the method of moments on these low-resolution images was found to be comparable to that of a human photointerpreter and to certain heuristic techniques that would be more difficult to implement than the method of moments.
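
A sketch of the moment features themselves: raw spatial moments, central (translation-invariant) moments, and one simple rotation-invariant combination. The ship-image specifics and the regression step of the study are not reproduced.

```python
import numpy as np

def raw_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((img * x**p * y**q).sum())

def central_moment(img, p, q):
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00
    ybar = raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((img * (x - xbar)**p * (y - ybar)**q).sum())

def rotation_invariant(img):
    # mu20 + mu02 is unchanged by rotating the object about its centroid.
    return central_moment(img, 2, 0) + central_moment(img, 0, 2)

blob = np.zeros((8, 8))
blob[2:5, 1:7] = 1          # a small "ship-like" rectangle
print(raw_moment(blob, 0, 0), rotation_invariant(blob))
```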

Journal ArticleDOI
TL;DR: The theoretical results of this paper show that near minimal tests for multiple faults can be generated with complexity of computation comparable to that of single faults.
Abstract: The important problem of generating test patterns to detect multiple faults has received little attention, mainly due to their computational complexity. The theoretical results of this paper show that near minimal tests for multiple faults can be generated with complexity of computation comparable to that of single faults.

Journal ArticleDOI
TL;DR: A method is described for determining an optimal straight-line segment approximation to specified functions for constrained and unconstrained endpoints.
Abstract: A method is described for determining an optimal straight-line segment approximation to specified functions for constrained and unconstrained endpoints.

Journal ArticleDOI
TL;DR: Two procedures are presented for generating fault detection test sequences for large sequential circuits using an adaptive random procedure and an algorithmic path-sensitizing procedure that employs a three-valued logic system.
Abstract: Two procedures are presented for generating fault detection test sequences for large sequential circuits. In the adaptive random procedure one can achieve a tradeoff among test generation time, test length, and the percentage of the circuit tested. An algorithmic path-sensitizing procedure is also presented. Both procedures employ a three-valued logic system. Some experimental results are given.
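
A minimal sketch of the three-valued (0, 1, X) gate evaluation such procedures rest on, where X denotes an unknown or unassigned signal; the encoding and gate set are illustrative.

```python
X = 'X'

def and3(a, b):
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def or3(a, b):
    if a == 1 or b == 1:
        return 1
    if a == 0 and b == 0:
        return 0
    return X

def not3(a):
    return X if a == X else 1 - a

print(and3(0, X), and3(1, X), or3(1, X), not3(X))   # 0 X 1 X
```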

Journal ArticleDOI
TL;DR: It has been found, using both synthetic images as well as images taken from the real world, that Golay transforms are useful in feature enhancement and extraction.
Abstract: Golay hexagonal pattern transforms are position independent local operators for use in transforming or altering binary images. The hexagonal tessellation is preferred because it removes the connectivity ambiguity present in the square or checkerboard tessellation. Golay transforms also may be applied to multilevel or "gray" images by encoding such images as a registered stack of binary image planes. The general Golay transform creates a new binary image (the output image) from as many as three stacked input images. Simpler Golay transforms merely alter the binary pattern contained in a single image plane, i.e., the same plane acts as both input and output. Because it is slow and cumbersome to perform Golay transforms using a general-purpose computer, fast special-purpose computers have been built for this purpose which may be programmed in a new image processing language called Glol (Golay logic language). It has been found, using both synthetic images as well as images taken from the real world, that Golay transforms are useful in feature enhancement and extraction. Several illustrative examples are provided.