
Showing papers in "IEEE Transactions on Computers in 1976"


Journal ArticleDOI
TL;DR: Parzen estimators are often used for nonparametric estimation of probability density functions; a problem-dependent criterion for the value of the smoothing parameter is proposed and illustrated by examples.
Abstract: Parzen estimators are often used for nonparametric estimation of probability density functions. The smoothness of such an estimation is controlled by the smoothing parameter. A problem-dependent criterion for its value is proposed and illustrated by some examples. Especially in multimodal situations, this criterion led to good results.
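
The estimator being tuned here can be sketched as follows; this is a generic Gaussian-kernel Parzen estimate for one-dimensional data (the function name and kernel choice are illustrative, and the paper's problem-dependent criterion for choosing h is not reproduced):

```python
import numpy as np

def parzen_density(x, samples, h):
    """Parzen (kernel) density estimate at points x from 1-D samples,
    using a Gaussian kernel with smoothing parameter h."""
    x = np.asarray(x, dtype=float)[:, None]          # shape (m, 1)
    s = np.asarray(samples, dtype=float)[None, :]    # shape (1, n)
    kernels = np.exp(-0.5 * ((x - s) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)                      # average kernel over samples
```

With a bimodal sample, a small h preserves both modes while a large h smooths them into one, which is why a problem-dependent choice of the smoothing parameter matters.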

364 citations


Journal ArticleDOI
TL;DR: In this article, a ranging system, consisting of a laser, computer-controlled optical deflection assembly, and TV camera, obtains three-dimensional images of curved solid objects, which are then segmented into parts by grouping parallel traces obtained from the ranging system.
Abstract: A ranging system, consisting of a laser, computer-controlled optical deflection assembly, and TV camera, obtains three-dimensional images of curved solid objects. The object is segmented into parts by grouping parallel traces obtained from the ranging system. Making use of the property of generalized translational invariance, the parts are described in terms of generalized cylinders, consisting of a space curve, or axis, and a circular cross section function on this axis.

284 citations


Journal ArticleDOI
TL;DR: The problem of automatic fault diagnosis of systems decomposed into a number of interconnected units is considered using a simplified version of the diagnostic model introduced by Preparata et al., and it is shown that the procedure for diagnosis with repair has very small complexity.
Abstract: The problem of automatic fault diagnosis of systems decomposed into a number of interconnected units is considered by using a simplified version of the diagnostic model introduced by Preparata et al. The model used in this paper is supposed to be a realistic representation of systems where each unit has a considerable computational capability. For any system of n units whose set of testing links is given, necessary and sufficient conditions for t-diagnosability are presented in both cases of one-step diagnosis and diagnosis with repair, and it is shown that the procedure for diagnosis with repair has very small complexity. The problem of optimal assignment of testing links in order to achieve a given diagnosability is also considered and classes of optimal t-diagnosable systems are presented for arbitrary values of t in both cases of one-step diagnosis and diagnosis with repair.
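
Two well-known necessary conditions for one-step t-diagnosability in this family of models (n >= 2t + 1 units, and every unit tested by at least t others) can be checked mechanically. The sketch below verifies only those conditions; it is not the paper's full characterization, and the function name is illustrative:

```python
def necessary_t_diagnosable(n, links, t):
    """Check standard necessary conditions for one-step t-diagnosability
    of an n-unit system whose testing links are directed pairs
    (tester, tested): n >= 2t + 1 and in-degree >= t for every unit."""
    if n < 2 * t + 1:
        return False
    indegree = [0] * n
    for tester, tested in links:
        if tester != tested:            # self-tests do not count
            indegree[tested] += 1
    return all(d >= t for d in indegree)
```

For example, the classic assignment in which unit i tests units i+1, ..., i+t (mod n) passes the check for n = 5, t = 2, but no 5-unit system can be 3-diagnosable.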

270 citations


Journal ArticleDOI
D. A. Huffman
TL;DR: In this paper, the authors present fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones, which are natural generalizations of the edges and vertices of piecewise-planar surfaces.
Abstract: This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.

263 citations


Journal ArticleDOI
TL;DR: An approach to fault-tolerant design is described in which a computing system S and an algorithm A to be executed by S are both defined by graphs whose nodes represent computing facilities.
Abstract: An approach to fault-tolerant design is described in which a computing system S and an algorithm A to be executed by S are both defined by graphs whose nodes represent computing facilities. A is executable by S if A is isomorphic to a subgraph of S. A k-fault is the removal of k nodes (facilities) from S. S is a k-fault tolerant (k-FT) realization of A if A can be executed by S with any k-fault present in S. The problem of designing optimal k-FT systems is considered where A is equated to a 0-FT system. Techniques are described for designing optimal k-FT realizations of single-loop systems; these techniques are related to results in Hamiltonian graph theory. The design of optimal k-FT realizations of certain types of tree systems is also examined. The advantages and disadvantages of the graph model are discussed.

227 citations


Journal ArticleDOI
TL;DR: According to the results of simulation on nearly 1000 high-occurrence English words, higher error-correcting rates can be achieved by this method than by any other method tried to date.
Abstract: In this paper we propose a new method for correcting garbled words based on Levenshtein distance and weighted Levenshtein distance. We can correct not only substitution errors, but also insertion errors and deletion errors by this method. According to the results of simulation on nearly 1000 high occurrence English words, higher error correcting rates can be achieved by this method than any other method tried to date. Hardware realization of the method is possible, though it is rather complicated.
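
The distance at the core of the method can be sketched as the textbook dynamic program with per-operation weights; this is a standard formulation, not the paper's hardware realization, and the dictionary-correction helper is illustrative:

```python
def weighted_levenshtein(a, b, w_sub=1.0, w_ins=1.0, w_del=1.0):
    """Weighted Levenshtein distance between strings a and b by dynamic
    programming over substitution, insertion, and deletion costs."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else w_sub)
            d[i][j] = min(sub, d[i][j - 1] + w_ins, d[i - 1][j] + w_del)
    return d[m][n]

def correct_word(garbled, dictionary, **weights):
    """Correct a garbled word by picking the dictionary word at minimum
    (weighted) Levenshtein distance."""
    return min(dictionary, key=lambda w: weighted_levenshtein(garbled, w, **weights))
```

Because insertions and deletions are first-class operations, the method handles dropped and extra characters, not just substitutions.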

207 citations


Journal ArticleDOI
Koontz, Narendra, Fukunaga
TL;DR: This paper presents a noniterative, graph-theoretic approach to nonparametric cluster analysis that is governed by a single scalar parameter, requires no starting classification, and is capable of determining the number of clusters.
Abstract: Nonparametric clustering algorithms, including mode-seeking, valley-seeking, and unimodal set algorithms, are capable of identifying generally shaped clusters of points in metric spaces. Most mode and valley-seeking algorithms, however, are iterative and the clusters obtained are dependent on the starting classification and the assumed number of clusters. In this paper, we present a noniterative, graph-theoretic approach to nonparametric cluster analysis. The resulting algorithm is governed by a single scalar parameter, requires no starting classification, and is capable of determining the number of clusters. The resulting clusters are unimodal sets.
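
As a toy illustration of noniterative, single-parameter, graph-theoretic clustering, one can link every pair of points closer than a radius and read off connected components; the number of clusters then falls out of the data. This sketch is deliberately simpler than the paper's algorithm (a plain threshold graph does not guarantee unimodal sets), and all names are illustrative:

```python
def threshold_clusters(points, radius):
    """Link points within `radius` of each other and return the connected
    components (as lists of point indices) via union-find."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            dist_sq = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if dist_sq <= radius ** 2:
                parent[find(i)] = find(j)   # merge the two components
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

The single scalar `radius` plays the role of the governing parameter: sweeping it changes how many clusters emerge, with no starting classification required.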

197 citations


Journal ArticleDOI
O'Gorman, Clowes
TL;DR: In this paper, a parametric representation of straight picture edges and its procedural deployment in the recovery of edges from digitizations of scenes whose contents are essentially polyhedra with strong visible shadows are described.
Abstract: The recovery of straight picture edges from digitizations of scenes containing polyhedra ("line finding") is central to the functioning of scene analysis programs. While recognizing that recovery properly involves a computational mobilization of a great deal of knowledge-supported context, there remain some basic issues of representation which govern the way in which the primary data—grey levels—are addressed. The paper describes a parametric representation of straight picture edges and its procedural deployment in the recovery of edges from digitizations of scenes whose contents are essentially polyhedra with strong visible shadows.

195 citations


Journal ArticleDOI
TL;DR: A unified matrix treatment for the various orderings of the Walsh-Hadamard (WH) functions using a general framework is presented in this article, which clarifies the different definitions of the WH matrix, the various fast algorithms and the reorderings of WH functions.
Abstract: A unified matrix treatment is presented for the various orderings of the Walsh-Hadamard (WH) functions using a general framework. This approach clarifies the different definitions of the WH matrix, the various fast algorithms and the reorderings of the WH functions.
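
The fast algorithms being unified can be illustrated with the transform in natural (Hadamard) ordering; the sequency and dyadic orderings are row reorderings of the same matrix. A minimal sketch (function name illustrative):

```python
import numpy as np

def fwht_natural(x):
    """Fast Walsh-Hadamard transform in natural (Hadamard) ordering:
    log2(N) butterfly stages over a length-N input, N a power of two."""
    a = np.array(x, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            u, v = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = u + v, u - v   # butterfly
        h *= 2
    return a
```

The result agrees with multiplying by the Sylvester-type WH matrix built from repeated Kronecker products of [[1, 1], [1, -1]].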

186 citations


Journal ArticleDOI
Muth
TL;DR: A nine-valued circuit model for test generation is introduced which accounts for multiple and repeated effects of a fault in sequential circuits, yielding valid test sequences where other known procedures find no test although one exists.
Abstract: A nine-valued circuit model for test generation is introduced which takes care of multiple and repeated effects of a fault in sequential circuits. Using this model test sequences can be determined which allow multiple and repeated effects of faults on the internal state of a sequential circuit. Thus valid test sequences are derived where other known procedures, like the D-algorithm, do not find any test although one exists.

174 citations


Journal ArticleDOI
TL;DR: In this article, basic concepts, motivation, and techniques of fault tolerance are discussed, including fault classification, redundancy techniques, reliability modeling and prediction, examples of fault-tolerant computers, and some approaches to the problem of tolerating design faults.
Abstract: Basic concepts, motivation, and techniques of fault tolerance are discussed in this paper. The topics include fault classification, redundancy techniques, reliability modeling and prediction, examples of fault-tolerant computers, and some approaches to the problem of tolerating design faults.

Journal ArticleDOI
TL;DR: A very brief survey of recent developments in basic pattern recognition and image processing techniques is presented.
Abstract: Extensive research and development has taken place over the last 20 years in the areas of pattern recognition and image processing. Areas to which these disciplines have been applied include business (e. g., character recognition), medicine (diagnosis, abnormality detection), automation (robot vision), military intelligence, communications (data compression, speech recognition), and many others. This paper presents a very brief survey of recent developments in basic pattern recognition and image processing techniques.

Journal ArticleDOI
TL;DR: A syntactic approach and, in particular, a tree system may be used to represent and classify fingerprint patterns and a grammatical inference system is developed for the inference of complex structures.
Abstract: The purpose of this paper is to demonstrate how a syntactic approach and, in particular, a tree system may be used to represent and classify fingerprint patterns. The fingerprint impressions are subdivided into sampling squares which are preprocessed and postprocessed for feature extraction. A set of regular tree languages is used to describe the fingerprint patterns and a set of tree automata is used to recognize the coded patterns. In order to infer the structural configuration of the encoded fingerprints, a grammatical inference system is developed. This system utilizes a simple procedure to infer the numerous substructures and relies on a reachability matrix and a man-machine interactive technique for the inference of complex structures. Ninety-two fingerprint impressions were used to test the proposed approach. A set of 193 tree grammars was inferred from each sampling square of the 4 × 4 sampling matrix, which is capable of generating about 2 × 10^34 classes for the fingerprint patterns.

Journal ArticleDOI
TL;DR: It is shown that the degree of detectability and distinguishability of faults obtainable by TC testing is less than that obtainable by conventional testing.
Abstract: Logic circuits are usually tested by applying a sequence of input patterns S to the circuit under test and comparing the observed response sequence R bit by bit to the expected response Ro. The transition count (TC) of R, denoted c(R), is the number of times the signals forming R change value. In TC testing c(R) is recorded rather than R. A fault is detected if the observed TC c(R) differs from the correct TC c(Ro). This paper presents a formal analysis of TC testing. It is shown that the degree of detectability and distinguishability of faults obtainable by TC testing is less than that obtainable by conventional testing. It is argued that the TC tests should be constructed to maximize or minimize c(Ro). General methods are presented for constructing complete TC tests to detect both single and multiple stuck-line faults in combinational circuits. Optimal or near-optimal test sequences are derived for one- and two-level circuits. The use of TC testing for fault location is examined, and it is concluded that TC tests are relatively inefficient for this purpose.
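
The quantity c(R) and the masking it introduces are easy to state in code; this sketch is a direct transcription of the definitions above, not the paper's test-construction methods:

```python
def transition_count(r):
    """c(R): number of times the binary response sequence R changes value."""
    return sum(1 for a, b in zip(r, r[1:]) if a != b)

def tc_detects(observed, expected):
    """TC testing reports a fault iff c(observed) != c(expected); a faulty
    response with the correct transition count is masked."""
    return transition_count(observed) != transition_count(expected)
```

The last assertion below exhibits masking: the observed sequence differs bit by bit from the expected one, yet the transition counts agree, so conventional testing detects the fault and TC testing does not.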

Journal ArticleDOI
TL;DR: Existence of a class of systems requiring as few as n + T - 1 tests is shown, which improves significantly upon the previously best known class of systems that required n + 2T - 2 tests for sequential T-fault diagnosability.
Abstract: This paper is concerned with automatic fault diagnosis for digital systems with multiple faults. Three problems are treated: 1) Probabilistic fault diagnosis is presented using the graph-theoretic model of Preparata et al. The necessary and sufficient conditions to correctly diagnose any fault set whose probability of occurrence is greater than t have been developed. Some simple sufficient conditions are also discussed. 2) A general model that contains as special cases both the graph-theoretic and the Russell-Kime models is developed. Conditions for T-fault diagnosability are given, thus settling some open problems introduced by Russell and Kime. 3) Finally, sequential T-fault diagnosability is considered. Existence of a class of systems requiring as few as n + T - 1 tests is shown. This improves significantly upon the previously best known class of systems that required n + 2T - 2 tests for sequential T-fault diagnosability.

Journal ArticleDOI
TL;DR: The structure and operation of the Hearsay-II speech understanding system are described by means of a specific example illustrating the various stages of recognition. The system consists of a set of cooperating independent processes, each representing a source of knowledge used either to predict what may appear in a given context or to verify hypotheses resulting from a prediction.
Abstract: This paper describes the structure and operation of the Hearsay-II speech understanding system by the use of a specific example illustrating the various stages of recognition. The system consists of a set of cooperating independent processes, each representing a source of knowledge. The knowledge is used either to predict what may appear in a given context or to verify hypotheses resulting from a prediction. The structure of the system is illustrated by considering its operation in a particular task situation: Voice-Chess. The representation and use of various sources of knowledge are outlined. Preliminary results of the reduction in search resulting from the use of various sources of knowledge are given.

Journal ArticleDOI
TL;DR: A simple formula is derived for the distance between two points on a hexagonal grid, in terms of coordinates with respect to a pair of oblique axes.
Abstract: A simple formula is derived for the distance between two points on a hexagonal grid, in terms of coordinates with respect to a pair of oblique axes.
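
With oblique (axial) coordinates whose axes meet at 120 degrees, the grid distance admits the closed form (|dx| + |dy| + |dx + dy|) / 2. The sketch below assumes that axis convention (with the other common oblique convention the third term becomes dx - dy); the function name is illustrative:

```python
def hex_distance(p, q):
    """Grid distance between hex cells p and q given in oblique (axial)
    coordinates, assuming the two axes meet at 120 degrees."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return (abs(dx) + abs(dy) + abs(dx + dy)) // 2
```

Equivalently, the distance is max(|dx|, |dy|, |dx + dy|), since the three terms cannot all have the same sign.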

Journal ArticleDOI
TL;DR: A large and varied problem set, complete clause sets for use in testing automated theorem-proving programs and the presentation of a number of experiments with an existing program under a variety of conditions are given.
Abstract: The two objectives of this paper are 1) to give a large and varied problem set, complete clause sets for use in testing automated theorem-proving programs and 2) the presentation of a number of experiments with an existing program under a variety of conditions.

Journal ArticleDOI
TL;DR: It is shown that the shuffle-exchange interconnection network permits the efficient partitioning of an array computer into subarrays to allow for the simultaneous computation of several identical problems.
Abstract: In this paper, a control mechanism for a shuffle-exchange interconnection network of N cells is proposed. With this network it is possible to realize some important permutations in log2 N shuffle-exchange steps. In the control mechanism presented, the control variables at step k are determined by a Boolean operation of the control variables at step k - 1. The Boolean operation is very simple so that little additional hardware is required for this computation. This control scheme requires only one bit per cell instead of a destination tag of log2 N bits required by a control mechanism presented previously. The network can be used for the interconnection of memory modules and processors in an array computer, and for the accessing of blocks of consecutive data in large dynamic memories. It is also shown that the shuffle-exchange interconnection network permits the efficient partitioning of an array computer into subarrays to allow for the simultaneous computation of several identical problems.
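
The two primitive steps of such a network are easy to sketch. This toy model takes one control bit per cell pair; the paper's scheme, which derives its control bits by a Boolean recurrence, is not reproduced here:

```python
def shuffle(v):
    """Perfect shuffle of N = 2^n elements: interleave the two halves, so
    the element at index i moves to the cyclic left rotation of i's bits."""
    half = len(v) // 2
    out = [None] * len(v)
    for i in range(half):
        out[2 * i] = v[i]              # first half goes to even slots
        out[2 * i + 1] = v[half + i]   # second half goes to odd slots
    return out

def exchange(v, control):
    """Exchange step: swap each even/odd pair iff its control bit is 1."""
    out = list(v)
    for bit, i in zip(control, range(0, len(v), 2)):
        if bit:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out
```

Because each shuffle rotates the binary index left by one bit, applying it log2 N times returns the identity permutation.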

Journal ArticleDOI
TL;DR: A hierarchic computer procedure detects nodular tumors in a chest radiograph: the scanned radiograph is consolidated into several resolutions that are enhanced and analyzed by a tumor recognition process in the form of a ladder-like decision tree.
Abstract: We describe a hierarchic computer procedure for the detection of nodular tumors in a chest radiograph. The radiograph is scanned and consolidated into several resolutions which are enhanced and analyzed by a hierarchic tumor recognition process. The hierarchic structure of the tumor recognition process has the form of a ladder-like decision tree. The major steps in the decision tree are: 1) find the lung regions within the chest radiograph, 2) find candidate nodule sites (potential tumor locations) within the lung regions, 3) find boundaries for most of these sites, 4) find nodules from among the candidate nodule boundaries, and 5) find tumors from among the nodules. The first three steps locate potential nodules in the radiograph. The last two steps classify the potential nodules into nonnodules, nodules which are not tumors, and nodules which are tumors.

Journal ArticleDOI
TL;DR: For the majority of the techniques studied, much further work remains to be done before any practical applications can be foreseen; some methods, however, constitute steps in the right direction.
Abstract: The application of microprogramming in present day computers is rapidly increasing and microprogramming will undoubtedly play a major role in the next generation of computer systems. Microprogram optimization is one way to increase efficiency and can be crucial in some applications. Optimization, in this context, refers to a reduction/minimization of control store and/or execution time of microprograms. The numerous strategies are classified under four broad categories: word dimension reduction, bit dimension reduction, state reduction, and heuristic reduction. The various techniques are presented, analyzed, and compared. Unfortunately, the results of the survey are not too positive. The reason is that much of the work on optimization has been devoted to obtaining the absolute minimum solutions rather than "good engineering reductions." Whether the reduction is performed with respect to the word dimension, the bit dimension, or the number of states, existing techniques to obtain the optimum solution use exhaustive enumeration. Thus, the effort involved is prohibitive and there are no guarantees that significant reductions can be obtained. It is thus doubtful that an optimum solution can be justified even when the microcode produced is frequently executed. Heuristic reduction techniques do not guarantee an optimum solution but can provide some reduction with little effort. For the majority of the techniques studied, much further work remains to be done before any practical applications can be foreseen. Some methods, however, constitute steps in the right direction. Directions for future research are briefly outlined in the conclusions.

Journal ArticleDOI
TL;DR: A network is proposed that permits the realization of any permutation in O(√N) shuffle-exchange steps, and an efficient procedure is described for realizing a shuffle permutation of N elements on an array computer with M memory modules where M < N.
Abstract: The shuffle-exchange network is considered as an interconnection network between processors and memory modules in an array computer. Lawrie showed that this network can be used to perform some important permutations in log2 N steps. This work is extended and a network is proposed that permits the realization of any permutation in O(√N) shuffle-exchange steps. Additional modifications to the basic procedure are presented that can be applied to perform efficiently some permutations that were not realizable with the original mechanism. Finally, an efficient procedure is described for the realization of a shuffle permutation of N elements on an array computer with M memory modules where M < N.

Journal ArticleDOI
Sickel
TL;DR: A new representation and technique for proving theorems automatically that is both computationally more effective than resolution and permits a clear and concise formal description is presented.
Abstract: This paper presents a new representation and technique for proving theorems automatically that is both computationally more effective than resolution and permits a clear and concise formal description. A problem in automatic theorem proving can be specified by a set of clauses, containing literals, that represents a set of axioms and the negation of a theorem to be proved. The set of clauses can be replaced by a graph in which the nodes represent literals and the edges link unifiable complements. The nodes are partitioned by clause membership, and the edges are labeled with a most general unifying substitution. Given this representation, theorem proving becomes a graph-searching problem. The search technique presented here, in effect, unrolls the graph into sets of solution trees. The trees grow in a well-defined breadth-first way that defines a measure of proof complexity.

Journal ArticleDOI
Losq
TL;DR: Self-purging redundancy is presented, a scheme that uses a threshold voter and purges the failed modules; it is compared with other redundancy schemes on their relative merits in reliability gain, simplicity, cost, and confidence in the reliability estimation.
Abstract: The goals of this paper are to present an efficient redundancy scheme for highly reliable systems, to give a method to compute the exact reliability of such systems and to compare this scheme with other redundancy schemes. This redundancy scheme is self-purging redundancy, a scheme that uses a threshold voter and that purges the failed modules. Switches for self-purging systems are extremely simple: there is no replacement of the failed modules and module purging is quite simply implemented. Because of switch simplicity, exact reliability calculations are possible. The effects of switch reliability are quantitatively examined. For short mission times, switch reliability is the most important factor: self-purging systems have a probability of failure several times larger than the figure obtained when switches are assumed to be perfect. The influence of the relative frequency of the diverse types of failures (permanent, intermittent, stuck-at, multiple, ...) is also investigated. Reliability functions, mission time improvements, and switch efficiency are computed and displayed. Self-purging systems are compared with other redundant systems, like hybrid or NMR, for their relative merits in reliability gain, simplicity, cost, and confidence in the reliability estimation. The high confidence in the reliability evaluation of self-purging systems makes them a standard for the validation of several models that have been proposed to take into account switch reliability. The accuracy of the models using coverage factors can be evaluated in this way.
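
For orientation, the idealized reliability of a purged system with perfect switches is the standard k-out-of-n formula below. The paper's point is precisely that this idealization is optimistic, so the sketch is a baseline for comparison, not the paper's exact model:

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n independent modules, each with
    reliability r, still work -- the perfect-switch baseline for a
    threshold-voted redundant system."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))
```

For example, with module reliability 0.9 a 2-out-of-3 arrangement reaches 0.972, above the 0.9 of a single module; imperfect switches pull the real figure back toward (or below) that baseline.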

Journal ArticleDOI
TL;DR: Two fundamental solutions of the metastable-state problem in clocked systems are described, and two well-known methods of reducing failure probability for the SN74S74 are evaluated.
Abstract: This paper deals with an anomalous behavior of input synchronizers which results in the occurrence of random errors in asynchronously interfaced synchronous digital systems. The errors are caused by the undefined response time of a flip-flop as it recovers from its metastable state. To obtain their frequency, the timing diagram of the flip-flops has been analyzed and the probability distribution of the anomalous response times has been measured. As an example, the maximum response time of the SN74S74 is estimated on the basis of a set of statistical measurements. The measurement technique presented may be used for any type of input synchronizer. Two well-known methods of reducing failure probability for the SN74S74 are evaluated. Two fundamental solutions of the metastable-state problem in clocked systems are described.
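
Such measurements are usually summarized by the exponential resolution model sketched below. The formula is the standard synchronizer MTBF model, not the paper's measured SN74S74 data, and all parameter names and example values are illustrative:

```python
from math import exp

def synchronizer_mtbf(t_resolve, tau, t_window, f_clock, f_data):
    """Mean time between synchronizer failures under the standard model:
    MTBF = exp(t_resolve / tau) / (t_window * f_clock * f_data), where tau
    is the metastability time constant and t_window the aperture window."""
    return exp(t_resolve / tau) / (t_window * f_clock * f_data)
```

The model makes the two classic remedies quantitative: allowing more resolution time, or cascading synchronizer flip-flops (which adds a clock period of resolution time per stage), improves the MTBF exponentially.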

Journal ArticleDOI
TL;DR: In a number of applications of image processing, much information about objects or textures in the image can be obtained by sequential analysis of individual scan lines.
Abstract: In a number of applications of image processing, much information about objects or textures in the image can be obtained by sequential analysis of individual scan lines.

Journal ArticleDOI
TL;DR: The existence of many refutations with the same mating leads to wasteful redundancy in the search for a refutation, so it is natural to focus on the essential problem of finding appropriate matings.
Abstract: Occurrences of literals in the initial clauses of a refutation by resolution (with each clause-occurrence used only once) are mated iff their descendants are resolved with each other. This leads to an abstract notion of a mating as a relation between occurrences of literals in a set of clause-occurrences. The existence of many refutations with the same mating leads to wasteful redundancy in the search for a refutation, so it is natural to focus on the essential problem of finding appropriate matings.

Journal ArticleDOI
Jeffrey H. Hoel
TL;DR: This paper discusses some variations of Lee's algorithm which can be used in certain contexts to improve its efficiency and shows that by storing frontier cells in an array of stacks rather than a single list, costly searching operations can be eliminated without significantly increasing storage requirements.
Abstract: Lee's algorithm is a pathfinding algorithm, which is often used in computer-aided design systems to route wires on printed circuit boards. This paper discusses some variations of Lee's algorithm which can be used in certain contexts to improve its efficiency. First, it is shown that, by storing frontier cells in an array of stacks rather than a single list, costly searching operations can be eliminated without significantly increasing storage requirements. Second, it is shown that if each path's cost is the sum of the weights of its cells then retrace codes can be assigned to cells as soon as they are reached rather than when they are expanded. Third, it is shown that if the additional restriction is made that each cell's weight is not a function of the state of any nonneighbor cell, then an encoding scheme requiring only two bits/cell can be used for both rectangular and hexagonal grids.
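
The first variation (an array of stacks indexed by path cost, in place of one sorted frontier list) can be sketched as a bucket-queue router. This assumes positive integer cell weights and omits the paper's retrace codes and two-bit encoding scheme; all names are illustrative:

```python
def lee_route(weights, src, dst):
    """Lee-style routing on a rectangular grid where a path's cost is the
    sum of its cells' positive integer weights (None marks a blocked cell).
    Returns the minimum path cost from src to dst, or None if unreachable.
    The frontier is an array of stacks keyed by cost, so no list search
    is needed to find the next cell to expand."""
    rows, cols = len(weights), len(weights[0])
    INF = float("inf")
    best = {src: weights[src[0]][src[1]]}
    buckets = {best[src]: [src]}          # cost -> stack of frontier cells
    while buckets:
        c = min(buckets)                  # cheapest nonempty bucket
        stack = buckets.pop(c)
        while stack:
            cell = stack.pop()
            if best.get(cell, INF) < c:   # stale entry, already improved
                continue
            if cell == dst:
                return c
            r, k = cell
            for nr, nk in ((r + 1, k), (r - 1, k), (r, k + 1), (r, k - 1)):
                if 0 <= nr < rows and 0 <= nk < cols and weights[nr][nk] is not None:
                    cost = c + weights[nr][nk]
                    if cost < best.get((nr, nk), INF):
                        best[(nr, nk)] = cost
                        buckets.setdefault(cost, []).append((nr, nk))
    return None                           # destination unreachable
```

With unit weights this reduces to the classic wavefront expansion; with nonuniform weights it behaves like Dijkstra's algorithm with a bucket queue.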

Journal ArticleDOI
TL;DR: A model to characterize the error latency of a fault in a sequential circuit is presented, and it is shown that there is typically a delay between the occurrence of a fault and the first error in the output.
Abstract: In digital circuits there is typically a delay between the occurrence of a fault and the first error in the output. This delay is the error latency of the fault. A model to characterize the error latency of a fault in a sequential circuit is presented.

Journal ArticleDOI
Haralick
TL;DR: This correspondence shows that the work of computing a discrete cosine transform, previously implemented with one double-length fast Fourier transform, can be cut to two single-length FFT's.
Abstract: Ahmed has shown that a discrete cosine transform can be implemented by doing one double length fast Fourier transform (FFT). In this correspondence, we show that the amount of work can be cut to doing two single length FFT's.
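
The double-length construction that serves as the baseline here is worth making concrete. The sketch below computes an unnormalized DCT-II with one length-2N FFT by mirroring the input and rotating each bin; the correspondence's refinement to two single-length FFT's is not reproduced:

```python
import numpy as np

def dct2_via_fft(x):
    """Unnormalized DCT-II of x via one double-length FFT.
    Output[k] = 2 * sum_m x[m] * cos(pi * (2m + 1) * k / (2N))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = np.concatenate([x, x[::-1]])                 # even-symmetric extension
    bins = np.fft.fft(y)[:n]                         # one FFT of length 2N
    phase = np.exp(-1j * np.pi * np.arange(n) / (2 * n))
    return (phase * bins).real                       # rotate each bin back
```

Since a length-2N FFT costs roughly 2N log 2N operations against 2N log N for two length-N FFT's, splitting the work is where the correspondence's saving comes from.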