
Showing papers in "IEEE Transactions on Computers in 1977"


Journal ArticleDOI
TL;DR: In this article, a fuzzy logic is used to synthesize the linguistic control protocol of a skilled operator for industrial plants; the method has been applied to pilot-scale plants as well as in practical situations.
Abstract: This paper describes an application of fuzzy logic in designing controllers for industrial plants. A fuzzy logic is used to synthesize the linguistic control protocol of a skilled operator. The method has been applied to pilot scale plants as well as in practical situations. The merits of this method and its usefulness to control engineering are discussed. An avenue for further work in this area is described where the need is to go beyond a purely descriptive approach, and means for implementing a prescriptive or a self-organizing system are explored.

2,011 citations


Journal ArticleDOI
Narendra, Fukunaga
TL;DR: In this paper, a branch-and-bound feature subset selection algorithm is proposed to select the best subset of m features from an n-feature set without the exhaustive search that would otherwise be computationally unfeasible.
Abstract: A feature subset selection algorithm based on branch and bound techniques is developed to select the best subset of m features from an n-feature set. Existing procedures for feature subset selection, such as sequential selection and dynamic programming, do not guarantee optimality of the selected feature subset. Exhaustive search, on the other hand, is generally computationally unfeasible. The present algorithm is very efficient and it selects the best subset without exhaustive search. Computational aspects of the algorithm are discussed. Results of several experiments demonstrate the very substantial computational savings realized. For example, the best 12-feature set from a 24-feature set was selected with the computational effort of evaluating only 6000 subsets. Exhaustive search would require the evaluation of 2 704 156 subsets.
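
As a rough illustration of the branch-and-bound idea (not the authors' exact algorithm), the sketch below assumes a monotone criterion J, i.e., J can only decrease when features are removed; that monotonicity is what justifies the pruning step. The toy additive criterion in the usage lines is purely for demonstration.

```python
def branch_and_bound_select(features, m, J):
    """Return the m-feature subset maximizing a monotone criterion J.

    Monotonicity (J(T) <= J(S) whenever T is a subset of S) guarantees that
    if a partial set already scores no better than the best complete
    m-subset found so far, none of its sub-subsets can do better either.
    """
    best = {"value": float("-inf"), "subset": None}

    def search(subset, start):
        if J(subset) <= best["value"]:
            return                                # prune this branch
        if len(subset) == m:
            best["value"], best["subset"] = J(subset), subset
            return
        # branch: delete one more feature; 'start' keeps deletions ordered
        # so each m-subset is generated at most once
        for i in range(start, len(subset)):
            search(subset[:i] + subset[i + 1:], i)

    search(tuple(features), 0)
    return best["subset"], best["value"]

# toy monotone (additive) criterion, for demonstration only
scores = {0: 0.9, 1: 0.1, 2: 0.7, 3: 0.4, 4: 0.8}
J = lambda S: sum(scores[f] for f in S)
print(branch_and_bound_select(range(5), 2, J))    # best pair: features 0 and 4
```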

1,301 citations


Journal ArticleDOI
TL;DR: An experimental system is described in which certain features called moment invariants are extracted from binary television images and are used for automatic classification of aircraft types from optical images, exhibiting a significantly lower error rate than human observers.
Abstract: Although many systems for optical reading of printed matter have been developed and are now in wide use, comparatively little success has been achieved in the automatic interpretation of optical images of three-dimensional scenes. This paper is addressed to the latter problem and is specifically concerned with automatic recognition of aircraft types from optical images. An experimental system is described in which certain features called moment invariants are extracted from binary television images and are then used for automatic classification. This experimental system has exhibited a significantly lower error rate than human observers in a limited laboratory test involving 132 images of six aircraft types. Preliminary indications are that this performance can be extended to a wider class of objects and that identification can be accomplished in one second or less with a small computer.
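
To make "moment invariants" concrete, here is a short sketch (assumed, not taken from the paper) that computes the first two Hu moment invariants of a binary silhouette with NumPy; these particular combinations of normalized central moments are invariant to translation, scale, and rotation, which is what makes them usable as shape features for classification.

```python
import numpy as np

def hu_first_two(binary_image):
    """First two Hu moment invariants of a binary (0/1) image."""
    img = np.asarray(binary_image, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid

    def mu(p, q):            # central moment mu_pq (translation invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):           # normalized central moment (scale invariant)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)                             # rotation invariant
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

A classifier can then compare such invariant vectors, e.g., by nearest neighbour against stored prototypes of each aircraft type.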

800 citations


Journal ArticleDOI
Pease1
TL;DR: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism.
Abstract: This paper explores the possibility of using a large-scale array of microprocessors as a computational facility for the execution of massive numerical computations with a high degree of parallelism. By microprocessor we mean a processor realized on one or a few semiconductor chips that include arithmetic and logical facilities and some memory. The current state of LSI technology makes this approach a feasible and attractive candidate for use in a macrocomputer facility.

549 citations


Journal ArticleDOI
TL;DR: A method for detecting sharp "corners" in a chain-coded plane curve is described and a measure for the prominence ("cornerity") of a corner is introduced.
Abstract: A method for detecting sharp "corners" in a chain-coded plane curve is described. A measure for the prominence ("cornerity") of a corner is introduced. The effectiveness of the method is illustrated by means of a number of examples.
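
The sketch below is only a simplified stand-in for the paper's cornerity measure: for each vertex of a closed chain-coded curve it compares the mean direction of the k links entering the vertex with the mean direction of the k links leaving it, and reports the turning angle.

```python
import math

# 8-direction Freeman chain code, code 0 = east, increasing counterclockwise
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def cornerity(chain, k=3):
    """Turning angle at each vertex of a closed chain-coded curve
    (0 = straight, pi = full reversal); larger angles suggest corners."""
    n = len(chain)

    def mean_dir(indices):
        vx = sum(DIRS[chain[j % n]][0] for j in indices)
        vy = sum(DIRS[chain[j % n]][1] for j in indices)
        return vx, vy

    angles = []
    for i in range(n):
        ax, ay = mean_dir(range(i - k, i))   # links arriving at vertex i
        bx, by = mean_dir(range(i, i + k))   # links leaving vertex i
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na == 0 or nb == 0:
            angles.append(0.0)
            continue
        cos = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
        angles.append(math.acos(cos))
    return angles
```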

531 citations


Journal ArticleDOI
Friedman
TL;DR: A new criterion for deriving a recursive partitioning decision rule for nonparametric classification is presented and the resulting decision rule is asymptotically Bayes' risk efficient.
Abstract: A new criterion for deriving a recursive partitioning decision rule for nonparametric classification is presented. The criterion is both conceptually and computationally simple, and can be shown to have strong statistical merit. The resulting decision rule is asymptotically Bayes' risk efficient. The notion of adaptively generated features is introduced and methods are presented for dealing with missing features in both training and test vectors.
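
A minimal recursive-partitioning sketch is given below to make the structure of such a rule concrete; the split score used here (reduction in misclassification count) is only a placeholder, not the criterion proposed in the paper, and the missing-feature handling described in the abstract is omitted.

```python
import numpy as np

def grow_tree(X, y, min_leaf=5):
    """Recursive partitioning with one feature/threshold test per node.
    Placeholder split score: reduction in misclassification count.
    Labels are assumed to be small non-negative integers."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=int)
    majority = int(np.bincount(y).argmax())
    if len(y) < 2 * min_leaf or len(np.unique(y)) == 1:
        return {"leaf": majority}
    base_err = int((y != majority).sum())
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:          # candidate thresholds
            left = X[:, f] <= t
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            err = int((y[left] != np.bincount(y[left]).argmax()).sum()
                      + (y[~left] != np.bincount(y[~left]).argmax()).sum())
            if best is None or err < best[0]:
                best = (err, f, float(t), left)
    if best is None or best[0] >= base_err:
        return {"leaf": majority}                  # no useful split found
    _, f, t, left = best
    return {"feature": f, "threshold": t,
            "left": grow_tree(X[left], y[left], min_leaf),
            "right": grow_tree(X[~left], y[~left], min_leaf)}

def classify(tree, x):
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree["leaf"]
```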

393 citations


Journal ArticleDOI
TL;DR: A set of orthogonal functions related to distinctive image features is presented, which allows efficient extraction of object boundary elements from digitized images with considerable improvements over existing techniques, with a very moderate increase of computational cost.
Abstract: We study a class of fast algorithms that extract object boundaries from digitized images. A set of orthogonal functions related to distinctive image features is presented, which allows efficient extraction of such boundary elements. The properties of these functions are used to define new criteria for edge detection, and a sequential algorithm is presented. Results indicate considerable improvements over existing techniques, with a very moderate increase of computational cost.

378 citations


Journal ArticleDOI
TL;DR: A major component of the computational burden of the maximum entropy procedure is shown to be a two-dimensional convolution sum, which can be efficiently calculated by fast Fourier transform techniques.
Abstract: Two-dimensional digital image reconstruction is an important imaging process in many of the physical sciences. If the data are insufficient to specify a unique reconstruction, an additional criterion must be introduced, either implicitly or explicitly before the best estimate can be computed. Here we use a principle of maximum entropy, which has proven useful in other contexts, to design a procedure for reconstruction from noisy measurements. Implementation is described in detail for the Fourier synthesis problem of radio astronomy. The method is iterative and hence more costly than direct techniques; however, a number of comparative examples indicate that a significant improvement in image quality and resolution is possible with only a few iterations. A major component of the computational burden of the maximum entropy procedure is shown to be a two-dimensional convolution sum, which can be efficiently calculated by fast Fourier transform techniques.
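
The closing remark about the convolution sum is easy to make concrete: a two-dimensional circular convolution computed directly in O(N^4) operations agrees with the FFT-based product, which costs only O(N^2 log N). The NumPy sketch below (an illustration, not code from the paper) checks this on a small example.

```python
import numpy as np

def circular_convolve_fft(a, b):
    """2-D circular convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# sanity check against the direct convolution sum on a small example
rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))
direct = np.zeros_like(a)
for m in range(8):
    for n in range(8):
        direct[m, n] = sum(a[i, j] * b[(m - i) % 8, (n - j) % 8]
                           for i in range(8) for j in range(8))
assert np.allclose(direct, circular_convolve_fft(a, b))
```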

262 citations


Journal ArticleDOI
TL;DR: A relaxation process is described and applied to the detection of smooth lines and curves in noisy, real-world images; it is effective even for curves of low contrast and even when many curves lie close to one another.
Abstract: A relaxation process is described and is applied to the detection of smooth lines and curves in noisy, real world images. There are nine labels associated with each image point, eight labels indicating line segments at various orientations and one indicating the no-line case. Attached to each label is a probability. In the relaxation process, interaction takes place among the probabilities at neighboring points. This permits line segments in compatible orientations to strengthen one another, and incompatible segments to weaken one another. Similarly, no-line labels are reinforced by neighboring no-line labels and weakened by appropriately oriented line labels. This process converges, in only a few iterations, to a condition in which points lying on long curves have achieved high line probabilities, while other points have high no-line probabilities. There is some tendency, under this process, for curves to thicken; however, a thinning procedure can be incorporated to counteract this. The process is effective even for curves of low contrast, and even when many curves lie close to one another.
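
One step of the standard probabilistic relaxation update is sketched below; for this application the compatibility coefficients r would encode how strongly a line label at one point supports compatible orientations (and the no-line label) at its neighbours. The exact coefficients used in the paper are not reproduced here.

```python
import numpy as np

def relaxation_step(p, r):
    """One probabilistic relaxation update.

    p : (num_points, num_labels) label probabilities, each row sums to 1
    r : (num_points, num_labels, num_points, num_labels) compatibilities
        in [-1, 1]; positive values reinforce, negative values suppress
    """
    num_points = p.shape[0]
    # support that point i's label a receives from all neighbours' labels
    q = np.einsum('iajb,jb->ia', r, p) / num_points
    updated = np.clip(p * (1.0 + q), 0.0, None)
    return updated / updated.sum(axis=1, keepdims=True)
```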

231 citations


Journal ArticleDOI
Hunt1
TL;DR: A model is used which explicitly includes nonlinear relations between intensity and film density, by use of the D-log E curve, and a maximum a posteriori (Bayes) estimate of the restored image is derived.
Abstract: Prior techniques in digital image restoration have assumed linear relations between the original blurred image intensity, the silver density recorded on film, and the film-grain noise. In this paper a model is used which explicitly includes nonlinear relations between intensity and film density, by use of the D-log E curve. Using Gaussian models for the image and noise statistics, a maximum a posteriori (Bayes) estimate of the restored image is derived. The MAP estimate is nonlinear, and computer implementation of the estimator equations is achieved by a fast algorithm based on direct maximization of the posterior density function. An example of the restoration method implemented on a digital image is shown.

218 citations


Journal ArticleDOI
TL;DR: The computational cost of template matching can be reduced by using only a subtemplate, and applying the rest of the template only when the subtemplate's degree of match exceeds a threshold.
Abstract: The computational cost of template matching can be reduced by using only a subtemplate, and applying the rest of the template only when the subtemplate's degree of match exceeds a threshold. A probabilistic analysis of this approach is given, with emphasis on the choice of subtemplate size to minimize the expected computational cost.
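
A minimal version of the two-stage idea follows; assumed details (not from the paper): the subtemplate is taken to be the template's first rows, and match quality is the negative sum of squared differences, so the rejection threshold is also on that scale (e.g., -50.0 accepts positions whose subtemplate SSD is below 50).

```python
import numpy as np

def two_stage_match(image, template, sub_rows, threshold):
    """Score a subtemplate everywhere; evaluate the full template only where
    the subtemplate's degree of match exceeds the threshold."""
    th, tw = template.shape
    H, W = image.shape
    best_score, best_pos = -np.inf, None
    sub = template[:sub_rows]
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            sub_score = -np.sum((image[r:r + sub_rows, c:c + tw] - sub) ** 2)
            if sub_score < threshold:
                continue                  # cheap rejection: skip the full template
            full_score = -np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if full_score > best_score:
                best_score, best_pos = full_score, (r, c)
    return best_pos, best_score
```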

Journal ArticleDOI
TL;DR: A method for the sequential mapping of points in a high-dimensional space onto a plane is presented, where whenever a new point is mapped, its distances to two points previously mapped are exactly preserved.
Abstract: A method for the sequential mapping of points in a high-dimensional space onto a plane is presented. Whenever a new point is mapped, its distances to two points previously mapped are exactly preserved. On the resulting map, 2M - 3 of the original distances can be exactly preserved. The mapping is based on the distances of a minimal spanning tree constructed from the points. All of the distances on the minimal spanning tree are exactly preserved.
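
The key geometric step, placing a new plane point so that its distances to two already-mapped reference points are preserved exactly, is a circle-circle intersection. The helper below is a sketch of that single step (not the paper's full procedure), and it ignores the degenerate case where the circles do not intersect.

```python
import math

def place_point(p1, d1, p2, d2, pick_upper=True):
    """Place a 2-D point at distance d1 from p1 and d2 from p2
    (assumes the two circles intersect); pick_upper selects one of the
    two symmetric solutions consistently."""
    x1, y1 = p1
    x2, y2 = p2
    d12 = math.hypot(x2 - x1, y2 - y1)
    a = (d1 ** 2 - d2 ** 2 + d12 ** 2) / (2 * d12)   # distance from p1 to the foot
    h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))        # clamp guards numerical slack
    ex, ey = (x2 - x1) / d12, (y2 - y1) / d12        # unit vector p1 -> p2
    px, py = x1 + a * ex, y1 + a * ey                # foot of the perpendicular
    sign = 1.0 if pick_upper else -1.0
    return (px - sign * h * ey, py + sign * h * ex)

# example: p1=(0,0), p2=(4,0), d1=d2=2.5 -> (2.0, 1.5)
```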

Journal ArticleDOI
TL;DR: In this paper, on-line algorithms for division and multiplication are developed and it is assumed that the operands as well as the result flow through the arithmetic unit in a digit-by-digit, most significant digit first fashion.
Abstract: In this paper, on-line algorithms for division and multiplication are developed. It is assumed that the operands as well as the result flow through the arithmetic unit in a digit-by-digit, most significant digit first fashion. The use of a redundant digit set, at least for the digits of the result, is required.

Journal ArticleDOI
TL;DR: Methods of detecting the angles and sides of a simple closed curve are developed that use approximations to the curve of varying coarseness, and construct a hierarchy of angles and sides that describes the curve at any desired level of detail.
Abstract: Methods of detecting the angles and sides of a simple closed curve are developed. These methods use approximations to the curve that have varying degrees of coarseness, and construct a hierarchy of angles and sides that describes the curve at any desired level of detail. The results can be used to obtain natural polygonal approximations to the curve.

Journal ArticleDOI
Stenzel, Kubitz, Garcia
TL;DR: A compact, fast, parallel multiplication scheme of the generation-reduction type using generalized Dadda-type pseudoadders for reduction and m × m multipliers for generation is discussed.
Abstract: This paper discusses a compact, fast, parallel multiplication scheme of the generation-reduction type using generalized Dadda-type pseudoadders for reduction and m × m multipliers for generation. The implications of present and future LSI are considered, a partitioning algorithm is presented, and the results obtained for a 24 × 24-bit implementation are discussed.

Journal ArticleDOI
TL;DR: STARAN® has a number of array modules, each with a multidimensional access (MDA) memory, which can be accessed in either the word direction or the bit-slice direction, making associative processing possible without the need for costly, custom-made logic-in-memory chips.
Abstract: STARAN® has a number of array modules, each with a multidimensional access (MDA) memory. The implementation of this memory with random-access memory (RAM) chips is described. Because data can be accessed in either the word direction or the bit-slice direction, associative processing is possible without the need for costly, custom-made logic-in-memory chips.
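
As a software analogy only (the real MDA memory uses a permutation network over standard RAM chips), the word-versus-bit-slice access pattern looks like this:

```python
import numpy as np

# 16 words of 8 bits each, stored as a bit matrix (MSB first in each row)
mem = np.unpackbits(np.arange(16, dtype=np.uint8)[:, None], axis=1)

word_3 = mem[3, :]     # word access: all 8 bits of word 3
slice_lsb = mem[:, 7]  # bit-slice access: the LSB of every word at once,
                       # the pattern bit-serial associative processing relies on
```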

Journal ArticleDOI
TL;DR: The principal advantages of the data flow multiprocessor over conventional designs are reduced complexity of the processor-memory connection, greater use of pipelining, and a simpler representation and implementation of concurrent activity.
Abstract: This paper presents the architecture of a highly concurrent multiprocessor which runs programs expressed in data flow notation. Sequencing of data flow instruction execution depends only on the availability of operands required by instructions. Because data flow instructions have no side effects, unrelated instructions can be executed concurrently without interference if each has its required operands. The data flow multiprocessor is hierarchically constructed as a network of simple modules. All module interactions are asynchronous. The principal working elements of the machine are a set of activation processors, each of which performs the execution of one invocation of a data flow procedure held in a local memory within the processor. A pipeline of logical units within each processor executes several concurrently active instructions. All data flow operations are performed within single processors except procedure calls, which cause the creation of new activations in other processors, and operations on large data structures, which are performed by structure controller modules using values stored in a central memory. Concurrency within a data flow procedure provides a processor with something to do while a slow operation is being processed. The behavior of the machine has been specified by a formal description language and has been shown to correctly implement the data flow language. The principal advantages of the data flow multiprocessor over conventional designs are reduced complexity of the processor-memory connection, greater use of pipelining, and a simpler representation and implementation of concurrent activity.
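
A toy interpreter of the firing rule described above (an instruction may execute as soon as all of its operands are available; since there are no side effects, all ready instructions could fire in parallel) might look like the following; it illustrates the execution model only, not the machine's architecture.

```python
import operator

def run_dataflow(instructions, inputs):
    """instructions: dict name -> (op, operand_names); inputs: dict name -> value."""
    values = dict(inputs)
    pending = dict(instructions)
    while pending:
        ready = [n for n, (_, args) in pending.items()
                 if all(a in values for a in args)]
        if not ready:
            raise RuntimeError("deadlock: no instruction has all its operands")
        for n in ready:                      # these could fire concurrently
            op, args = pending.pop(n)
            values[n] = op(*(values[a] for a in args))
    return values

# (a + b) * (a - b), expressed purely by data dependencies
result = run_dataflow(
    {"s": (operator.add, ("a", "b")),
     "d": (operator.sub, ("a", "b")),
     "p": (operator.mul, ("s", "d"))},
    {"a": 7, "b": 3})
print(result["p"])   # 40
```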

Journal ArticleDOI
TL;DR: This paper presents an LSI-oriented approach to computer-maintained rectangular arrays of programmable logic that allows a computer to embed a perfect, very reliable digital machine on an entire flawed semiconductor wafer.
Abstract: This paper presents an LSI-oriented approach to computer-maintained rectangular arrays of programmable logic. No signal line connects more than a few cells. A loading mechanism in each cell allows a computer directly connected to one cell to load any good cell that is not walled off by flawed cells. A loading arm is grown by programming cells to form a path that carries loading information. Cell mechanisms allow a computer to monitor the growth of a loading arm, and to change the arm's route or retract the arm to avoid faulty cells. Properly loaded cells carry test signals between a tested cell and a testing computer directly connected to only a few cells. The computer discovers the faulty cells in an array, and repairs the array by loading the array's good cells. This allows a computer to embed a perfect, very reliable digital machine on an entire flawed semiconductor wafer.

Journal ArticleDOI
TL;DR: It is shown that an optimal tree can be recursively constructed through the application of invariant imbedding (dynamic programming) and an algorithm is detailed which embodies this recursive approach.
Abstract: We consider the problem of optimally partitioning an N-dimensional lattice, L = L1 × L2 × ... × LN, where Lj is a one-dimensional lattice with kj elements, by means of a binary tree into specified (labeled) subsets of L. Such lattices arise from problems in pattern classification, in nonlinear regression, in defining logical equations, and a number of related areas. When viewed as the partitioning of a vector space, each point in the lattice corresponds to a subregion of the space which is relatively homogeneous with respect to classification or range of a dependent variable. Optimality is defined in terms of a general cost function which includes the following: 1) min-max path length (i.e., minimize the maximum number of nodes traversed in making a decision); 2) minimum number of nodes in the tree; and 3) expected path length. It is shown that an optimal tree can be recursively constructed through the application of invariant imbedding (dynamic programming). An algorithm is detailed which embodies this recursive approach. The algorithm allows the assignment of a "don't care" label to elements of L.

Journal ArticleDOI
Ashjaee, Reddy
TL;DR: It is shown that the proposed TSC checker is applicable to certain Berger codes and residue codes, and a class of codes equivalent to Berger codes is derived for which the proposed checker is TSC.
Abstract: Design of totally self-checking (TSC) checkers for separable codes is studied. Assuming a specific checker design, a sufficient condition on separable codes is derived such that the assumed checker is TSC. It is shown that the proposed checker is applicable to certain Berger codes and residue codes. A class of codes equivalent to Berger codes is derived for which the proposed checker is TSC.
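
For context, a Berger code (one of the separable codes the checker targets) appends, as its check symbol, the binary count of zeros in the information bits; a unidirectional error then moves the actual zero count and the encoded count in opposite directions, so it is always detected. The sketch below illustrates encoding and code-membership checking only, not the TSC checker circuit itself.

```python
def berger_encode(info_bits):
    """Append the Berger check symbol: the binary count of 0's in info_bits
    (most significant check bit first). Returns (codeword, num_check_bits)."""
    k = len(info_bits)
    num_check = max(1, k.bit_length())          # enough bits to count up to k zeros
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(num_check))]
    return info_bits + check, num_check

def is_berger_codeword(word, num_check):
    info, check = word[:-num_check], word[-num_check:]
    counted = int("".join(map(str, check)), 2)
    return info.count(0) == counted

codeword, nc = berger_encode([1, 0, 1, 1, 0])   # two zeros -> check symbol 010
assert is_berger_codeword(codeword, nc)
codeword[0] ^= 1                                # a single-bit error
assert not is_berger_codeword(codeword, nc)
```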

Journal ArticleDOI
TL;DR: The theory of permutation groups is used to aid in the analysis of the cycle structures of the different interconnection networks, and the importance of the cycle structure to the SIMD machine architect is discussed.
Abstract: Various techniques for evaluating and comparing interconnection networks for SIMD machines are presented. These techniques are demonstrated by using them to analyze the networks that have been proposed in the literature. The model of SIMD machines used in the first part of the paper requires all data transfers between processing elements to be representable as permutations on the processing element addresses. We use the theory of permutation groups to aid in the analysis of the cycle structures of the different interconnection networks and discuss the importance of the cycle structure to the SIMD machine architect. A processing element address masking scheme, to determine which processing elements will be active, is introduced. The effects of this masking system when used with different networks are examined. Model independent techniques for proving lower bounds on the time required for a network to simulate a particular interconnection are presented. These techniques are used to prove a lower time bound on the simulation of each network by each of the other networks.
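
As an example of the kind of cycle-structure analysis the paper performs, the sketch below builds the perfect-shuffle interconnection on 2^n processing elements (a left rotation of the PE address bits, one of the commonly studied networks) and decomposes it into cycles.

```python
def cycles(perm):
    """Cycle decomposition of a permutation given as a list: i -> perm[i]."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

def perfect_shuffle(n_bits):
    """Shuffle interconnection on 2**n_bits PEs: left-rotate the address bits."""
    n = 1 << n_bits
    return [((i << 1) | (i >> (n_bits - 1))) & (n - 1) for i in range(n)]

print(cycles(perfect_shuffle(3)))
# [[0], [1, 2, 4], [3, 6, 5], [7]] -- PEs 0 and 2**n - 1 are fixed points,
# and every other cycle length divides the number of address bits
```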

Journal ArticleDOI
Pavlidis
TL;DR: The problem of optimally locating the breakpoints in a continuous piecewise-linear approximation is examined; the integral square error E of the approximation is used as the cost function, and its first and second derivatives are evaluated so that Newton's method can be applied.
Abstract: The problem of locating optimally the breakpoints in a continuous piecewise-linear approximation is examined. The integral square error E of the approximation is used as the cost function. Its first and second derivatives are evaluated and this allows the application of Newton's method for solving the problem. Initialization is performed with the help of the split-and-merge method [8]. The evaluation of the derivatives is performed for both waveforms and contours. Examples of implementation of both cases are shown.

Journal ArticleDOI
TL;DR: A parallel computational method is described that provides a simple and fast algorithm for the evaluation of polynomials, certain rational functions and arithmetic expressions, solving a class of systems of linear equations, or performing the basic arithmetic operations in a fixed-point number representation system.
Abstract: A parallel computational method, amenable for efficient hardware-level implementation, is described. It provides a simple and fast algorithm for the evaluation of polynomials, certain rational functions and arithmetic expressions, solving a class of systems of linear equations, or performing the basic arithmetic operations in a fixed-point number representation system. The time required to perform the computation is of the order of m carry-free addition operations, m being the number of digits in the solution. In particular, the method is suitable for fast evaluation of mathematical functions in hardware.

Journal ArticleDOI
TL;DR: The use of a variation of chain encoding of binary boundary images is described in the context of a scene analysis system that can learn single views of three-dimensional curved objects and can later recognize various partial views of the same projections of the objects.
Abstract: The use of a variation of chain encoding of binary boundary images is described in the context of a scene analysis system. The system can learn single views of the three-dimensional curved objects and can later recognize various partial views of the same projections of the objects. The system is completely automatic except for specifying whether the object is to be learned or recognized.

Journal ArticleDOI
TL;DR: The object of this paper is to bring together several models of interleaved or parallel memory systems and to expose some of the underlying assumptions about the address streams in each model.
Abstract: The object of this paper is to bring together several models of interleaved or parallel memory systems and to expose some of the underlying assumptions about the address streams in each model. We derive the performance for each model, either analytically or by simulation, and discuss why it yields better or worse performance than other models (e.g., because of dependencies in the address stream or hardware queues, etc.). We also show that the performance of a properly designed system can be a linear rather than a square root function of the number of memories and processors.
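
One of the simplest such models can be simulated in a few lines: m independent, uniformly random requests per cycle, with requests that collide on a bank simply dropped (no queues, no address-stream dependencies). This is offered only as an illustration of the kind of model compared in the paper.

```python
import random

def simulate_interleaving(num_banks, requests_per_cycle, cycles=10000, seed=0):
    """Average requests serviced per cycle when conflicting requests are dropped."""
    rng = random.Random(seed)
    serviced = 0
    for _ in range(cycles):
        banks = {rng.randrange(num_banks) for _ in range(requests_per_cycle)}
        serviced += len(banks)            # distinct banks hit = requests serviced
    return serviced / cycles

# With m random requests per cycle and conflicts dropped, expected bandwidth is
# m * (1 - (1 - 1/m)**m), about 0.63 * m for large m; the paper's other models
# (in-order streams stopped at the first conflict, hardware queues, dependent
# address streams) behave quite differently.
print(simulate_interleaving(16, 16))
```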

Journal ArticleDOI
TL;DR: A mathematical model is presented for determining the extent of memory interference in multiprocessor systems that takes into account the numbers of processors and memory modules in the system and their relative service times, as well as the patterns of memory accesses made by the processors.
Abstract: This paper presents a mathematical model for determining the extent of memory interference in multiprocessor systems. The model takes into account the numbers of processors and memory modules in the system and their relative service times, as well as the patterns of memory accesses made by the processors. The results predicted by the model are compared with simulation results and with results from other exact or approximate models, where these exist.

Journal ArticleDOI
Mitchell, Myers, Boyne
TL;DR: In this paper, the relative frequency of local extremes in grey level is used as the principal measure for image texture analysis, which is computationally simple and can be implemented in hardware for real-time analysis.
Abstract: A new technique for image texture analysis is described which uses the relative frequency of local extremes in grey level as the principal measure. This method is invariant to multiplicative gain changes (such as caused by changes in illumination level or film processing) and is invariant to image resolution and sampling rate if the image is not undersampled. The algorithm described is computationally simple and can be implemented in hardware for real-time analysis. Comparisons are made between this new method and the spatial dependence method of texture analysis using 49 samples of each of eight textures. The new method seems just as accurate and considerably faster.
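
The core measurement is easy to sketch: count, along scan lines, the pixels that are strict local maxima or minima of grey level, and report their relative frequency. Because only the ordering of neighbouring grey levels matters, a multiplicative gain change leaves the count untouched. (Sketch only; the paper's full method adds further details.)

```python
import numpy as np

def extrema_frequency(gray, axis=1):
    """Fraction of interior pixels that are strict local extremes of grey level
    along the chosen axis (axis=1: along rows, i.e., scan lines)."""
    g = np.asarray(gray, dtype=float)
    left = np.roll(g, 1, axis=axis)
    right = np.roll(g, -1, axis=axis)
    interior = np.ones_like(g, dtype=bool)
    if axis == 1:                     # exclude wrap-around borders from np.roll
        interior[:, [0, -1]] = False
    else:
        interior[[0, -1], :] = False
    is_max = (g > left) & (g > right)
    is_min = (g < left) & (g < right)
    return ((is_max | is_min) & interior).sum() / interior.sum()
```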

Journal ArticleDOI
TL;DR: Improved methods for gray-scale registration and for choosing a best seam path with specified endpoints using dynamic programming are described.
Abstract: A previous report described techniques for creating photomosaics: first, the two overlapping images are brought into register; second, a seam from the top of the overlap region to its bottom is tracked one row at a time; finally, the resultant artificial edge introduced by the seam is locally smoothed by a ramp function. This correspondence describes improved methods for gray-scale registration and for choosing a best seam path with specified endpoints using dynamic programming.
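
The dynamic-programming seam search can be sketched as follows, with the per-pixel cost taken to be whatever dissimilarity is computed over the overlap region (for example, the absolute grey-level difference between the two registered images); for brevity the endpoints are left free here rather than specified as in the paper.

```python
import numpy as np

def best_seam(cost):
    """Minimum-cost top-to-bottom seam through a 2-D cost array; the seam moves
    down one row at a time to the same or a horizontally adjacent column.
    Returns one column index per row."""
    cost = np.asarray(cost, dtype=float)
    H, W = cost.shape
    acc = cost.copy()                      # accumulated cost of the best seam so far
    back = np.zeros((H, W), dtype=int)     # backpointers to the previous row's column
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(c - 1, 0), min(c + 2, W)
            prev = int(np.argmin(acc[r - 1, lo:hi])) + lo
            back[r, c] = prev
            acc[r, c] = cost[r, c] + acc[r - 1, prev]
    seam = [int(np.argmin(acc[-1]))]
    for r in range(H - 1, 0, -1):          # trace the backpointers upward
        seam.append(int(back[r, seam[-1]]))
    return seam[::-1]
```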

Journal ArticleDOI
TL;DR: The Hearsay II speech-understanding system (HSII) as discussed by the authors is an implementation of a knowledge-based multiprocessing artificial intelligence (AI) problem-solving organization.
Abstract: The Hearsay II speech-understanding system (HSII) (Lesser et al [11], Fennell [9], and Erman and Lesser [6]) is an implementation of a knowledge-based multiprocessing artificial intelligence (AI) problem-solving organization. HSII is intended to represent a problem-solving organization which is applicable for implementation in a parallel hardware environment such as C.mmp (Bell et al [2]). The primary characteristics of this organization include: 1) multiple, diverse, independent and asynchronously executing knowledge sources (KS's), 2) cooperating (in terms of control) via a generalized form of the hypothesize-and-test paradigm involving the data-directed invocation of KS processes, and 3) communicating (in terms of data) via a shared blackboard-like data base in which the current data state is held in a homogeneous, multidimensional, directed-graph structure. The object of this paper is to explore several of the ramifications of such a problem-solving organization by examining the mechanisms and policies underlying HSII which are necessary for supporting its organization as a multiprocessing system. In addition, a multiprocessor simulation study is presented which details the effects of actually implementing such a parallel organization for use in a particular application area, that of speech understanding.

Journal ArticleDOI
Yachida, Tsuji
TL;DR: A versatile machine vision system that can recognize a variety of complex industrial parts and measure the necessary parameters for assembly, such as the locations of screw holes is described.
Abstract: As a step to automate assembly of various industrial parts, this paper describes a versatile machine vision system that can recognize a variety of complex industrial parts and measure the necessary parameters for assembly, such as the locations of screw holes. Emphasis is given to a method for extracting useful features from the scene data for complex industrial parts so that accurate recognition of them is possible. The proposed method has the following features: 1) simple features are detected first in the scene and more complex features are examined later, using the locations of the previously found features; 2) the system is provided with a high-level supervisor that analyzes the current information obtained from the scene and structural models of various objects, and proposes the features to be examined next for recognizing the objects in the scene; 3) the supervisor has problem-solving capabilities to select the most promising feature among many others; 4) the structural models are used to suggest the locations of the features to be examined; and 5) several sophisticated feature extractors are used to detect the complex features. An effort is also made to make the system versatile so that it can be readily applied to a variety of different industrial parts. The proposed system has been tested on several sets of parts for small industrial gasoline engines and the results were satisfactory.