Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence in 1980"


Journal ArticleDOI
TL;DR: Experimental results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing.
Abstract: Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays are developed based on their local mean and variance. These algorithms are nonrecursive and do not require the use of any kind of transform. They share the common characteristic that each pixel is processed independently. Consequently, this approach has an obvious advantage when used in real-time digital image processing applications and where a parallel processor can be used. For both the additive and multiplicative cases, the a priori mean and variance of each pixel are derived from its local mean and variance. Then, the minimum mean-square error estimator in its simplest form is applied to obtain the noise filtering algorithms. For multiplicative noise a statistically optimal linear approximation is made. Experimental results show that such an assumption yields a very effective filtering algorithm. Examples on images containing 256 × 256 pixels are given. Results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing.
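
A minimal sketch of the additive-noise case described above: local mean and variance are computed in a sliding window and fed into the simplest minimum mean-square error estimator. The window size and noise variance below are illustrative assumptions, not the paper's values, and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics_filter_additive(image, window=7, noise_var=100.0):
    """Local-statistics MMSE filtering for additive noise (a sketch)."""
    img = image.astype(float)
    local_mean = uniform_filter(img, window)
    local_sq_mean = uniform_filter(img * img, window)
    local_var = local_sq_mean - local_mean ** 2
    # A priori (signal) variance: observed local variance minus the noise
    # variance, clipped at zero so the gain stays in [0, 1].
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    return local_mean + gain * (img - local_mean)
```

Because each output pixel depends only on its own window, the filter is trivially parallelizable, which is the real-time advantage the abstract emphasizes.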

2,701 citations


Journal ArticleDOI
TL;DR: The theory of Zangwill is used to prove that arbitrary sequences generated by these (Picard iteration) procedures always terminate at a local minimum or, at worst, always contain a subsequence which converges to a local minimum of the generalized least-squares objective functional which defines the problem.
Abstract: In this paper the convergence of a class of clustering procedures, popularly known as the fuzzy ISODATA algorithms, is established. The theory of Zangwill is used to prove that arbitrary sequences generated by these (Picard iteration) procedures always terminate at a local minimum or, at worst, always contain a subsequence which converges to a local minimum of the generalized least-squares objective functional which defines the problem.
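
For context, a minimal sketch of the Picard iteration behind fuzzy ISODATA (fuzzy c-means): memberships and cluster centers are updated alternately until a fixed point is reached. The fuzzifier value, tolerance, and random initialization are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Picard iteration for the fuzzy c-means objective (a sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distances from every point to every center
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:    # (approximate) fixed point of the map
            U = U_new
            break
        U = U_new
    return centers, U
```

The convergence result above concerns exactly such iterate sequences of this alternating update.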

965 citations


Journal ArticleDOI
TL;DR: The results obtained indicate that the SGLDM is the most powerful algorithm of the four considered, and that the GLDM is more powerful than the PSM.
Abstract: An evaluation of the ability of four texture analysis algorithms to perform automatic texture discrimination will be described. The algorithms which will be examined are the spatial gray level dependence method (SGLDM), the gray level run length method (GLRLM), the gray level difference method (GLDM), and the power spectral method (PSM). The evaluation procedure employed does not depend on the set of features used with each algorithm or the pattern recognition scheme. Rather, what is examined is the amount of texture-context information contained in the spatial gray level dependence matrices, the gray level run length matrices, the gray level difference density functions, and the power spectrum. The comparison will be performed in two steps. First, only Markov generated textures will be considered. The Markov textures employed are similar to the ones used by perceptual psychologist B. Julesz in his investigations of human texture perception. These Markov textures provide a convenient mechanism for generating certain example texture pairs which are important in the analysis process. In the second part of the analysis the results obtained by considering only Markov textures will be extended to all textures which can be represented by translation stationary random fields of order two. This generalization clearly includes a much broader class of textures than Markovian ones. The results obtained indicate that the SGLDM is the most powerful algorithm of the four considered, and that the GLDM is more powerful than the PSM.
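
As a rough illustration of the SGLDM statistic, the sketch below builds a spatial gray level dependence (co-occurrence) matrix for one displacement. The symmetrization and normalization choices are illustrative, not necessarily those of the paper.

```python
import numpy as np

def cooccurrence_matrix(image, dx=1, dy=0, levels=256):
    """Gray level co-occurrence matrix for displacement (dx, dy) (a sketch)."""
    img = np.asarray(image, dtype=np.intp)
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    P += P.T                     # make the dependence matrix symmetric
    return P / P.sum()           # normalize to a joint probability estimate
```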

957 citations


Journal ArticleDOI
TL;DR: An algorithm for matching images of real world scenes is presented, which quickly converges to good estimates of disparity, which reflect the spatial organization of the scene.
Abstract: An algorithm for matching images of real world scenes is presented. The matching is a specification of the geometrical disparity between the images and may be used to partially reconstruct the three-dimensional structure of the scene. Sets of candidate matching points are selected independently in each image. These points are the locations of small, distinct features which are likely to be detectable in both images. An initial network of possible matches between the two sets of candidates is constructed. Each possible match specifies a possible disparity of a candidate point in a selected reference image. An initial estimate of the probability of each possible disparity is made, based on the similarity of subimages surrounding the points. These estimates are iteratively improved by a relaxation labeling technique making use of the local continuity property of disparity that is a consequence of the continuity of real world surfaces. The algorithm is effective for binocular parallax, motion parallax, and object motion. It quickly converges to good estimates of disparity, which reflect the spatial organization of the scene.
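
A much-simplified sketch of the relaxation step described above, under the assumption that all candidate points share one discrete set of candidate disparities: each point's disparity probabilities are reweighted by the support of its neighbors and renormalized. The constants A and B and the update form are illustrative, not the paper's exact scheme.

```python
import numpy as np

def relax_disparity_probabilities(P, neighbors, A=0.3, B=3.0, iters=10):
    """P[i, k]: probability that point i has the k-th candidate disparity.
    neighbors[i]: indices of nearby candidate points lending support."""
    P = P.copy()
    for _ in range(iters):
        new_P = np.empty_like(P)
        for i, nbrs in enumerate(neighbors):
            # support for each disparity = summed neighbor probability for it
            support = P[nbrs].sum(axis=0) if len(nbrs) else np.zeros(P.shape[1])
            new_P[i] = P[i] * (A + B * support)
            new_P[i] /= new_P[i].sum()       # renormalize to a probability vector
        P = new_P
    return P
```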

891 citations


Journal ArticleDOI
TL;DR: Two algorithms for parametric piecewise polynomial evaluation and generation are described and are shown to generalize to new algorithms for obtaining curve and surface intersections and for the computer display of parametric curves and surfaces.
Abstract: Two algorithms for parametric piecewise polynomial evaluation and generation are described. The mathematical development of these algorithms is shown to generalize to new algorithms for obtaining curve and surface intersections and for the computer display of parametric curves and surfaces.

538 citations


Journal ArticleDOI
TL;DR: A system capable of analyzing image sequences of human motion is described, structured as a feedback loop between high and low levels: predictions are made at the semantic level and verifications are sought at the image level.
Abstract: A system capable of analyzing image sequences of human motion is described. The system is structured as a feedback loop between high and low levels: predictions are made at the semantic level and verifications are sought at the image level. The domain of human motion lends itself to a model-driven analysis, and the system includes a detailed model of the human body. All information extracted from the image is interpreted through a constraint network based on the structure of the human model. A constraint propagation operator is defined and its theoretical properties outlined. An implementation of this operator is described, and results of the analysis system for short image sequences are presented.

446 citations


Journal ArticleDOI
TL;DR: The paper formulates the theoretical framework and a method for inferring general and optimal descriptions of object classes from examples of classification or partial descriptions and an experimental computer implementation of the method is briefly described and illustrated by an example.
Abstract: The determination of pattern recognition rules is viewed as a problem of inductive inference, guided by generalization rules, which control the generalization process, and problem knowledge rules, which represent the underlying semantics relevant to the recognition problem under consideration. The paper formulates the theoretical framework and a method for inferring general and optimal (according to certain criteria) descriptions of object classes from examples of classification or partial descriptions. The language for expressing the class descriptions and the guidance rules is an extension of the first-order predicate calculus, called variable-valued logic calculus VL21. VL21 involves typed variables and contains several new operators especially suited for conducting inductive inference, such as selector, internal disjunction, internal conjunction, exception, and generalization. Important aspects of the theory include: 1) a formulation of several kinds of generalization rules; 2) an ability to uniformly and adequately handle descriptors (i.e., variables, functions, and predicates) of different type (nominal, linear, and structured) and of different arity (i.e., different number of arguments); 3) an ability to generate new descriptors, which are derived from the initial descriptors through a rule-based system (i.e., an ability to conduct the so called constructive induction); 4) an ability to use the semantics underlying the problem under consideration. An experimental computer implementation of the method is briefly described and illustrated by an example.

426 citations


Journal ArticleDOI
TL;DR: Algorithms for shape analysis are reviewed and classified under various criteria: whether they examine the boundary only or the whole area, and whether they describe the original picture in terms of scalar measurements or through structural descriptions.
Abstract: Algorithms for shape analysis are reviewed and classified under various criteria: whether they examine the boundary only or the whole area, and whether they describe the original picture in terms of scalar measurements or through structural descriptions. The emphasis is on methodologies which have been popular during the last five years and, among them, on those which are information preserving.

381 citations


Journal ArticleDOI
TL;DR: A set of three-dimensional moment invariants, invariant under size, orientation, and position change, is proposed; this property is highly significant in compressing the data needed in three-dimensional object recognition.
Abstract: Recognition of three-dimensional objects independent of size, position, and orientation is an important and difficult problem of scene analysis. The use of three-dimensional moment invariants is proposed as a solution. The generalization of the results of two-dimensional moment invariants which had linked two-dimensional moments to binary quantics is done by linking three-dimensional moments to ternary quantics. The existence and number of nth order moments in two and three dimensions is explored. Algebraic invariants of several ternary forms under different orthogonal transformations are derived by using the invariant property of coefficients of ternary forms. The result is a set of three-dimensional moment invariants which are invariant under size, orientation, and position change. This property is highly significant in compressing the data which are needed in three-dimensional object recognition. Empirical examples are also given.
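
A minimal sketch of the kind of quantity involved: translation- and rotation-invariant functions of the second-order central moments of a 3-D point set. This is a standard construction consistent with the idea above, not necessarily the paper's full invariant set; scale invariance would additionally require normalizing by an appropriate power of the zeroth-order moment.

```python
import numpy as np

def second_order_invariants(points):
    """Rotation/translation invariants of a 3-D point cloud (a sketch)."""
    X = np.asarray(points, dtype=float)
    Xc = X - X.mean(axis=0)                  # central moments: subtract the centroid
    M = (Xc.T @ Xc) / len(Xc)                # second-order moment matrix
    J1 = np.trace(M)                                   # sum of eigenvalues
    J2 = 0.5 * (np.trace(M) ** 2 - np.trace(M @ M))    # sum of principal 2x2 minors
    J3 = np.linalg.det(M)                              # product of eigenvalues
    return J1, J2, J3
```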

376 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of determining the 3D model and movement of an object from a sequence of two-dimensional images is discussed; a solution to this problem depends on solving a system of nonlinear equations using a modified least-squares error method.
Abstract: Discusses the problem of determining the three-dimensional model and movement of an object from a sequence of two-dimensional images. A solution to this problem depends on solving a system of nonlinear equations using a modified least-squares error method. Two views of six points or three views of four points are needed to provide an overdetermined set of equations when the images are noisy. It is shown, however, that this numerical method is not very accurate unless the images of considerably more points are used.

362 citations


Journal ArticleDOI
TL;DR: The results are compared with previous work on probabilistic relaxation labeling, examples are given from the image segmentation domain, and references are given to applications of the new scheme in text processing.
Abstract: Let a vector of probabilities be associated with every node of a graph. These probabilities define a random variable representing the possible labels of the node. Probabilities at neighboring nodes are used iteratively to update the probabilities at a given node based on statistical relations among node labels. The results are compared with previous work on probabilistic relaxation labeling, and examples are given from the image segmentation domain. References are also given to applications of the new scheme in text processing.
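
A minimal sketch of one iteration of the kind of update described above: each node's label probabilities are multiplied by a support term computed from its neighbors' probabilities through statistical label compatibilities, then renormalized. The specific support function here is illustrative; the paper's exact scheme may differ.

```python
import numpy as np

def relaxation_step(P, neighbors, compat):
    """P[i]: label probability vector at node i.
    neighbors[i]: adjacent node indices.
    compat[l, m]: estimated P(label l at a node | label m at a neighbor)."""
    new_P = np.empty_like(P)
    for i, nbrs in enumerate(neighbors):
        if len(nbrs):
            # support for each label: neighbor probabilities passed through compat
            support = np.mean([compat @ P[j] for j in nbrs], axis=0)
        else:
            support = np.ones(P.shape[1])
        new_P[i] = P[i] * support
        new_P[i] /= new_P[i].sum()           # renormalize to a probability vector
    return new_P
```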

Journal ArticleDOI
TL;DR: An algorithm is proposed for skeletonization of 3-D images that allows local decisions in the erosion process and a table of the decisions for all possible configurations is given.
Abstract: An algorithm is proposed for skeletonization of 3-D images. The criterion to preserve connectivity is given in two versions: global and local. The latter allows local decisions in the erosion process. A table of the decisions for all possible configurations is given in this paper. The algorithm using this table can be directly implemented both on general purpose computers and on dedicated machinery.

Journal ArticleDOI
TL;DR: The purpose behind this system is to provide an evolving research prototype for experimenting with the analysis of certain classes of biomedical imagery, and for refining and quantifying the body of relevant medical knowledge.
Abstract: A framework for the abstraction of motion concepts from sequences of images by computer is presented. The framework includes: 1) representation of knowledge for motion concepts that is based on semantic networks; and 2) associated algorithms for recognizing these motion concepts. These algorithms implement a form of feedback by allowing competition and cooperation among local hypotheses. They also allow a change of attention mechanism that is based on similarity links between knowledge units, and a hypothesis ranking scheme based on updating of certainty factors that reflect the hypothesis set inertia. The framework is being realized with a system called ALVEN. The purpose behind this system is to provide an evolving research prototype for experimenting with the analysis of certain classes of biomedical imagery, and for refining and quantifying the body of relevant medical knowledge.

Journal ArticleDOI
TL;DR: An algorithm is presented which segments the points of an MLD of a wire-frame man into body parts, and the relationship of this algorithm to previous theories of MLD perception and actual human performance is discussed.
Abstract: Lights is a system for the interpretation of simple moving light displays (MLD) of jointed objects against a stationary background. The displays studied differ from those examined by previous researchers in that 1) objects are represented by a relatively small number of points, 2) objects are not rigid, and 3) the viewing geometry is such that highly varying degrees of perspective distortion occur. An algorithm is presented which segments the points of an MLD of a wire-frame man into body parts. The relationship of this algorithm to previous theories of MLD perception and actual human performance is discussed.

Journal ArticleDOI
TL;DR: It was made clear that the DF-expression is a very effective data compression technique for binary pictorial patterns, not only because it yields high compression but also because its coding and decoding algorithms are very feasible.
Abstract: A method of representing a binary pictorial pattern is developed. Its original idea comes from a sequence of terminal symbols of a context-free grammar. It is a promising technique of data compression for ordinary binary-valued pictures such as texts, documents, charts, etc. Fundamental notions like complexity, primitives, simplifications, and other items about binary-valued pictures are introduced at the beginning. A simple context-free grammar G is also introduced. It is shown that every binary-valued picture is interpretable as a terminal sequence of that G. The DF-expression is defined as the reduced terminal sequence of G. It represents the original picture in every detail and contains no surplus data for reproducing it. A quantitative discussion about the total data of a DF-expression leads to the conclusion that any binary-valued picture with complexity less than 0.47 is expressed by the DF-expression with fewer data than the original ones. The coding algorithm of original data into the DF-expression is developed. It is very simple and recursively executable. Experiments were carried out using a PDS (photo digitizing system), where test pictures were texts, charts, diagrams, etc. with 20 cm × 20 cm size. Data compression techniques in facsimile were also simulated on the same test pictures. Throughout these studies it was made clear that the DF-expression is a very effective data compression technique for binary pictorial patterns, not only because it yields high compression but also because its coding and decoding algorithms are very feasible.
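
A minimal sketch of a depth-first quadtree code in the spirit of the DF-expression, for a binary image whose side is a power of two: a uniform block emits a terminal symbol, a mixed block emits a branching symbol followed by the codes of its four quadrants. The symbol alphabet here ('(', '0', '1') is an illustrative assumption; the paper's terminals come from its grammar G.

```python
import numpy as np

def df_code(img):
    """Recursive depth-first quadtree code of a binary array (a sketch)."""
    img = np.asarray(img)
    if img.max() == img.min():               # uniform block: one terminal symbol
        return '1' if img.max() else '0'
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    quads = [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]
    return '(' + ''.join(df_code(q) for q in quads)

# e.g. df_code(np.zeros((4, 4), dtype=int)) == '0'
```

The more uniform the picture (the lower its complexity), the shorter the code, which is the intuition behind the 0.47 complexity threshold quoted above.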

Journal ArticleDOI
TL;DR: Four classification algorithms (discriminant functions) for classifying individuals into two multivariate populations are compared, and it is shown that the expected classification error EPN depends on the structure of the classification algorithm, the asymptotic probability of misclassification P∞, and the ratio of learning sample size N to dimensionality p, N/p.
Abstract: This paper compares four classification algorithms (discriminant functions) for classifying individuals into two multivariate populations. The discriminant functions (DF's) compared are derived according to the Bayes rule for normal populations and differ in assumptions on the covariance matrices' structure. Analytical formulas for the expected probability of misclassification EPN are derived and show that the classification error EPN depends on the structure of the classification algorithm, the asymptotic probability of misclassification P∞, and the ratio of learning sample size N to dimensionality p: N/p for all linear DF's discussed and N²/p for quadratic DF's. Tables of the learning quantity H = EPN/P∞, depending on the parameters P∞, N, and p for the four classification algorithms analyzed, are presented and may be used for estimating the necessary learning sample size, determining the optimal number of features, and choosing the type of classification algorithm in the case of a limited learning sample size.

Journal ArticleDOI
TL;DR: This paper presents the development and evaluation of a visual texture feature extraction method based on a stochastic field model of texture involving autocorrelation function measurement of a texture field, combined with histogram representation of a statistically decorrelated version of the texture field.
Abstract: This paper presents the development and evaluation of a visual texture feature extraction method based on a stochastic field model of texture. Results of recent visual texture discrimination experiments are reviewed in order to establish necessary and sufficient conditions for texture features that are in agreement with human discrimination. A texture feature extraction technique involving autocorrelation function measurement of a texture field, combined with histogram representation of a statistically decorrelated version of the texture field, is introduced. The texture feature extraction method is evaluated in terms of a Bhattacharyya distance measure.
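
The abstract evaluates feature sets with a Bhattacharyya distance measure. Below is the standard closed form of that distance between two Gaussian class models of the feature vectors; the class means and covariances would be estimated from the texture features, and the paper's exact measure may differ from this common Gaussian form.

```python
import numpy as np

def bhattacharyya_distance(mean1, cov1, mean2, cov2):
    """Bhattacharyya distance between two Gaussian class models (a sketch)."""
    mean1, mean2 = np.asarray(mean1, float), np.asarray(mean2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_avg = 0.5 * (cov1 + cov2)
    diff = mean2 - mean1
    # Mahalanobis-like term plus a covariance mismatch term
    term_mean = 0.125 * diff @ np.linalg.solve(cov_avg, diff)
    term_cov = 0.5 * np.log(np.linalg.det(cov_avg) /
                            np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov
```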

Journal ArticleDOI
TL;DR: A method is presented for partitioning a scene into regions corresponding to surfaces with distinct velocities, effective in determining object boundaries not easily found using analysis applied only to a single image frame.
Abstract: A method is presented for partitioning a scene into regions corresponding to surfaces with distinct velocities. Both motion and contrast information are incorporated into the segmentation process. Velocity estimates for each point in a scene are obtained using a local nonmatching technique not dependent on any prior boundary determination. The actual segmentation is accomplished using a region merging procedure which combines regions based on similarities in both brightness and motion. The method is effective in determining object boundaries not easily found using analysis applied only to a single image frame.

Journal ArticleDOI
TL;DR: This study develops optimum thresholds for admittance of unknown samples into the set of presently known classes and represents a new dimensionality of the learning system structure in that estimates of the domains of the known classes are developed in addition to learning of the discrimination among these classes.
Abstract: The scope of the classical k-NN classification techniques is enlarged under this study to cover partially exposed environments. The modified classification system structure required for successful operation in environments, wherein all the inherent pattern classes are not exposed to the system prior to deployment, is developed and illustrated with the aid of a specific classification rule-the neighborhood census rule (NCR). Admittedly, alternative rules can be visualized to fit this modified structure. However, this study concentrates on the use of NCR to bring out the underlying philosophy and develops optimum thresholds for admittance of unknown samples into the set of presently known classes. These thresholds are learned from the available training samples of these classes. This learning represents a new dimensionality of the learning system structure in that estimates of the domains of the known classes are developed in addition to learning of the discrimination among these classes. This facilitates identification of samples belonging to the classes previously unexposed to the recognition system. Experimental results are also presented in support of the proposed concepts and methodology for operation in partially exposed environments.
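
A sketch in the spirit of the neighborhood census idea described above: a sample is assigned to a known class only if enough of its nearest training neighbors agree, and is otherwise flagged as possibly belonging to a class not yet exposed to the system. The fixed threshold below is a hypothetical parameter; the paper learns optimum thresholds from the training samples.

```python
import numpy as np

def census_classify(x, train_X, train_y, k=5, threshold=3):
    """k-NN classification with rejection of likely-unexposed classes (a sketch)."""
    train_X, train_y = np.asarray(train_X, float), np.asarray(train_y)
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]          # labels of the k nearest samples
    labels, counts = np.unique(nearest, return_counts=True)
    best = counts.argmax()
    # None signals "reject": the sample may come from a previously unexposed class
    return labels[best] if counts[best] >= threshold else None
```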

Journal ArticleDOI
TL;DR: New high-speed algorithms together with fast digital hardware have produced a system for missile and aircraft identification and tracking that possesses a degree of ``intelligence'' not previously implemented in a real-time tracking system.
Abstract: Object identification and tracking applications of pattern recognition at video rates is a problem of wide interest, with previous attempts limited to very simple threshold or correlation (restricted window) methods. New high-speed algorithms together with fast digital hardware have produced a system for missile and aircraft identification and tracking that possesses a degree of "intelligence" not previously implemented in a real-time tracking system. Adaptive statistical clustering and projection-based classification algorithms are applied in real time to identify and track objects that change in appearance through complex and nonstationary background/foreground situations. Fast estimation and prediction algorithms combine linear and quadratic estimators to provide speed and sensitivity. Weights are determined to provide a measure of confidence in the data and resulting decisions. Strategies based on maximizing the probability of maintaining track are developed. This paper emphasizes the theoretical aspects of the system and discusses the techniques used to achieve real-time implementation.

Journal ArticleDOI
TL;DR: A set of algorithms used to perform segmentation of natural scenes through boundary analysis using preprocessing, differentiation using a very simple operator, relaxation using case analysis, and postprocessing.
Abstract: This paper describes a set of algorithms used to perform segmentation of natural scenes through boundary analysis. The techniques include preprocessing, differentiation using a very simple operator, relaxation using case analysis, and postprocessing. The system extracts line segments as connected sets of edges, labels them, and computes features for them such as length and confidence.

Journal ArticleDOI
J. K. Mui, King-Sun Fu
TL;DR: The binary tree classifier with a quadratic discriminant function using up to ten features at each nonterminal node was applied to classify 1294 cells into one of 17 classes.
Abstract: Describes the interactive design of a binary tree classifier. The binary tree classifier with a quadratic discriminant function using up to ten features at each nonterminal node was applied to classify 1294 cells into one of 17 classes. Classification accuracies of 83 percent and 77 percent were obtained by the binary tree classifier using the resubstitution and the leave-one-out methods of error estimation, respectively, whereas the existing results using the same data are 71 percent and 67 percent using a single stage linear classifier with 20 features and the resubstitution and the half-and-half methods of error estimation, respectively.

Journal ArticleDOI
Linda G. Shapiro
TL;DR: A shape matching procedure that uses a tree search with look-ahead to find mappings from a prototype shape to a candidate shape has been developed and an experimental Snobol4 implementation has been used to test the program on hand-printed character data with favorable results.
Abstract: Shape description and recognition is an important and interesting problem in scene analysis. Our approach to shape description is a formal model of a shape consisting of a set of primitives, their properties, and their interrelationships. The primitives are the simple parts and intrusions of the shape which can be derived through the graph-theoretic clustering procedure described in [31]. The interrelationships are two ternary relations on the primitives: the intrusion relation which relates two simple parts that join to the intrusion they surround and the protrusion relation which relates two intrusions to the protrusion between them. Using this model, a shape matching procedure that uses a tree search with look-ahead to find mappings from a prototype shape to a candidate shape has been developed. An experimental Snobol4 implementation has been used to test the program on hand-printed character data with favorable results.

Journal ArticleDOI
TL;DR: It is shown that the problem of determining a partition into a given number of clusters with minimum diameter or with maximum split can be solved by the classical single-link clustering algorithm and by a graph-theoretic algorithm involving the optimal coloration of a sequence of partial graphs.
Abstract: Cluster analysis is concerned with the problem of partitioning a given set of entities into homogeneous and well-separated subsets called clusters. The concepts of homogeneity and of separation can be made precise when a measure of dissimilarity between the entities is given. Let us define the diameter of a partition of the given set of entities into clusters as the maximum dissimilarity between any pair of entities in the same cluster and the split of a partition as the minimum dissimilarity between entities in different clusters. The problems of determining a partition into a given number of clusters with minimum diameter (i.e., a partition of maximum homogeneity) or with maximum split (i.e., a partition of maximum separation) are first considered. It is shown that the latter problem can be solved by the classical single-link clustering algorithm, while the former can be solved by a graph-theoretic algorithm involving the optimal coloration of a sequence of partial graphs, described in more detail in a previous paper. A partition into a given number of clusters will be called efficient if and only if there exists no partition into at most the same number of clusters with smaller diameter and not smaller split or with larger split and not larger diameter. Two efficient partitions are called equivalent if and only if they have the same values for the split and for the diameter.
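
Since the paper shows that the maximum-split problem is solved by classical single-link clustering, a minimal sketch of that route is given below using SciPy's hierarchical clustering; the split of the resulting partition is the smallest dissimilarity between entities placed in different clusters. The function and parameter names are this sketch's own, not the paper's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def max_split_partition(X, n_clusters):
    """Partition into n_clusters via single-link clustering (a sketch)."""
    Z = linkage(pdist(X), method='single')                    # single-link dendrogram
    return fcluster(Z, t=n_clusters, criterion='maxclust')    # cut into n_clusters

# Example: labels = max_split_partition(np.random.rand(50, 2), n_clusters=3)
```

Minimizing the diameter, by contrast, requires the graph-coloring algorithm the abstract refers to and is not reproduced here.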

Journal ArticleDOI
TL;DR: A fingerprint classification procedure using a computer is described, which classifies the prints into one of ten defined types using a syntactic approach based on strings of symbols.
Abstract: A fingerprint classification procedure using a computer is described. It classifies the prints into one of ten defined types. The procedure is implemented using PICAP (picture array processor). The picture processing system includes a TV camera input and a special picture processor. The first part of the procedure is a transformation of the original print to a sampling matrix, where the dominant direction of the ridges for each subpicture is indicated. After smoothing, the lines in this pattern are traced out and converted to strings of symbols. Finally, a syntactic approach is adopted to make the type classification based on this string of symbols.

Journal ArticleDOI
W. A. Perkins
TL;DR: A new method for segmenting images using abrupt changes in intensity (edge points) to separate regions of smoothly varying intensity is discussed, using an expansion-contraction technique in which the edge regions are first expanded to close gaps and then contracted after the separate uniform regions have been identified.
Abstract: A new method for segmenting images using abrupt changes in intensity (edge points) to separate regions of smoothly varying intensity is discussed. Region segmentation using edge points has not been very successful in the past because small gaps would allow merging of dissimilar regions. The present method uses an expansion-contraction technique in which the edge regions are first expanded to close gaps and then contracted after the separate uniform regions have been identified. In order to preserve small uniform regions, the process is performed iteratively from small to large expansions with no expansion for edge regions that separate different uniform regions. The final result is a set of uniform intensity regions (usually less than 100) and a set of edge boundary regions. The program has successfully segmented scenes with industrial parts, landscapes, and integrated circuit chips.
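
A minimal sketch of the expansion-contraction idea described above: the edge map is dilated to close small gaps, the remaining uniform regions are labeled, and the region labels are then grown back over the expanded edge band. A single fixed expansion is used here in place of the paper's iterative small-to-large schedule, so this is an illustration of the principle rather than the authors' program.

```python
import numpy as np
from scipy import ndimage

def expand_contract_segment(edges, expand=3):
    """Segment an image from its boolean edge map (a sketch)."""
    edges = np.asarray(edges, dtype=bool)
    expanded = ndimage.binary_dilation(edges, iterations=expand)   # close gaps
    regions, n = ndimage.label(~expanded)       # uniform regions between edge bands
    # Contraction: give each edge-band pixel the label of the nearest region.
    _, (ri, ci) = ndimage.distance_transform_edt(regions == 0, return_indices=True)
    return regions[ri, ci], n
```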

Journal ArticleDOI
TL;DR: Robustness of performance in the presence of many uncertainty relationships can be achieved by eliciting from the expert a segmentation of knowledge that will also provide a rich network of deterministic relationships to interweave the space of hypotheses.
Abstract: The major AI problems that arise in designing a consultation program involve choices of knowledge representations, diagnostic interpretation strategies, and treatment planning strategies. The need to justify decisions and update the knowledge base in the light of new research findings places a premium on the modularity of a representation and the ease with which its reasoning procedures can be explained. In both diagnosis and treatment decisions, the relative advantages and disadvantages of different schemes for quantifying the uncertainty of inferences raises difficult issues of a formal logical nature, as well as many specific practical problems of system design. An important insight that has resulted from the design of several artificial intelligence systems is that robustness of performance in the presence of many uncertainty relationships can be achieved by eliciting from the expert a segmentation of knowledge that will also provide a rich network of deterministic relationships to interweave the space of hypotheses.

Journal ArticleDOI
TL;DR: A knowledge-based interactive sequential diagnostic system is introduced which provides for diagnosis of multiple disorders in several body systems and is capable of explaining to the user the reasoning process for its decisions.
Abstract: A knowledge-based interactive sequential diagnostic system is introduced which provides for diagnosis of multiple disorders in several body systems. The knowledge base consists of disorder patterns in a hierarchical structure that constitute the background medical information required for diagnosis in the domain under consideration (emergency and critical care medicine, in our case). Utilizing this knowledge base, the diagnostic process is driven by a multimembership classification algorithm for diagnostic assessment as well as for information acquisition [1]. A key characteristic of the system is its congenial man-machine interface, expressed, for instance, in the flexibility it offers to the user in controlling its operation. At any stage of the diagnostic process the user may decide on an operation strategy that varies from full user control, through mixed initiative, to full system control. Likewise, the system is capable of explaining to the user the reasoning process behind its decisions. The model is independent of the knowledge base, thereby permitting continuous update of the knowledge base, as well as expansions to include disorders from other disciplines. The information structure lends itself to compact storage and provides for efficient computation. Presently, the system contains 53 high-level disorders which are diagnosed by means of 587 medical findings.

Journal ArticleDOI
TL;DR: This work presents a method for deriving depth information from a moving image where the camera is moving through a real world scene, by refining a simple surface model based on error measures that are derived by interimage comparisons of point values.
Abstract: Presents a method for deriving depth information from a moving image where the camera is moving through a real world scene. The method refines a simple surface model based on error measures that are derived by interimage comparisons of point values.

Journal ArticleDOI
TL;DR: A versatile technique is described for designing computer algorithms (classifiers) that separate multiple-dimensional data (feature vectors) into two classes, achieving nearly Bayes-minimum error rates while requiring relatively small amounts of memory.
Abstract: We describe a versatile technique for designing computer algorithms for separating multiple-dimensional data (feature vectors) into two classes. We refer to these algorithms as classifiers. Our classifiers achieve nearly Bayes-minimum error rates while requiring relatively small amounts of memory. Our design procedure finds a set of close-opposed pairs of clusters of data. From these pairs the procedure generates a piecewise-linear approximation of the Bayes-optimum decision surface. A window training procedure on each linear segment of the approximation provides great flexibility of design over a wide range of class densities. The data consumed in the training of each segment are restricted to just those data lying near that segment, which makes possible the construction of efficient data bases for the training process. Interactive simplification of the classifier is facilitated by an adjacency matrix and an incidence matrix. The adjacency matrix describes the interrelationships of the linear segments {Li}. The incidence matrix describes the interrelationships among the polyhedrons formed by the hyperplanes containing {Li}. We exploit switching theory to minimize the decision logic.