
Showing papers in "IEEE Transactions on Systems Science and Cybernetics in 1966"


Journal ArticleDOI
TL;DR: The theory of the value of information that arises from considering jointly the probabilistic and economic factors that affect decisions is discussed and illustrated and it is found that numerical values can be assigned to the elimination or reduction of any uncertainty.
Abstract: The information theory developed by Shannon was designed to place a quantitative measure on the amount of information involved in any communication. The early developers stressed that the information measure was dependent only on the probabilistic structure of the communication process. For example, if losing all your assets in the stock market and having whale steak for supper have the same probability, then the information associated with the occurrence of either event is the same. Attempts to apply Shannon's information theory to problems beyond communications have, in the large, come to grief. The failure of these attempts could have been predicted because no theory that involves just the probabilities of outcomes without considering their consequences could possibly be adequate in describing the importance of uncertainty to a decision maker. It is necessary to be concerned not only with the probabilistic nature of the uncertainties that surround us, but also with the economic impact that these uncertainties will have on us. In this paper the theory of the value of information that arises from considering jointly the probabilistic and economic factors that affect decisions is discussed and illustrated. It is found that numerical values can be assigned to the elimination or reduction of any uncertainty. Furthermore, it is seen that the joint elimination of the uncertainty about a number of factors in a problem, even independent factors, can have a value that differs from the sum of the values of eliminating the uncertainty in each factor separately.

911 citations
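The value assignment described above can be made concrete with a small worked example. The following sketch is purely illustrative: a hypothetical two-action, two-outcome decision problem with made-up payoffs and probability (none of it from the paper), in which the value of eliminating the uncertainty is the difference between the expected value of deciding after the outcome is revealed and the expected value of the best decision made beforehand.

```python
# Hypothetical two-action, two-outcome decision problem; the payoffs and the
# probability are illustrative assumptions, not values from the paper.
p_up = 0.6                      # assumed probability the market goes up
payoff = {                      # action -> (payoff if up, payoff if down)
    "invest": (100.0, -80.0),
    "hold":   (10.0, 10.0),
}

def expected_value(action, p):
    up, down = payoff[action]
    return p * up + (1.0 - p) * down

# Best action chosen before the uncertainty is resolved.
value_without_info = max(expected_value(a, p_up) for a in payoff)

# With the uncertainty eliminated: pick the best action for each outcome,
# then weight the outcomes by their probabilities.
value_with_info = (p_up * max(up for up, _ in payoff.values())
                   + (1.0 - p_up) * max(down for _, down in payoff.values()))

print("acting on the prior:        ", value_without_info)   # 28.0
print("with uncertainty eliminated:", value_with_info)      # 64.0
print("value of the information:   ", value_with_info - value_without_info)
```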



Journal ArticleDOI
TL;DR: An adaptive pattern classification system is described that does not require a priori knowledge of the probability density of the pattern vectors for each class, as do the classical statistical techniques.
Abstract: Adaptive pattern classification is the assignment of patterns to classes based on typical patterns or training samples, used by the system to determine the decision procedure. The system is adaptive in the sense that the decision procedure is optimized according to some criterion of the system's performance on the training samples. An adaptive pattern classification system is described that does not require a priori knowledge of the probability density of the pattern vectors for each class, as do the classical statistical techniques. Any decision rule consisting of a discriminant function that is a linear combination of arbitrary scalar functions of the pattern vector may be chosen on the basis of a priori knowledge about the classes, engineering judgment, and economic considerations. The system optimizes itself by adjustment of the decision parameters according to a weighted mean-square-error performance criterion, using a multivariable search technique. The proposed performance criterion is well suited for self-optimizing search procedures. It also has the property that, as the number of training samples approaches infinity, the resulting discriminant function belongs to the class of discriminant functions chosen at the outset and approximates the optimum Bayes discriminant function with minimum variance. Some results from simulation studies are presented which include comparison with classical statistical techniques.

39 citations
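As a rough illustration of a system of this kind, the sketch below trains a discriminant function that is a linear combination of chosen scalar functions of the pattern vector by reducing a mean-square error over training samples. The particular basis functions, the ±1 targets, the toy data, and the plain gradient-descent search are illustrative stand-ins, not the paper's procedure.

```python
import numpy as np

def basis(x):
    # Arbitrary scalar functions of the pattern vector x = (x1, x2),
    # chosen here for illustration only.
    return np.array([1.0, x[0], x[1], x[0] * x[1]])

def train(samples, labels, lr=0.05, epochs=100):
    # Adjust the linear-combination weights to reduce mean-square error.
    w = np.zeros(len(basis(samples[0])))
    for _ in range(epochs):
        for x, t in zip(samples, labels):        # t is +1 or -1
            phi = basis(x)
            w -= lr * (w @ phi - t) * phi        # LMS-style gradient step
    return w

def classify(w, x):
    return 1 if w @ basis(x) >= 0 else -1

# Toy training set: two loosely separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.5, 1.5], 0.5, size=(20, 2)),
               rng.normal([-1.5, -1.5], 0.5, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)

w = train(X, y)
print("training accuracy:",
      np.mean([classify(w, x) == t for x, t in zip(X, y)]))
```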


Journal ArticleDOI
TL;DR: A unified approach is presented for the construction of sets of orthonormal exponential functions whose elements have two properties: 1) their asymptotic order may be chosen arbitrarily, and 2) their poles may be real, complex, or a mixture of real and complex.
Abstract: A unified approach is presented for the construction of sets of orthonormal exponential functions whose elements have two properties: 1) their asymptotic order may be chosen arbitrarily, and 2) their poles may be real, complex, or a mixture of real and complex. The main results are summarized as theorems and propositions in which the sets of exponentials are derived as transfer functions in the s domain. These theorems and propositions supplement the more conventional Gram-Schmidt procedure, which is useful for the orthonormalization of functions in the time domain. Examples are given which illustrate applications of the main results. In addition, a generalized spectrum analyzer, which can be synthesized on an analog computer, is developed for use in the automatic evaluation of Fourier coefficients.

28 citations
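The time-domain Gram-Schmidt procedure that the paper's s-domain results supplement can be sketched numerically as below: a few decaying exponentials are orthonormalized on a finite interval. The pole values, the interval, and the discretized inner product are illustrative assumptions, not the paper's construction.

```python
import numpy as np

T, N = 20.0, 20000
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def inner(f, g):
    # <f, g> ~ integral of f(t) g(t) dt, approximated by a Riemann sum.
    return np.sum(f * g) * dt

poles = [1.0, 2.0, 3.0]                     # assumed real poles
raw = [np.exp(-p * t) for p in poles]       # e^{-t}, e^{-2t}, e^{-3t}

ortho = []
for f in raw:
    for q in ortho:                         # subtract components along earlier members
        f = f - inner(f, q) * q
    ortho.append(f / np.sqrt(inner(f, f)))  # normalize to unit norm

# The Gram matrix of the result should be close to the identity.
gram = np.array([[inner(p, q) for q in ortho] for p in ortho])
print(np.round(gram, 4))
```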


Journal ArticleDOI
TL;DR: A decomposition technique for interacting linear dynamic systems is shown to lead to an optimum 2-level technique having equivalent performance.
Abstract: General approaches to the design of hierarchical systems are indicated which are particularly relevant to problems of optimal control of discrete systems. A decomposition technique for interacting linear dynamic systems is shown to lead to an optimum 2-level technique having equivalent performance. Comments on computational algorithms for realizing this technique are included.

27 citations
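The general two-level idea can be conveyed with a toy coordination problem: two subsystems each solve a local quadratic problem while a second-level coordinator adjusts a price on their shared interaction constraint until it is satisfied. The cost functions, the constraint, and the gradient-style price update below are illustrative assumptions, not the decomposition derived in the paper.

```python
# Two-level coordination on: minimize (x1 - 1)^2 + 2*(x2 - 3)^2
# subject to the interaction constraint x1 + x2 = c.
c = 10.0            # assumed coupling requirement
step = 0.2          # coordinator step size

def subsystem1(price):
    # minimizes (x1 - 1)^2 + price * x1  ->  x1 = 1 - price / 2
    return 1.0 - price / 2.0

def subsystem2(price):
    # minimizes 2 * (x2 - 3)^2 + price * x2  ->  x2 = 3 - price / 4
    return 3.0 - price / 4.0

price = 0.0
for _ in range(200):
    x1, x2 = subsystem1(price), subsystem2(price)
    gap = x1 + x2 - c            # interaction error seen at the second level
    price += step * gap          # coordinator adjusts the price on the constraint
print(f"x1 = {x1:.3f}, x2 = {x2:.3f}, x1 + x2 = {x1 + x2:.3f} (target {c})")
```

At convergence the two first-level problems, each solved with only local information plus the coordinator's price, reproduce the solution of the coupled problem.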




Journal ArticleDOI
C. K. Chow, C. N. Liu
TL;DR: In this article, a variable-structure system for pattern recognition is proposed, in which the recognition network initially assumes a linear structure, a set of relations among the input measurements is generated and selected on the basis of a subset of design data, and to accommodate these relations, the structure changes to nonlinear.
Abstract: Viewing pattern recognition as a problem in statistical classification wherein an n-dimensional hypercube is partitioned into category regions with decision boundaries, this paper focuses on a class of nonlinear boundary forms and describes a variable-structure system which is capable of evolving these boundaries adaptively. Most of the present adaptive recognition systems have an a priori fixed structure, usually corresponding to a linear decision procedure, and adaptation is performed by parameter optimization. It is apparent that in practice the simple linear system frequently will be an inadequate approximation to the desired boundaries. The central problem in a more general recognition procedure is the selection and analysis of suitable nonlinear relations among measurements. In the system proposed in this paper, the structure adaptation develops as follows: the recognition network initially assumes a linear structure, a set of relations among the input measurements is generated and selected on the basis of a set of design data, and to accommodate these relations, the structure changes to nonlinear. By repetitively generating and selecting additional relations among the measurements, the structure gradually adjusts itself within the class of allowable structures toward an optimal configuration. By means of simulation on a digital computer, the procedure for structure adaptation was successfully applied to a number of practical problems: handwritten numerals, spoken vowels, and electroencephalograms.

20 citations
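A stripped-down version of the structure-adaptation loop might look like the sketch below: begin with a linear decision function on the raw measurements, repeatedly generate candidate product relations among the current features, keep the one that most reduces error on the design data, and refit. The product-term generator, the least-squares fit, the toy data, and the selection rule are illustrative assumptions, not the procedure used in the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

def fit(F, y):
    # Least-squares weights for the current feature matrix F.
    return np.linalg.lstsq(F, y, rcond=None)[0]

def error_rate(F, w, y):
    return np.mean(np.sign(F @ w) != y)

# Toy design data: XOR-like classes that no purely linear structure separates.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])

features = [np.ones(len(X)), X[:, 0], X[:, 1]]       # initial linear structure
for _ in range(3):                                    # a few adaptation passes
    F = np.column_stack(features)
    best_err, best_feat = error_rate(F, fit(F, y), y), None
    for i, j in combinations_with_replacement(range(len(features)), 2):
        cand = features[i] * features[j]              # candidate nonlinear relation
        Fc = np.column_stack(features + [cand])
        err = error_rate(Fc, fit(Fc, y), y)
        if err < best_err:
            best_err, best_feat = err, cand
    if best_feat is None:
        break                                         # no relation helps; stop
    features.append(best_feat)                        # structure becomes nonlinear
    print("error after adding a relation:", best_err)
```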




Journal ArticleDOI
Roy E. Lave
TL;DR: A Markov Control Chain is developed which allows optimization of the timing of control activities and, for sample-based systems, selection of the length of sampled history upon which to base the decision to exercise control.
Abstract: A Markov Control Chain is developed which allows optimization of the timing of control activities and, for sample-based systems, selection of the length of sampled history upon which to base the decision to exercise control. The optimization is performed by the methods of policy iteration or linear programming and minimizes the per-unit-time sum of 1) the cost of output quality, 2) the sampling cost, and 3) the cost of exercising control. The class of processes to be controlled is assumed to shift from higher to lower quality levels according to a discrete or a discretely approximated continuous probability law. The shift is irreversible unless outside influence, called corrective action, is exercised; it may be time dependent, in which case the process is said to have an aging failure characteristic. The control system studied is a sampling plan which bases the decision of whether or not to take corrective action on a sampled history of fixed maximum duration. This plan yields an nth-order Markov chain which is converted to a first-order chain by state definition. The transition probabilities are Bayesian estimates based on a geometric prior probability distribution and a multinomial sample probability distribution. The process and system taken together represent what has been called Dynamic Inference [1].
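One of the two optimization methods named above, policy iteration, can be sketched on a miniature version of the problem: a process that drifts from high to low quality, with a choice in each state between running and taking corrective action. The states, transition probabilities, costs, and the use of a discounted-cost criterion in place of the paper's cost-per-unit-time criterion are all illustrative assumptions.

```python
import numpy as np

states = [0, 1, 2]                  # 0 = high quality, 2 = low quality
gamma = 0.95                        # discount factor (illustrative criterion)

# P[action][s] = distribution over the next state.
P = {
    "run":    np.array([[0.8, 0.2, 0.0],     # quality drifts irreversibly downward
                        [0.0, 0.7, 0.3],
                        [0.0, 0.0, 1.0]]),
    "repair": np.array([[1.0, 0.0, 0.0],     # corrective action restores state 0
                        [1.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0]]),
}
quality_cost = np.array([0.0, 4.0, 10.0])    # cost of output quality per period
repair_cost = 6.0                            # cost of exercising control

def cost(s, a):
    return quality_cost[s] + (repair_cost if a == "repair" else 0.0)

policy = ["run"] * len(states)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = c_pi for the value vector v.
    P_pi = np.array([P[policy[s]][s] for s in states])
    c_pi = np.array([cost(s, policy[s]) for s in states])
    v = np.linalg.solve(np.eye(len(states)) - gamma * P_pi, c_pi)

    # Policy improvement: choose the action with the smallest one-step lookahead cost.
    new_policy = [min(P, key=lambda a: cost(s, a) + gamma * P[a][s] @ v)
                  for s in states]
    if new_policy == policy:
        break
    policy = new_policy

print("optimal policy:", dict(zip(states, policy)))
```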


Journal ArticleDOI
William Miller
TL;DR: The logic and discipline of critical task network planning are being used to assure profitable, on-schedule implementation of automation systems that fully utilize the potentialities of both present-day digital computers and current engineering technology.
Abstract: The American steel industry constantly faces challenges presented by new competitive materials, rising foreign imports, increasing labor costs, and new, more complex technology. Steel industry customers are demanding and receiving tighter tolerances on their steel strip, sheet, and plate products. Steel is an ancient industry when compared to today's space industry. Simple, cheap solutions are largely exhausted. Plant processes are both extremely expensive and productive. While change has been a way of life for years in the steel industry, the opportunities for profitable change today are enormous compared with five years ago, due mostly to digital computers. The self-regulation and tighter control achievable with automatic feedback, together with the unifying concepts of systems engineering, provide a proven technical approach to the solution of today's steel plant manufacturing and production control problems. The logic and discipline of critical task network planning are being used to assure profitable, on-schedule implementation of automation systems that fully utilize the potentialities of both present-day digital computers and current engineering technology.




Journal ArticleDOI
TL;DR: The major focus of interest during Problem Formulation is to reinterpret the goal-oriented description of the system concept as a consideration of the practical obstacles to its realization.
Abstract: Problem Formulation begins after a general, conceptual, intuitive, yet concise statement of the system project goals and boundary conditions has been made. The object of this step of a system study is to formalize the system concept, reduce its ambiguity, and derive a family of subprojects which can be subjected to engineering solutions. The major focus of interest during Problem Formulation is to reinterpret the goal-oriented description of the system concept as a consideration of the practical obstacles to its realization. The obstacles include environmental and operational factors as well as random failures in the system itself. To achieve the reinterpretation, a study of the basic concepts of system and state suggests the methodological framework.

Journal ArticleDOI
TL;DR: This paper begins with a general discussion of engineering decision making, works its way into contemporary value theories and their potential usefulness in engineering, and goes on to comment on the aspects and prospects of a unified theory of value useful for engineering.
Abstract: This paper begins with a general discussion of engineering decision making, works its way into contemporary value theories and their potential usefulness in engineering, and goes on to comment on the aspects and prospects of a unified theory of value useful for engineering. It is concluded that the present prospects of a unified theory of value are rather dim if one demands broad flexibility, i.e., applicability and adaptability, over a wide range of decision situations. As an alternative to a unified theory the possibility of a useful metatheory, serving to guide the choice of a particular theory of value in each decision situation, is considered.

Journal ArticleDOI
TL;DR: An attempt is made to apply concepts from biology to the practice of systems engineering, utilizing an intellectual framework based on the concepts of speciation and competition between species.
Abstract: One of the main lines of thought in General Systems is the transfer of system concepts from one field to another. There has been fruitful application of such theoretical concepts from engineering to other fields such as biology. In this paper, an attempt is made to apply concepts from biology to the practice of systems engineering. The relation of systems engineering to other fields is discussed, utilizing an intellectual framework based on the concepts of speciation and competition between species. The internal social structure of the profession and of individual organizations is considered, using the concept of competition within a species. Examples are drawn from biology to illustrate the points at issue.