
Showing papers in "IEEE Transactions on Systems Science and Cybernetics in 1968"


Journal Article•DOI•
TL;DR: How heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching is described and an optimality property of a class of search strategies is demonstrated.
Abstract: Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
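
The search strategy analyzed here orders node expansions by the cost of the path found so far plus a heuristic estimate of the remaining cost. Below is a minimal Python sketch of such a best-first search; the evaluation function f = g + h, the small graph, and the zero heuristic are illustrative assumptions, not the paper's notation.

```python
import heapq

def heuristic_search(start, goal, neighbors, h):
    """Best-first search ordered by f(n) = g(n) + h(n).

    neighbors(n) -> iterable of (successor, edge_cost); h(n) -> estimated
    remaining cost.  With an admissible h, popping the goal yields a
    minimum-cost path (illustrative sketch of the strategy analyzed)."""
    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Invented 4-node graph and a trivially admissible (zero) heuristic.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(heuristic_search("A", "D", lambda n: graph[n], lambda n: 0.0))
```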

10,366 citations


Journal Article•
TL;DR: It is shown that in many problems, including some of the most important in practice, this ambiguity can be removed by applying methods of group theoretical reasoning which have long been used in theoretical physics.
Abstract: In decision theory, mathematical analysis shows that once the sampling distribution, loss function, and sample are specified, the only remaining basis for a choice among different admissible decisions lies in the prior probabilities. Therefore, the logical foundations of decision theory cannot be put in fully satisfactory form until the old problem of arbitrariness (sometimes called "subjectiveness") in assigning prior probabilities is resolved. The principle of maximum entropy represents one step in this direction. Its use is illustrated, and a correspondence property between maximum-entropy probabilities and frequencies is demonstrated. The consistency of this principle with the principles of conventional "direct probability" analysis is illustrated by showing that many known results may be derived by either method. However, an ambiguity remains in setting up a prior on a continuous parameter space because the results lack invariance under a change of parameters; thus a further principle is needed. It is shown that in many problems, including some of the most important in practice, this ambiguity can be removed by applying methods of group theoretical reasoning which have long been used in theoretical physics. By finding the group of transformations on the parameter space which convert the problem into an equivalent one, a basic desideratum of consistency can be stated in the form of functional equations which impose conditions on, and in some cases fully determine, an "invariant measure" on the parameter space.
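
For a discrete variable with a fixed mean, the maximum-entropy distribution has the exponential form p(x) proportional to exp(-lambda * x), with lambda chosen to match the constraint. A small numerical sketch follows; the dice-style support and the target mean are illustrative, not taken from the paper.

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on `values` subject to a fixed mean.

    The solution is p_i proportional to exp(-lam * x_i); lam is found by
    bisection so that the resulting mean matches the constraint."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    # mean_for is decreasing in lam; bisect until it hits target_mean.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# A die constrained to have mean 4.5 instead of 3.5 (illustrative example).
print([round(p, 4) for p in maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)])
```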

1,366 citations


Journal Article•DOI•
Ronald A. Howard
TL;DR: Decision analysis has emerged from theory to practice to form a discipline for balancing the many factors that bear upon a decision; capturing the structure of problem relationships is central, and the process can be visualized in a graphical problem space.
Abstract: Decision analysis has emerged from theory to practice to form a discipline for balancing the many factors that bear upon a decision. Unusual features of the discipline are the treatment of uncertainty through subjective probability and of attitude toward risk through utility theory. Capturing the structure of problem relationships occupies a central position; the process can be visualized in a graphical problem space. These features are combined with other preference measures to produce a useful conceptual model for analyzing decisions, the decision analysis cycle. In its three phases (deterministic, probabilistic, and informational), the cycle progressively determines the importance of variables in deterministic, probabilistic, and economic environments. The ability to assign an economic value to the complete or partial elimination of uncertainty through experimentation is a particularly important characteristic. Recent applications in business and government indicate that the increased logical scope afforded by decision analysis offers new opportunities for rationality to those who wish it.

370 citations


Journal Article•DOI•
TL;DR: A Probabilistic Information Processing System uses men and machines in a novel way to perform diagnostic information processing that circumvents human conservatism in information processing and fragments the job of evaluating diagnostic information into small separable tasks.
Abstract: A Probabilistic Information Processing System (PIP) uses men and machines in a novel way to perform diagnostic information processing. Men estimate likelihood ratios for each datum and each pair of hypotheses under consideration, or a sufficient subset of these pairs. A computer aggregates these estimates by means of Bayes' theorem of probability theory into a posterior distribution that reflects the impact of all available data on all hypotheses being considered. Such a system circumvents human conservatism in information processing, the inability of men to aggregate information in such a way as to modify their opinions as much as the available data justify. It also fragments the job of evaluating diagnostic information into small separable tasks. The posterior distributions that are a PIP's output may be used as a guide to human decision making or may be combined with a payoff matrix to make decisions by means of the principle of maximizing expected value. A large simulation-type experiment compared a PIP with three other information processing systems in a simulated strategic war setting of the 1970's. The difference between PIP and its competitors was that in PIP the information was aggregated by computer, while in the other three systems, the operators aggregated the information in their heads. PIP processed the information dramatically more efficiently than did any competitor. Data that would lead PIP to give 99:1 odds in favor of a hypothesis led the next best system to give 4.5:1 odds.
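
The computation a PIP performs is Bayes' theorem applied to the operators' likelihood-ratio estimates: assuming the data are conditionally independent, the ratios multiply the prior odds (equivalently, their logarithms add). A minimal sketch with invented numbers:

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Aggregate human-estimated likelihood ratios with Bayes' theorem.

    Each ratio is P(datum | H1) / P(datum | H2); assuming conditional
    independence of the data, the posterior odds are the prior odds times
    the product of the ratios (log-odds add)."""
    log_odds = math.log(prior_odds) + sum(math.log(r) for r in likelihood_ratios)
    return math.exp(log_odds)

# Hypothetical estimates for three data and two hypotheses, even prior odds.
odds = posterior_odds(1.0, [3.0, 0.8, 5.0])
print(f"posterior odds H1:H2 = {odds:.1f}:1,  P(H1) = {odds / (1 + odds):.3f}")
```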

144 citations


Journal Article•DOI•
D. Warner North
TL;DR: The intent of this paper is to provide a tutorial introduction to decision theory; a simple example, the "anniversary problem," is used to illustrate the concepts.
Abstract: Decision theory provides a rational framework for choosing between alternative courses of action when the consequences resulting from this choice are imperfectly known. Two streams of thought serve as the foundations: utility theory and the inductive use of probability theory. The intent of this paper is to provide a tutorial introduction to this increasingly important area of systems science. The foundations are developed on an axiomatic basis, and a simple example, the "anniversary problem," is used to illustrate decision theory. The concept of the value of information is developed and demonstrated. At times mathematical rigor has been subordinated to provide a clear and readily accessible exposition of the fundamental assumptions and concepts of decision theory. A sampling of the many elegant and rigorous treatments of decision theory is provided among the references.
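
The value of information developed in the paper is the difference between the expected value of the best decision made after the uncertainty is resolved and the best decision made before. A tiny sketch of the expected value of perfect information for a one-shot decision; the states, actions, and payoffs are invented, and the "anniversary problem" itself is not reproduced here.

```python
def evpi(prior, payoff):
    """Expected value of perfect information for a one-shot decision.

    prior[s]     : probability of state s
    payoff[a][s] : value of action a in state s
    EVPI = E_s[max_a payoff] - max_a E_s[payoff]  (illustrative numbers)."""
    states = list(prior)
    best_without = max(sum(prior[s] * payoff[a][s] for s in states) for a in payoff)
    best_with = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in states)
    return best_with - best_without

prior = {"rain": 0.3, "dry": 0.7}
payoff = {"picnic": {"rain": -20, "dry": 50}, "stay_in": {"rain": 10, "dry": 10}}
print(evpi(prior, payoff))   # value of learning the state before deciding
```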

131 citations


Journal Article•DOI•
TL;DR: An adaptive approach is presented for optimal estimation of a sampled stochastic process with finite-state unknown parameters and conditions are given under which a Bayes optimal (conditional mean) adaptive estimation system will converge in performance to an optimal system which is "told" the value of unknown parameters.
Abstract: An adaptive approach is presented for optimal estimation of a sampled stochastic process with finite-state unknown parameters. It is shown that, for processes with an implicit generalized Markov property, the optimal (conditional mean) state estimates can be formed from (i) a set of optimal estimates based on known parameters, and (ii) a set of "learning" statistics which are recursively updated. The formulation thus provides a separation technique which simplifies the optimal solution of this class of nonlinear estimation problems. Examples of the separation technique are given for prediction of a non-Gaussian Markov process with unknown parameters and for filtering the state of a Gauss-Markov process with unknown parameters. General results are given on the convergence of optimal estimation systems operating in the presence of unknown parameters. Conditions are given under which a Bayes optimal (conditional mean) adaptive estimation system will converge in performance to an optimal system which is "told" the value of unknown parameters.
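
The separation technique can be pictured as a bank of estimators, one per candidate parameter value, whose outputs are mixed with recursively updated posterior weights (the "learning" statistics). Below is a schematic sketch of one update step under that structure; the per-parameter estimates and likelihoods are assumed to be supplied by the caller, and the numbers are invented.

```python
def adaptive_estimate(estimates, likelihoods, weights):
    """One step of a separation-style adaptive estimator.

    estimates[i]   : conditional-mean estimate from the i-th estimator,
                     each designed for a known parameter value
    likelihoods[i] : likelihood of the new observation under parameter i
    weights[i]     : current posterior probability of parameter i

    Returns the overall conditional-mean estimate and the updated weights
    (a schematic sketch of the structure, not the paper's equations)."""
    new_w = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(new_w)
    new_w = [w / total for w in new_w]
    estimate = sum(w * x for w, x in zip(new_w, estimates))
    return estimate, new_w

# Hypothetical: three candidate parameter values, one observation step.
est, w = adaptive_estimate([1.0, 2.0, 4.0], [0.1, 0.6, 0.3], [1/3, 1/3, 1/3])
print(round(est, 3), [round(x, 3) for x in w])
```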

95 citations


Journal Article•DOI•
TL;DR: Application to two classes of problems is given: the first is a class whose constraints describe a system of coupled subsystems; the second is a class of multi-item inventory problems whose decision variables may be discrete.
Abstract: The problem considered is that of obtaining solutions to large nonlinear mathematical programs by coordinated solution of smaller subproblems. If all functions in the original problem are additively separable, this can be done by finding a saddle point for the associated Lagrangian function. Coordination is then accomplished by shadow prices, with these prices chosen to solve a dual program. Characteristics of the dual program are investigated, and an algorithm is proposed in which subproblems are solved for given shadow prices. These solutions provide the value and gradient of the dual function, and this information is used to update the shadow prices so that the dual problem is brought closer to solution. Application to two classes of problems is given. The first class is one whose constraints describe a system of coupled subsystems; the second is a class of multi-item inventory problems whose decision variables may be discrete.
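
The coordination loop is: fix the shadow prices, solve each subproblem independently, read the constraint violation off the solutions as the gradient of the dual function, and move the prices accordingly. A toy sketch with two quadratic subproblems sharing one coupling resource; the objective, resource level, and step size are invented for illustration.

```python
def solve_sub1(lam):
    # argmin over x of (x - 3)**2 + lam * x  (closed form)
    return 3.0 - lam / 2.0

def solve_sub2(lam):
    # argmin over x of 2 * (x - 2)**2 + lam * x  (closed form)
    return 2.0 - lam / 4.0

def dual_decomposition(resource=4.0, step=0.5, iters=200):
    """Coordinate two subproblems through a shadow price (toy example).

    Minimize (x1 - 3)^2 + 2*(x2 - 2)^2  subject to  x1 + x2 = resource.
    For a fixed price lam each subproblem is solved independently; the
    constraint violation is the gradient of the dual, used to update lam."""
    lam = 0.0
    for _ in range(iters):
        x1, x2 = solve_sub1(lam), solve_sub2(lam)
        lam += step * (x1 + x2 - resource)   # dual gradient ascent
    return lam, x1, x2

lam, x1, x2 = dual_decomposition()
print(round(lam, 4), round(x1, 4), round(x2, 4))   # about 1.3333, 2.3333, 1.6667
```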

92 citations


Journal Article•DOI•
Carl Spetzler
TL;DR: This paper describes how a corporate utility function was evolved as a risk policy for capital investment decisions by interviewing corporate executives and developing a mathematical function that could reflect each interviewee's attitude.
Abstract: A corporate utility function plays a key role in the application of decision theory. This paper describes how such a utility function was evolved as a risk policy for capital investment decisions. First, 36 corporate executives were interviewed and their risk attitudes were quantified. From the responses of the interviewees, a mathematical function was developed that could reflect each interviewee's attitude. The fit of the function was tested by checking the reaction of the interviewees to adjusted responses. The functional form that led the interviewees to prefer the adjusted responses to their initial responses was finally accepted. The mathematical form of the function was considered a flexible pattern for a risk policy. The assumption was made that the corporate risk policy would be of this pattern. With the pattern for a risk policy set, it was possible to simplify the method of deriving a particular individual's risk attitude. Using the simplified method, the corporate policy makers were interviewed once more. The results from these interviews were then used as a starting point in two negotiation sessions. As a result of these negotiation sessions, the policy makers agreed on a risk policy for trial purposes. They also agreed to develop a number of major projects using the concepts of risk analysis and the certainty equivalent.
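
The paper does not reproduce its functional form here, but a common pattern for such a risk policy is the exponential utility u(x) = 1 - exp(-x / rho), whose single parameter rho (risk tolerance) can be backed out of an interviewee's certainty equivalent for a reference gamble. The sketch below fits rho by bisection under that assumption; the functional form and the interview response are illustrative.

```python
import math

def risk_tolerance(high, low, certainty_equivalent, lo=1e3, hi=1e9, tol=1e-6):
    """Fit the risk-tolerance parameter rho of u(x) = 1 - exp(-x / rho)
    to a stated certainty equivalent for a 50-50 gamble between `high`
    and `low` (illustrative functional form and numbers)."""
    def ce(rho):
        eu = 0.5 * (1 - math.exp(-high / rho)) + 0.5 * (1 - math.exp(-low / rho))
        return -rho * math.log(1 - eu)          # invert the utility
    # ce(rho) rises toward the gamble's expected value as rho grows; bisect.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ce(mid) < certainty_equivalent:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical interview response: $350,000 for certain is judged equivalent
# to a 50-50 gamble on $1,000,000 or $0.
print(round(risk_tolerance(1_000_000, 0, 350_000)))
```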

78 citations


Journal Article•DOI•
TL;DR: Transient and frequency response experiments indicate that the fusional vergence system is not characterized by sampled data or refractory operation, but this system may utilize prediction to reduce inherent phase lags when provided with a periodic input.
Abstract: Experiments have been performed on the fusional vergence eye-movement mechanism in humans to provide a comparison with the established dynamical characteristics of the versional eye-movement system. Transient and frequency response experiments indicate that the fusional vergence system is not characterized by sampled data or refractory operation. When provided with a periodic input, this system may utilize prediction to reduce inherent phase lags. The gain of the system, although apparently unaffected by the predictive mechanism, is subject to input amplitude-dependent nonlinearities. Under conditions of artificially high loop gain, the system breaks into smooth sustained oscillations at a frequency predicted by frequency response data. The absence of a refractory period in the fusional vergence system is demonstrated by the system response to brief pulsatile stimulation. These results are discussed, emphasizing comparison with corresponding results from experiments on the versional system.

76 citations


Journal Article•DOI•
TL;DR: Several decision algorithms were used to classify complex patterns recorded by TV cameras aboard unmanned, scientific satellites; classification accuracies ranged from 53 percent to 99 percent on independent data.
Abstract: Several decision algorithms were used to classify complex patterns recorded by TV cameras aboard unmanned, scientific satellites. Recognition experiments were performed with two kinds of patterns: lunar topographic features and clouds in the earth's atmosphere. Classification accuracies ranged from 53 percent to 99 percent on independent data.

62 citations


Journal Article•DOI•
TL;DR: The efficiency of learning for an m-state automaton is discussed in terms of expediency and convergence under two distinct types of reinforcement schemes: one based on penalty probabilities and the other on penalty strengths.
Abstract: A stochastic automaton responds to the penalties from a random environment through a reinforcement scheme by changing its state probability distribution in such a way as to reduce the average penalty received. In this manner the automaton is said to possess a variable structure and the ability to learn. This paper discusses the efficiency of learning for an m-state automaton in terms of expediency and convergence, under two distinct types of reinforcement schemes: one based on penalty probabilities and the other on penalty strengths. The functional relationship between the successive probabilities in the reinforcement scheme may be either linear or nonlinear. The stability of the asymptotic expected values of the state probability is discussed in detail. The conditions for optimal and expedient behavior of the automaton are derived. Reduction of the probability of suboptimal performance by adopting the Beta model of the mathematical learning theory is discussed. Convergence is discussed in the light of variance analysis. The initial learning rate is used as a measure of the overall convergence rate. Learning curves can be obtained by solving nonlinear difference equations relating the successive expected values. An analytic expression concerning the convergence behavior of the linear case is derived. It is shown that by a suitable choice of the reinforcement scheme it is possible to increase the separation of asymptotic state probabilities.
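
A representative reinforcement scheme of the linear, penalty-probability type works as follows: after each interaction the probability of the chosen action is increased on a reward and decreased on a penalty, with the remaining mass renormalized. The update below is the standard linear reward-penalty scheme, given as an illustration rather than the paper's exact equations; the environment's penalty probabilities are invented.

```python
import random

def linear_reward_penalty(p, chosen, penalized, a=0.05, b=0.05):
    """One update of a linear reward-penalty scheme for an m-state automaton.

    p        : current action (state) probability vector, sums to 1
    chosen   : index of the action just performed
    penalized: True if the environment returned a penalty
    a, b     : reward and penalty learning rates."""
    m = len(p)
    q = p[:]
    if penalized:
        for j in range(m):
            q[j] = (1 - b) * p[j] + b / (m - 1)
        q[chosen] = (1 - b) * p[chosen]
    else:
        for j in range(m):
            q[j] = (1 - a) * p[j]
        q[chosen] = (1 - a) * p[chosen] + a
    return q

# Simulate against penalty probabilities c[i]; on average the automaton
# comes to favor the action with the smallest penalty probability (index 2).
random.seed(0)
c = [0.7, 0.5, 0.2]
p = [1/3, 1/3, 1/3]
for _ in range(10000):
    i = random.choices(range(3), weights=p)[0]
    p = linear_reward_penalty(p, i, random.random() < c[i])
print([round(x, 3) for x in p])
```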

Journal Article•DOI•
Raimo Bakis, Noel M. Herbst, George Nagy
TL;DR: The recognition of hand-printed numerals is studied on a broad experimental basis within the constraints imposed by a raster scanner generating binary video patterns, a mixed measurement set, and a statistical decision function.
Abstract: The recognition of hand-printed numerals is studied on a broad experimental basis within the constraints imposed by a raster scanner generating binary video patterns, a mixed measurement set, and a statistical decision function. A computer-controlled scanner is used to acquire the characters, to adjust the raster resolution and registration, and to monitor the black-white threshold of the quantizer. The dimensionality of the decision problem is reduced by a hybrid system of measurements. In the measurement design, three types of measurements are generated: a set of "topological" measurements, a set of logical "n-tuples," both designed by hand, and a large set of n-tuples machine generated at random under special constraints. The final set of 100 measurements is selected automatically by a programmed algorithm that attempts to minimize the maximum expected error rate between every character pair. Computer simulation experiments show the effectiveness of the selection procedure, the contribution of the different types of measurements, the effect of the number of measurements selected on recognition, and the desirability of size and shear normalization. The final system is tested on four data sets printed under different degrees of control on the writers. Each data set consists of approximately 10 000 characters. For this comparison, a first-order maximum likelihood function with weights quantized to 100 levels is used. Error versus reject curves are given on several combinations of training and test sets.

Journal Article•DOI•
TL;DR: The problem is introduced in some detail and the concepts involved are reviewed; in the case considered, all the restrictions are linear in certain quantities, so the existence problem is essentially one of satisfying linear constraints.
Abstract: When a decision maker is assessing a preference (utility) function for assets (wealth), it is natural for him to start by making some quantitative assessments of the certainty equivalents of a few simple gambles and some qualitative statements specifying any regions in which he feels risk-averse or risk-seeking and any regions in which he feels decreasingly or increasingly risk-averse or risk-seeking. Several questions then arise. Does any preference function exist which satisfies all the quantitative and qualitative restrictions simultaneously, that is, are the restrictions consistent? If so, how far do they determine the preference function? How might one fair a "smooth" function satisfying the restrictions? This paper is addressed to these questions. First the problem is introduced in some detail, and the concepts involved reviewed. Then the case is considered where the qualitative restrictions only specify regions of risk-aversion or risk-seeking. It turns out in this case that all the restrictions are linear in certain quantities, so that the existence problem is essentially one of satisfying linear constraints. Furthermore, finding the maximum or minimum solution at a specified point is exactly a linear programming problem. Also discussed briefly are the possibility that some smoothing problems might simply introduce a nonlinear objective function (though the general smoothing problem is more complicated) and the problem of making the derivative of the preference function continuous (which is not always possible). If regions of increasing or decreasing risk-aversion are also given, the problem becomes much more difficult.
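
On a finite grid of asset levels the quantitative assessments become linear equalities among the utility values, and a region of risk aversion becomes non-positive second differences, so consistency is a linear feasibility question and the extreme values of the utility at any point are linear programs. A sketch of that formulation with one invented certainty-equivalent assessment; scipy is assumed to be available, and the grid, normalization, and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def utility_bounds(n=11, ce_constraints=(), risk_averse=True, query=5):
    """Check consistency of utility assessments on an equally spaced wealth
    grid and bound u at a query point.

    u[0..n-1] is normalized to u[0] = 0, u[n-1] = 1.  A 50-50 certainty
    equivalent "u[c] = 0.5*u[a] + 0.5*u[b]" is a linear equality; risk
    aversion makes second differences non-positive.  Min and max of
    u[query] are then linear programs (illustrative sketch)."""
    e = lambda i: np.eye(n)[i]
    A_eq, b_eq = [e(0), e(n - 1)], [0.0, 1.0]
    for a, b, c in ce_constraints:                 # u[c] = (u[a] + u[b]) / 2
        A_eq.append(e(c) - 0.5 * e(a) - 0.5 * e(b))
        b_eq.append(0.0)
    A_ub, b_ub = [], []
    for i in range(n - 1):                         # monotone: u[i] <= u[i+1]
        A_ub.append(e(i) - e(i + 1))
        b_ub.append(0.0)
    if risk_averse:                                # concave second differences
        for i in range(1, n - 1):
            A_ub.append(e(i - 1) - 2 * e(i) + e(i + 1))
            b_ub.append(0.0)
    out = []
    for sign in (+1, -1):                          # min, then max, of u[query]
        res = linprog(sign * e(query), A_ub=np.array(A_ub), b_ub=b_ub,
                      A_eq=np.array(A_eq), b_eq=b_eq, bounds=[(0, 1)] * n)
        if not res.success:
            return None                            # assessments inconsistent
        out.append(sign * res.fun)
    return tuple(out)

# One assessed 50-50 gamble: the CE of a gamble between grid points 0 and 10
# is grid point 4; bound the utility at point 5.
print(utility_bounds(ce_constraints=[(0, 10, 4)], query=5))
```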

Journal Article•DOI•
James E. Matheson
TL;DR: This paper shows how the decision analysis approach can be used to determine the most economic method of carrying out computations or analyses, by combining the value structure of the primary decision problem with a model of that procedure.
Abstract: This paper shows how the decision analysis approach can be used to determine the most economic method of carrying out computations or analyses. A primary decision problem is first formulated to obtain a structure for the analysis. Then several computational or analytical procedures, which can be used to analyze the primary decision problem in greater detail, are evaluated to select the most economic procedure. The purpose of each of these procedures is to increase the available information about uncertain parameters before making the primary decision, thereby yielding a "better" decision. Each procedure is evaluated by combining the value structure of the primary decision problem with a model of that procedure. The procedures considered in this paper are clairvoyance, complete analysis, Monte Carlo analysis, and numerical analysis. An example of a bidding problem is used to illustrate the results.

Journal Article•DOI•
TL;DR: The decision faced by a physician when confronted by a patient with an undetermined disease may be simply stated as: "What course of action, in the form of diagnostic tests and/or treatments, should be taken?"
Abstract: The decision faced by a physician when confronted by a patient with an undetermined disease may be simply stated as: "What course of action, in the form of diagnostic tests and/or treatments, should be taken?" In most cases, this problem can be characterized as a sequential decision under uncertainty. Since this is a class of problems for which decision theory has proved a useful tool, it appears fruitful to attempt to apply it to the physician's problem. In this paper, this possibility is explored by describing the application of decision theoretic techniques to a specific case. We first comment on why we believe the proposed model is more appropriate than other methods of treating the problem. Then the proposed model is briefly described in the abstract. The main body of the paper describes a specific problem and its solution by decision theoretic techniques. In the final section, some of the shortcomings of the particular analysis and some of the problems that might be encountered in a more general setting are pointed out.

Journal Article•DOI•
TL;DR: Lagrangian methods as used previously by Lasdon and Pearson are shown to be a particular case of parametric optimization, and the range of their applicability is specified.
Abstract: This paper deals with optimal control in multilevel systems. The decomposition of a system into N subsystems is presented as a problem of formulating the performance index P(m) as a function of N components P(P1, P2, ..., PN) and of transforming the system constraint m ∈ R into a set of constraints m1 ∈ R1(v), m2 ∈ R2(v), ..., v ∈ Rv, where v is the coordination variable. Ways of achieving this goal as applicable to typical systems are presented. Some aspects of choosing the coordination variable and the tradeoffs involved are discussed. Lagrangian methods as used previously by Lasdon and Pearson are shown to be a particular case of parametric optimization, and the range of their applicability is specified. Simple examples of static optimization serve to illustrate the approach.

Journal Article•DOI•
Rita Zemach
TL;DR: The paper discusses the question of control inputs and the feasibility of developing a formal optimal control policy for a university with essentially "open door" admissions.
Abstract: A state-space model describes the behavioral characteristics of a system as a set of relationships among time functions representing its inputs, outputs, and internal state. The model presented describes the utilization of a university's basic resources of personnel, space, and technological equipment in the production of degree programs, research, and public or technical services. It is intended as an aid in achieving an optimal allocation of resources in higher education and in predicting future needs. The internal state of the system is defined as the distribution of students into levels and fields of study, with associated unit "costs" of education received. The model is developed by interconnecting, with appropriate constraints, independent submodels of major functional segments of university activity. The development of computer programs for estimation of parameters with continual updating and for simulation of the system behavior is described. This description includes a review of machine-addressable data files needed to implement the programs. The state model provides a natural form for approaching problems of system optimization and control. The paper discusses the question of control inputs and the feasibility of developing a formal optimal control policy for a university with essentially "open door" admissions.

Journal Article•DOI•
TL;DR: A method is presented for selecting a subset of features from a specified set when economic considerations prevent utilization of the complete set, and the formulation of the feature selection problem as a dynamic programming problem permits an optimal solution to feature selection problems which previously were uncomputable.
Abstract: A method is presented for selecting a subset of features from a specified set when economic considerations prevent utilization of the complete set. The formulation of the feature selection problem as a dynamic programming problem permits an optimal solution to feature selection problems which previously were uncomputable. Although optimality is defined in terms of a particular measure, the Fisher return function, other criteria may be substituted as appropriate to the problem at hand. This mathematical model permits the study of interactions among processing time, cost, and probability of correctly classifying patterns, thus illustrating the advantages of dynamic programming. The natural limitation of the model is that the only features which can be selected are those supplied by its designer. Conceptually, the dynamic programming approach can be extended to problems in which several constraints limit the selection of features, but the computational difficulties become dominant as the number of constraints grows beyond two or three.
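
With a single economic constraint the dynamic program has a familiar knapsack structure: the state is the remaining budget and each stage decides whether to include the next feature. The sketch below assumes an additive return for simplicity; the paper's Fisher return function, feature costs, and values are not reproduced, and the numbers are invented.

```python
def select_features(costs, returns, budget):
    """Choose a subset of features maximizing total return subject to a
    single integer cost constraint, by dynamic programming over the budget.
    (Knapsack-style recursion with an additive return, for illustration.)"""
    n = len(costs)
    # best[b] = (value, chosen feature indices) achievable with budget b
    best = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new_best = best[:]
        for b in range(costs[i], budget + 1):
            val, subset = best[b - costs[i]]
            if val + returns[i] > new_best[b][0]:
                new_best[b] = (val + returns[i], subset + [i])
        best = new_best
    return best[budget]

# Five candidate features with invented processing costs and returns.
print(select_features(costs=[3, 2, 4, 1, 5],
                      returns=[4.0, 3.0, 5.0, 1.5, 6.0], budget=7))
```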

Journal Article•DOI•
TL;DR: This research investigates a technique for machine learning useful in solving problems that involve forcing states in games or control problems; a description language enables the learning program to generalize from one example of a forcing state to all other configurations that are strategically equivalent.
Abstract: The objective of this research was to investigate a technique for machine learning that would be useful in solving problems involving forcing states. In games or control problems, a forcing state is one from which the final goal can always be reached, regardless of what disturbances may arise. A program that learns forcing states in a class of games (in a game-independent format) by working backwards from a previous loss has been written. The class of positions that ultimately results in the opponent's win is learned by the program (using a specially designed description language) and stored in its memory together with the correct move to be made when this pattern reoccurs. These patterns are searched for during future plays of the game. If they are formed by the opponent, the learning program blocks them before the opponent's win sequence can begin. If it forms the patterns first, the learning program initiates the win sequence. The class of games for which the program is effective includes Qubic, Go-Moku, Hex, and the Shannon network games, including Bridge-it. The description language enables the learning program to generalize from one example of a forcing state to all other configurations that are strategically equivalent.

Journal Article•DOI•
TL;DR: The theory for limited memory multistage decision processes is presented and results indicate that the 3-bit memory is, for practical purposes, equivalent to a full memory decision maker.
Abstract: Sequential decision models have heretofore assumed a full memory decision maker. That is, the model is permitted to retain, to any degree of precision, all information needed to optimize decision performance. This information may include functions or variables that change with observations and thus often implies a decision maker which possesses a large amount of soft (erasable) memory. In simple multistage decision problems, soft memory can be reduced to two variables: the log-odds ratio L and the available number of observations n. The log-odds ratio is a quantitative measure of the decision maker's opinion of the cause of the observed variate. This paper examines the effect of limiting the decision maker's soft memory by specifying an m-bit register for the random variable L. The theory for limited memory multistage decision processes with two simple hypotheses is presented. Numerical results indicate that the 3-bit memory is, for practical purposes, equivalent to a full memory decision maker.
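
The limited-memory decision maker carries only an m-bit quantized version of the log-odds ratio L, re-quantizing after each observation. Below is a sketch of the quantized sequential update for two simple hypotheses; the Bernoulli likelihoods, the register range, and the quantizer are invented for illustration.

```python
import math

def quantize(L, bits, L_max=2.0):
    """Round the log-odds ratio to one of 2**bits levels on [-L_max, L_max]."""
    levels = 2 ** bits
    step = 2 * L_max / (levels - 1)
    L = max(-L_max, min(L_max, L))
    return round((L + L_max) / step) * step - L_max

def limited_memory_decision(observations, lik1, lik0, bits=3):
    """Sequential test between two simple hypotheses with an m-bit soft
    memory for the log-odds ratio (illustrative sketch of the idea).

    lik1(x), lik0(x): likelihoods of an observation under H1 and H0."""
    L = 0.0
    for x in observations:
        L = quantize(L + math.log(lik1(x) / lik0(x)), bits)
    return L, ("H1" if L > 0 else "H0")

# Bernoulli observations: H1 says P(x=1) = 0.7, H0 says P(x=1) = 0.4.
lik1 = lambda x: 0.7 if x == 1 else 0.3
lik0 = lambda x: 0.4 if x == 1 else 0.6
# Full-memory log-odds for this sequence is about 1.41; the 3-bit register agrees.
print(limited_memory_decision([1, 1, 0, 1, 1, 0, 1], lik1, lik0))
```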

Journal Article•DOI•
Myron Tribus, Gary Fitts
TL;DR: The Jaynes "widget problem" is reviewed as an example of an application of the principle of maximum entropy in the making of decisions, where the exact solution yields an unusual probability distribution.
Abstract: The Jaynes "widget problem" is reviewed as an example of an application of the principle of maximum entropy in the making of decisions. The exact solution yields an unusual probability distribution. The problem illustrates why some kinds of decisions can be made intuitively and accurately, but would be difficult to rationalize without the principle of maximum entropy.

Journal Article•DOI•
TL;DR: A conceptual framework is described within which several alternate model forms for a particular process can be considered simultaneously, with the primary emphasis on setting a vector of control variables rather than selecting a model per se.
Abstract: This paper describes a conceptual framework within which several alternate model forms for a particular process can be considered simultaneously. The development is in decision theoretic terms with the primary emphasis on the setting of a vector of control variables rather than the selection of a model per se. The argument centers about the role of observed data in altering the state of information about the appropriate model form and its parameters. The basic ideas are illustrated by means of a simple example of modeling a binary source.
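
For a binary source, the role of observed data in altering the state of information about the model form can be illustrated directly: each candidate form assigns a marginal likelihood to the data, and Bayes' rule reweights the forms. A schematic sketch with two invented candidate models, not the paper's own example:

```python
from math import comb

def model_posterior(data, prior_a=0.5):
    """Posterior probability of two candidate model forms for a binary
    source, given observed data (schematic illustration).

    Model A: the source is fair, P(1) = 0.5.
    Model B: P(1) = theta with a uniform prior on theta."""
    n, k = len(data), sum(data)
    like_a = 0.5 ** n                          # P(data | A)
    like_b = 1.0 / ((n + 1) * comb(n, k))      # P(data | B), uniform theta
    post_a = prior_a * like_a
    post_b = (1 - prior_a) * like_b
    z = post_a + post_b
    return post_a / z, post_b / z

# A run of mostly ones makes the "biased source" model form more credible.
print(model_posterior([1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))
```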

Journal Article•DOI•
TL;DR: A direct-search computer program using a heuristic approach is described that attempts to extract feature information automatically from patterns which may consist of open lines, partially overlapping cells, and cells that may lie entirely inside another cell.
Abstract: An attempt is made to extract feature information automatically from patterns which may consist of open lines, partially overlapping cells, and cells that may lie entirely inside another cell. The usual pattern-recognition techniques, such as the linear threshold logic technique and the masking or template technique, are not practical here, if not entirely impossible. In this paper, a direct-search computer program using a heuristic approach is described. A test pattern is used to illustrate the capability of the program. The subject should be of general interest to those in the field of automation and cybernetics.

Journal Article•DOI•
TL;DR: The process of determining what should be in an electronic package or on an LSI slice is considered a partitioning problem with a set of constraints; an approach to system decomposition and partitioning is established from the viewpoint of minimizing external signal lines by minimizing the number of pins per set of packages.
Abstract: The process of determining what should be in an electronic package or on an LSI slice is considered a partitioning problem with a set of constraints. An approach to system decomposition and partitioning is established from the viewpoint of minimizing external signal lines by minimizing the number of pins per set of packages, which will improve reliability and performance and reduce cost. Applications of combinatorial analysis and signal graph theory to partition a group of circuits are illustrated. System partitioning levels are identified as complexity levels. A partitioning procedure is given for use at any complexity level, and equations are provided for counting the possible number of partitions. A combinatorial technique is developed to count the number of pins per package. System integration is considered here to be the process of converting interconnections of circuits and elements to intraconnections of function blocks. An integration factor concept, based on the number of signal lines and required pins per package, is introduced. The integration factor is used to ascertain tradeoffs and compromises of possible partitions and integration techniques. The rules given provide an analytic method for solving some of the problems of system design.

Journal Article•DOI•
E. Gerald Hurst
TL;DR: Two Bayesian autoregressive time series models for partially observable dynamic processes are presented; the second adds the facility for simultaneously inferring an unknown and unchanging parameter of the time series.
Abstract: Two Bayesian autoregressive time series models for partially observable dynamic processes are presented. In the first model, a general inference procedure is developed for the situation in which k previous values of the time series plus a change error determine the next value. This general model is specialized to an example in which the observational and change errors follow a normal probability law; the results for k = 1 are given and discussed. The second general model adds the facility for simultaneously inferring an unknown and unchanging parameter of the time series. This model is specialized to the same normal example presented earlier, with the precision of the change error as the unknown process parameter.
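
In the normal special case with k = 1, the inference reduces to a scalar recursion on the mean and variance of the current value of the series, algebraically the same as a one-dimensional Kalman filter. A sketch under those assumptions; the coefficient, noise variances, and data below are invented.

```python
def ar1_update(mean, var, y, phi, q, r):
    """One Bayesian update for a partially observable AR(1) process.

    Assumed model (normal case, k = 1):
        x[t] = phi * x[t-1] + change error,   variance q
        y[t] = x[t] + observation error,      variance r
    (mean, var) summarize the posterior of x[t-1]; the function returns the
    posterior of x[t] after observing y[t].  Scalar Kalman-type recursion."""
    # Predict the next value of the series.
    m_pred = phi * mean
    v_pred = phi * phi * var + q
    # Condition on the noisy observation.
    gain = v_pred / (v_pred + r)
    m_post = m_pred + gain * (y - m_pred)
    v_post = (1 - gain) * v_pred
    return m_post, v_post

mean, var = 0.0, 1.0
for y in [1.2, 0.9, 1.4, 1.1]:
    mean, var = ar1_update(mean, var, y, phi=0.8, q=0.1, r=0.5)
print(round(mean, 3), round(var, 4))
```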

Journal Article•DOI•
TL;DR: It is shown that the decision maker can describe a preference ordering of this kind by stating that he is exposed to a risk, represented by a stochastic process, and that his objective is to find the decision which will minimize the probability of his ruin.
Abstract: This paper surveys some classical decision problems with and without uncertainty. From the survey, it is concluded that the natural generalization of these problems leads to the problem of describing preference orderings over sets of stochastic processes. It is shown that the decision maker can describe a preference ordering of this kind by stating that he is exposed to a risk, represented by a stochastic process, and that his objective is to find the decision which will minimize the probability of his ruin. If this probability is equal to one, the natural objective is to maximize the expected time before ruin occurs.
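
Ranking decisions by their ruin probability is easy to approximate by simulation: run the capital process forward many times and count the runs that hit zero. A Monte Carlo sketch for a random-walk capital process with invented parameters, comparing a safer decision with a higher-drift but riskier one:

```python
import random

def ruin_probability(initial, drift, sigma, horizon, trials=10000, seed=1):
    """Monte Carlo estimate of the probability of ruin (capital reaching
    zero) within `horizon` periods, for a random-walk capital process.
    Illustrative model: capital changes by a normal increment each period."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = initial
        for _ in range(horizon):
            x += rng.gauss(drift, sigma)
            if x <= 0:
                ruined += 1
                break
    return ruined / trials

# Two hypothetical decisions: low drift with low variance versus higher
# drift with much higher variance (all numbers invented for illustration).
print(ruin_probability(initial=10.0, drift=0.2, sigma=1.0, horizon=200))
print(ruin_probability(initial=10.0, drift=0.5, sigma=4.0, horizon=200))
```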

Journal Article•DOI•
TL;DR: The model is constructed to perform four modes of control: 1) probing, 2) gradient, 3) heuristic, and 4) terminal, which simulates the function of a human operator in a control system and the evolution of heuristics for control.
Abstract: This paper presents a mathematical model for decision making in control systems. The model is constructed to perform four modes of control: 1) probing, 2) gradient, 3) heuristic, and 4) terminal. The operation of the model switches from one mode to another by following certain decision logic, which simulates the function of a human operator in a control system and the evolution of heuristics for control. The simulation results compare favorably with the data obtained from experiments with subjects.

Journal Article•DOI•
TL;DR: The present paper summarizes the results of a subsequent analytical investigation which included a digital simulation of Wiener's Hermite-Laguerre expansion procedure, focusing on the aspects of realizability, convergence, and applicability of the method with regard to 1) the classes of stochastic inputs for which the procedure is valid, and 2) the parameters of those processes.
Abstract: Previous papers have proposed the application of Wiener's Hermite-Laguerre expansion procedure to the multiple-alternative, discrete-decision problem with learning, characteristic of many waveform or stochastic-process pattern-recognition problems. Both sequential and nonsequential procedures were formulated; the resulting models are functionally analogous to a generalized Bayes'-net type of pattern recognizer or decision maker for stochastic processes, differing from usual Bayes' nets in their actual mathematical or circuit configurations and size-determining factors. It is to be noted that for ergodic processes (or approximations thereto), the procedure, if it can be applied, is nonparametric, i.e., not dependent upon prior or explicit knowledge of the form of the probability distribution governing the behavior of the stochastic process. The applicability of the resulting system to problems in cybernetics, intelligence, and learning was discussed previously. The present paper summarizes the results of a subsequent analytical investigation which included a digital simulation of the procedure. Emphasis is on the aspects of realizability, convergence, and applicability of the method with regard to 1) the classes of stochastic inputs for which the procedure is valid, and 2) the parameters of those processes. A Wiener or Wiener-derived white-noise process is used as the benchmark process here. Based on the results of the analysis, the introduction of certain preprocessors to extend the applicability of the procedure is suggested.

Journal Article•DOI•
TL;DR: Some extensions of classical convex programming to problems formulated on an abstract normed space are described, and the utility of the normed-space formulation is illustrated by some simple applications.
Abstract: This paper describes some extensions of classical convex programming to problems formulated on an abstract normed space. The utility of the normed-space formulation is illustrated by some simple applications.