
Showing papers on "Generalization" published in 1979





Journal ArticleDOI
TL;DR: In this paper, a generalization of von Neumann-Morgenstern utility of the vector of rewards is proposed, where an individual's preferences concerning the timing of the resolution of uncertainty are taken into account.
Abstract: Finite horizon sequential decision problems with a "temporal von Neumann-Morgenstern utility" criterion are analyzed. This criterion, as developed in [7], is a generalization of von Neumann-Morgenstern (expected) utility of the vector of rewards, wherein an individual's preferences concerning the timing of the resolution of uncertainty are taken into account. The preference theory underlying this criterion is reviewed and then extended in natural fashion to yield preferences for strategies in sequential decision problems. The main result is that value functions for sequential decision problems can be defined by a dynamic programming recursion using the functions which represent the original preferences, and these value functions represent the preferences defined on strategies. This permits citation of standard results from the dynamic programming literature concerning the existence of (memoryless) strategies which are optimal with respect to the given preference relation.
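A minimal sketch of the kind of recursion described above: a generic backward-induction loop in which a stage aggregator stands in for the functions representing the original preferences. All names and signatures (states, actions, transition, reward, aggregate) are illustrative assumptions, not the paper's notation.

```python
# Illustrative backward-induction sketch (not the paper's notation).  The stage
# aggregator `aggregate` plays the role of the functions representing the
# original preferences; ordinary expected utility is the special case
# aggregate(t, r, cont) = r + sum(p * v for p, v in cont).

def solve(T, states, actions, transition, reward, aggregate):
    """Finite-horizon backward induction.

    transition(t, s, a) -> list of (probability, next_state)
    reward(t, s, a)     -> immediate reward
    aggregate(t, r, cont) -> value of receiving reward r now and facing the
        lottery cont = [(probability, continuation_value), ...] afterwards.
    """
    V = {s: 0.0 for s in states}            # terminal values
    policy = {}
    for t in reversed(range(T)):
        V_next, V = V, {}
        for s in states:
            best_val, best_act = None, None
            for a in actions(t, s):
                cont = [(p, V_next[s2]) for p, s2 in transition(t, s, a)]
                val = aggregate(t, reward(t, s, a), cont)
                if best_val is None or val > best_val:
                    best_val, best_act = val, a
            V[s] = best_val                 # stage-t value function
            policy[(t, s)] = best_act
    return V, policy
```

Because the recursion only calls `aggregate`, the same loop covers both the expected-utility special case and aggregators that are nonlinear in the continuation lottery.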

110 citations


Journal ArticleDOI
TL;DR: An extension to all even moduli is demonstrated and proved, and a theorem that holds for all moduli, based on the Rogers-Ramanujan identities, is provided.

106 citations


Journal ArticleDOI
TL;DR: In this article, a possible generalization of Carleson measure to the bi-disc is discussed, and a class of such measures induced by bi-harmonic functions is indicated.
Abstract: In this paper we discuss a possible generalization of Carleson measure to the bi-disc, and indicate a class of such measures induced by bi-harmonic functions. Let D denote the open unit disc {z : |z| < 1}, and let u(re^{iθ}) = (P_r * f)(θ), where P_r(θ) = (1 − r²)/(1 − 2r cos θ + r²) is the Poisson kernel. The following theorem is one of the main results in [1].
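For orientation, the classical one-variable notion being generalized can be stated roughly as follows (the bi-disc statement is the subject of the paper and is not reproduced here); S(I) denotes the Carleson box over an arc I of the circle.

```latex
% Classical one-variable Carleson condition:
\[
  \mu\bigl(S(I)\bigr) \;\le\; C\,|I| \qquad \text{for every arc } I \subset \partial D
\]
% characterizes the measures \mu for which the Poisson integral embeds boundedly,
% i.e. for 1 < p < \infty and u = P_r * f,
\[
  \int_D |u(z)|^p \, d\mu(z) \;\le\; C_p\, \|f\|_{L^p(\partial D)}^{p}.
\]
```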

104 citations


Journal ArticleDOI
TL;DR: The principal aim of this paper is to show that the notion of forking is an easy and natural notion that is no less important for stable theories than other notions of dependence.
Abstract: The notion of forking has been introduced by Shelah, and a full treatment of it will appear in his book on stability [S1]. The principal aim of this paper is to show that it is an easy and natural notion. Consider some well-known examples of ℵ₀-stable theories: vector spaces over Q, algebraically closed fields, differentially closed fields of characteristic 0; in each of these cases, we have a natural notion of independence: linear, algebraic and differential independence respectively. Forking gives a generalization of these notions. More precisely, if A ⊆ B are subsets of some model and c a point of this model, the fact that the type of c over B does not fork over A means that there are no more relations of dependence between c and B than there already existed between c and A. In the case of the vector spaces, this means that c is in the space generated by B only if it is already in the space generated by A. In the case of differentially closed fields, this means that the minimal differential equations of c with coefficients respectively in A and in B have the same order. Of course, these notions of dependence are essential for the study of the above mentioned structures. Forking is no less important for stable theories. A glance at Shelah's book will convince the reader that this is the case. What we have to do is the following. Assuming T stable and given A ⊆ B and p a type over A, we want to distinguish among the extensions of p to B some of them that we shall call the nonforking extensions of p.

101 citations


Journal ArticleDOI
TL;DR: In this article, the Bayesian Steady Forecasting model is generalized to a wide class of processes other than the normal by defining the time series on the decision space, including a Beta-Binomial process, a Poisson-Gamma process and a Student-t sample distribution steady model.
Abstract: The Bayesian Steady Forecasting model is generalized to a very wide class of processes other than the normal by defining the time series on the decision space. Examples of such processes are presented including a Beta-Binomial process, a Poisson-Gamma process and a Student-t sample distribution steady model. Simple updating relations are given for most of the processes discussed.
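A minimal sketch of the conjugate Beta-Binomial updating that such a steady model builds on; the paper's own updating relations (including any discounting of past information) are not reproduced, and the function names are illustrative.

```python
# Conjugate Beta-Binomial updating: a sketch of the building block behind a
# Beta-Binomial steady forecasting model (illustrative names, no discounting).

def beta_binomial_update(alpha, beta, successes, trials):
    """Posterior Beta parameters after observing `successes` out of `trials`."""
    return alpha + successes, beta + trials - successes

def one_step_forecast_mean(alpha, beta, n_next):
    """Predictive mean of the next Binomial(n_next, p) observation."""
    return n_next * alpha / (alpha + beta)

# Example: start from a Beta(1, 1) prior, then observe 7 successes in 10 trials.
a, b = beta_binomial_update(1.0, 1.0, successes=7, trials=10)
print(one_step_forecast_mean(a, b, n_next=10))   # about 6.67
```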

98 citations


Journal ArticleDOI
TL;DR: In this paper, the structure of covariant measurements is described in the cases of finite-dimensional or irreducible representations of the symmetry group, and a noncommutative analogue of the Hunt-Stein theorem in mathematical statistics is proved.

92 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that for each finite set A = {a_1, ..., a_n} in R^d with n ≥ d + 2, one can find a linear map f : R^{d+1} → R^d and a set A' = {a'_1, ..., a'_n} ⊂ R^{d+1} such that f(a'_i) = a_i for i = 1, 2, ..., n.
Abstract: 1. The well-known theorem of RADON [3] says that if A ⊂ R^d and |A| ≥ d + 2, then there exist B, C ⊂ A, B ∩ C = ∅, such that conv B ∩ conv C is not empty. It is clear that for each finite set A = {a_1, ..., a_n} in R^d with n ≥ d + 2 one can find a linear map f : R^{d+1} → R^d and a set A' = {a'_1, ..., a'_n} ⊂ R^{d+1} such that f(a'_i) = a_i, i = 1, 2, ..., n, and int conv A' is not empty and vert conv A' = A'. In view of this fact, Radon's theorem can be stated in the following way.
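A concrete instance of Radon's theorem for d = 2 and n = d + 2 = 4 points (the two diagonals of the unit square cross):

```latex
\[
  A = \{(0,0),\,(1,0),\,(0,1),\,(1,1)\} \subset \mathbb{R}^2, \qquad
  B = \{(0,0),\,(1,1)\},\quad C = \{(1,0),\,(0,1)\},
\]
\[
  \operatorname{conv} B \cap \operatorname{conv} C
  \;=\; \{(\tfrac12, \tfrac12)\} \;\neq\; \emptyset .
\]
```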

89 citations


Journal ArticleDOI
TL;DR: Schubert's method for solving sparse nonlinear equations is an extension of Broyden's method. The zero-nonzero structure defined by the sparse Jacobian is preserved by updating the approximate Jacobian row by row, as discussed by the authors.
Abstract: Schubert's method for solving sparse nonlinear equations is an extension of Broyden's method. The zero-nonzero structure defined by the sparse Jacobian is preserved by updating the approximate Jacobian row by row. An estimate is presented which permits the extension of the convergence results for Broyden's method to Schubert's method. The analysis for local and q-superlinear convergence given here includes, as a special case, results in a recent paper by B. Lam; this generalization seems theoretically and computationally more satisfying. A Kantorovich analysis paralleling one for Broyden's method is given. This leads to a convergence result for linear equations that includes another result by Lam. A result by Moré and Trangenstein is extended to show that a modified Schubert's method applied to linear equations is globally and q-superlinearly convergent.
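A minimal sketch of a Schubert-type row-wise sparse secant update, assuming a dense matrix with an explicit sparsity mask; the paper's convergence analysis and practical safeguards are not reproduced.

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """One row-wise sparse secant update of an approximate Jacobian B (a sketch).

    B       : (n, n) current Jacobian approximation
    s       : step x_new - x_old
    y       : F(x_new) - F(x_old)
    pattern : (n, n) boolean mask of allowed nonzeros (the sparsity structure)

    Each row is corrected only in its allowed nonzero positions, so the
    zero/nonzero structure is preserved; rows whose masked step is zero are
    left unchanged, and the secant condition (B_new @ s)[i] = y[i] holds for
    the updated rows.
    """
    B = B.copy()
    for i in range(B.shape[0]):
        s_i = np.where(pattern[i], s, 0.0)            # step restricted to row i's pattern
        denom = s_i @ s_i
        if denom > 0.0:
            B[i] += (y[i] - B[i] @ s) / denom * s_i   # secant correction for row i
    return B
```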

Journal ArticleDOI
TL;DR: It is found that the new SIRT methods converge faster than Gilbert's SIRT but are more sensitive to noise present in the data; however, the faster convergence rates allow termination before the noise contribution degrades the reconstructed image excessively.

Journal ArticleDOI
01 Jan 1979
TL;DR: In this paper, Cartographic transformations are applied to locative geographic data and to substantive geographic data, and the theoretical importance of the inverses is in the study of error propagation effects.
Abstract: Cartographic transformations are applied to locative geographic data and to substantive geographic data. Conversions between locative aliases are between points, lines, and areas. Substantive transformations occur in map interpolation, filtering, and generalization, and in map reading. The theoretical importance of the inverses is in the study of error propagation effects.

Journal ArticleDOI
TL;DR: The paper summarizes regression analysis, including generalized least squares, which might be used for simulation responses with non-constant variances, and the validity of the postulated regression metamodel is tested statistically.
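For reference, the standard generalized least squares estimator such an analysis relies on when the response variances are not constant; here V denotes the covariance matrix of the simulation responses y.

```latex
\[
  \hat{\beta}_{\mathrm{GLS}} \;=\; (X^{\top} V^{-1} X)^{-1} X^{\top} V^{-1} y,
  \qquad
  \operatorname{Cov}\bigl(\hat{\beta}_{\mathrm{GLS}}\bigr) \;=\; (X^{\top} V^{-1} X)^{-1}.
\]
```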

Journal ArticleDOI
TL;DR: In this paper, the Richardson extrapolation process is generalized to cover a large class of sequences; error bounds for the approximations are obtained, and some convergence theorems for two different limiting processes are given.
Abstract: The Richardson extrapolation process is generalized to cover a large class of sequences. Error bounds for the approximations are obtained and some convergence theorems for two different limiting processes are given. The results are illustrated by an oscillatory infinite integral.
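A sketch of the textbook special case being generalized: Richardson extrapolation with step halving, assuming the approximation's error expands in integer powers of h. The paper's extension to a wider class of sequences is not shown.

```python
import math

# Textbook Richardson extrapolation, assuming A(h) = L + c1*h + c2*h**2 + ...
# and step sizes h, h/2, h/4, ...; entry T[k][j] eliminates the terms up to h**j.

def richardson(A, h0, levels):
    """Return the triangular extrapolation table built from A(h0), A(h0/2), ..."""
    T = [[A(h0 / 2**k)] for k in range(levels)]
    for k in range(1, levels):
        for j in range(1, k + 1):
            T[k].append((2**j * T[k][j - 1] - T[k - 1][j - 1]) / (2**j - 1))
    return T

# Example: forward-difference approximation of d/dx exp(x) at x = 0 (true value 1).
approx = lambda h: (math.exp(h) - 1.0) / h            # error is c1*h + c2*h**2 + ...
print(richardson(approx, h0=0.5, levels=5)[-1][-1])   # close to 1.0
```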

Book ChapterDOI
01 Jan 1979
TL;DR: In this article, running M-estimates are a natural generalization of Kernel-type smoothers (moving averages) and the rate of convergence can be expected from these estimates and the leading bias and variance terms.
Abstract: In curve estimation, running M-estimates are a natural generalization of Kernel-type smoothers (moving averages). We find the rate of convergence that can be expected from these estimates and the leading bias and variance terms. We also explain the effect of twicing for Kernel-type smoothers and give some rationale for its use in robust curve estimation.
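A minimal sketch of a running M-estimate: a windowed Huber location estimate computed by iteratively reweighted averaging, which reduces to the ordinary moving average when the tuning constant is very large. Window shape, tuning constant, and names are illustrative assumptions, not the paper's kernel-weighted formulation.

```python
import numpy as np

def running_m_estimate(y, half_width, c=1.345, n_iter=20):
    """Running (windowed) Huber M-estimate of location: a robust moving average.

    For each index t, the location m minimizing the Huber criterion over a
    window of 2*half_width + 1 observations is found by iteratively reweighted
    averaging.  A sketch only: the paper's kernel weights, psi-function, and
    bias/variance analysis are not reproduced.
    """
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for t in range(len(y)):
        window = y[max(0, t - half_width): t + half_width + 1]
        m = np.median(window)                                      # robust start
        for _ in range(n_iter):
            r = window - m
            w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
            m = np.sum(w * window) / np.sum(w)
        out[t] = m
    return out
```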


Journal ArticleDOI
TL;DR: In this paper, the authors investigated all type D solutions of the Einstein-Maxwell equations with cosmological constant such that the Debever-Penrose vectors are aligned along the two eigenvectors of the electromagnetic field, in the special case when a direct generalization of the Goldberg-Sachs theorem is not possible.
Abstract: We investigate all type D solutions of the Einstein–Maxwell equations (with cosmological constant) such that the Debever–Penrose vectors are aligned along the two eigenvectors of the electromagnetic field, in the special case when a direct generalization of the Goldberg–Sachs theorem is not possible. A solution is found which admits no Killing vectors. We also present an extension of the Goldberg–Sachs theorem valid for type D metrics.

Proceedings Article
20 Aug 1979
TL;DR: The problem of concept learning, or forming a general description of a class of objects given a set of examples and non-examples, is viewed here as a search problem.
Abstract: The problem of concept learning, or forming a general description of a class of objects given a set of examples and non-examples, is viewed here as a search problem. Existing programs that generalize from examples are characterized in terms of the classes of search strategies that they employ. Several classes of search strategies are then analyzed and compared in terms of their relative capabilities and computational complexities.
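One very simple search strategy of the kind being compared can be illustrated by maintaining the most specific attribute-vector hypothesis consistent with the positive examples seen so far; this is only an illustration of generalization as search, not a reproduction of the paper's analysis, and the attribute encoding is an assumption.

```python
# Minimal "generalization as search" illustration: keep the most specific
# hypothesis (a vector of attribute constraints, '?' meaning "any value")
# consistent with all positive examples seen so far.

def generalize(hypothesis, example):
    """Minimally generalize an attribute-vector hypothesis to cover `example`."""
    if hypothesis is None:                    # no positives seen yet: most specific
        return list(example)
    return [h if h == e else '?' for h, e in zip(hypothesis, example)]

def most_specific_hypothesis(positive_examples):
    h = None
    for ex in positive_examples:
        h = generalize(h, ex)
    return h

# Example: objects described by (shape, size, colour).
positives = [('circle', 'large', 'red'), ('circle', 'small', 'red')]
print(most_specific_hypothesis(positives))    # ['circle', '?', 'red']
```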


Proceedings ArticleDOI
30 Apr 1979
TL;DR: A pebbling problem which has been used to study the storage requirements of various models of computation is examined, and the original problem is proved P-space complete by employing a modification of Lingas's proof.
Abstract: We examine a pebbling problem which has been used to study the storage requirements of various models of computation. Sethi has shown this problem to be NP-hard and Lingas has shown a generalization to be P-space complete. We prove the original problem P-space complete by employing a modification of Lingas's proof. The pebbling problem is one of the few examples of a P-space complete problem not exhibiting any obvious quantifier alternation.

Journal ArticleDOI
TL;DR: The main result generalizes the theorem of Cesari and Vincent, according to which the period of a word is the maximum of the minimal repetitions; this allows a sharpened version of the solution to a problem settled by Schützenberger.

Journal ArticleDOI
TL;DR: In this article, a generalization of the stochastic multilocation problem of inventory theory is considered; a qualitative analysis of the problem is presented and it is shown that optimal policies have a certain geometric form.
Abstract: This paper examines a convex programming problem that arises in several contexts. In particular, the formulation was motivated by a generalization of the stochastic multilocation problem of inventory theory. The formulation also subsumes some “active” models of stochastic programming. A qualitative analysis of the problem is presented and it is shown that optimal policies have a certain geometric form. Properties of the optimal policy and of the optimal value function are described.

Journal ArticleDOI
TL;DR: In this paper, a theoretical and experimental analysis of self-inducting turbo aerator performance was carried out based on the theory of water jet injector operation, and it was shown that the aerator's injection coefficient is completely determined by only two dimensionless groups, CH and EuG, between which there is a single valued dependence on whose basis generalization of experimental data is possible.
Abstract: A theoretical and experimental analysis of self-inducting turbo aerator performance was carried out based on the theory of water jet injector operation. It was shown that the aerator's injection coefficient is completely determined by only two dimensionless groups, CH and EuG, between which there is a single valued dependence on whose basis generalization of experimental data is possible. An analytical expression for the optimality criterion or aerator performance index, accounting for aerator gas capacity, power consumption, and submergence, has been obtained in terms of CH and EuG with only one adjustable parameter. From the above two dependencies, a methodology was developed for determining optimal operating conditions, by means of which all the data necessary for the optimal design can be obtained from the results of bench scale experiment.

Journal ArticleDOI
TL;DR: For the standard case where the observations are uncorrelated and have equal variance, the optimal moving averages generalize two well-known optimal moving averages: the minimum-variance and the minimum-Rz moving averages.
Abstract: In this paper a new criterion for judging the properties of moving averages is given, and moving averages which are optimal according to this criterion under general assumptions are derived. For the standard case where the observations are uncorrelated and have equal variance, our optimal moving averages generalize two well-known optimal moving averages: The minimum-variance and the minimum-Rz moving averages. This case is given some particular attention in the theoretical discussion, and some Monte Carlo experiments throw further light on it. These investigations indicate that our generalization is of practical as well as theoretical interest. The paper also contains the result that Spencer's 21-term moving average is approximately equal to the corresponding minimum-R 5 moving average.
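For reference, the variance calculation behind the minimum-variance benchmark mentioned above: for a centered moving average of uncorrelated observations with equal variance σ², and weights constrained only to sum to one,

```latex
\[
  \widehat{y}_t \;=\; \sum_{i=-m}^{m} w_i\, y_{t+i},
  \qquad \sum_{i=-m}^{m} w_i = 1,
  \qquad
  \operatorname{Var}\bigl(\widehat{y}_t\bigr) \;=\; \sigma^{2} \sum_{i=-m}^{m} w_i^{2},
\]
```

which is minimized by the equal weights w_i = 1/(2m+1), i.e. the simple moving average; the paper's criterion adds further requirements on the weights.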

Journal ArticleDOI
TL;DR: Balinski and Young as mentioned in this paper developed a class of new methods for congressional apportionment by building upon and extending the recent work done by M. L. Balinski and H. P. Young.
Abstract: This paper develops a class of new methods for congressional apportionment by building upon and extending the recent work done by M. L. Balinski and H. P. Young. The class, which also includes the Balinski–Young Quota Method, consists of all apportionment methods satisfying two basic criteria (satisfying quota and house monotonicity) articulated by Balinski and Young. The class is defined for both the pure apportionment problem and the apportionment problem with minimum requirements. In the latter case, a new generalization of lower quota is presented, which differs from Balinski and Young’s generalization. Finally, reasons are given for rejecting the Balinski–Young Quota Method and preferring instead one of the new methods in the class.
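For reference, the quota quantities the criteria refer to can be sketched as follows; the apportionment methods themselves (and house monotonicity, which is a property of a method across house sizes rather than of a single apportionment) are not reproduced, and the example figures are hypothetical.

```python
import math

def quotas(populations, house_size):
    """Standard quotas q_i = p_i * h / sum(p) with their lower/upper bounds."""
    total = sum(populations)
    q = [p * house_size / total for p in populations]
    return q, [math.floor(x) for x in q], [math.ceil(x) for x in q]

def satisfies_quota(seats, populations, house_size):
    """True if every state's seat count lies between its lower and upper quota."""
    _, lo, hi = quotas(populations, house_size)
    return all(l <= s <= h for s, l, h in zip(seats, lo, hi))

# Hypothetical example: three states sharing 10 seats.
pops = [5300, 3200, 1500]
print(quotas(pops, 10)[0])                   # [5.3, 3.2, 1.5]
print(satisfies_quota([5, 3, 2], pops, 10))  # True
```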

Journal ArticleDOI
TL;DR: Two related experiments examined generalization across contexts involving different sentence forms when selected grammatical features were trained within a single syntactic context and suggested that training on certain forms of either verbal auxiliary or copula may be sufficient to generate correct production of both of them.
Abstract: Two related experiments examined generalization across contexts involving different sentence forms when selected grammatical features were trained within a single syntactic context. Results obtained...

Journal ArticleDOI
TL;DR: A theorem-proving system has been programmed for automating mildly complex proofs by structural induction; it can cope with situations as complex as the definition and correctness proof of a simple compiling algorithm for expressions.

Journal ArticleDOI
TL;DR: A class of functions that can be synthesized from example problems is defined; the algorithmic representation of these functions is the interpretation of a given scheme, and one can compute the number of examples necessary to characterize in a unique way a function of this class.
Abstract: We define a class of functions that can be synthesized from example problems. The algorithmic representation of these functions is the interpretation of a given scheme. The instantiation of the scheme variables is realized by a new method which uses pattern matching, then, if necessary, generalization and further pattern matching. One can compute the number of examples necessary to characterize in a unique way a function of this class.

Journal ArticleDOI
TL;DR: In this paper, a mathematical generalization of the concept of quantum spin is constructed in which the role of the symmetry group O3 is replaced by Oν (ν = 2, 3, 4, ...).
Abstract: A mathematical generalization of the concept of quantum spin is constructed in which the role of the symmetry group O3 is replaced by Oν (ν = 2, 3, 4, ...). The notion of spin direction is replaced by a point on the manifold of oriented planes in ℝ^ν. The theory of coherent states is developed, and it is shown that the natural generalizations of Lieb's formulae connecting quantum spins and classical configuration space hold true. This leads to the Lieb inequalities [1] and with them to the limit theorems as the quantum spin l approaches infinity. The critical step in the proofs is the validity of the appropriate generalization of the Wigner-Eckart theorem.