
Showing papers on "Function (mathematics)" published in 1992


Journal Article
TL;DR: The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input.
Abstract: A computational model is a representation of some physical or other system of interest, first expressed mathematically and then implemented in the form of a computer program; it may be viewed as a function of inputs that, when evaluated, produces outputs. Motivation for this article comes from computational models that are deterministic, complicated enough to make classical mathematical analysis impractical, and that have a moderate-to-large number of inputs. The problem of designing computational experiments to determine which inputs have important effects on an output is considered. The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input. Advantages of this approach include a lack of reliance on assumptions of relative sparsity of important inputs, monotonicity of outputs with respect to inputs, or ad...
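The elementary-effects idea is easy to prototype. Below is a minimal sketch (not the paper's exact sampling plan): each randomized one-factor-at-a-time trajectory perturbs every input once by a step delta, and the absolute elementary effects are averaged per input. The model f, the step size, and the unit-cube input domain are all illustrative assumptions.

```python
import numpy as np

def mean_abs_elementary_effects(f, k, r, delta=0.5, seed=0):
    """Screen k inputs of model f with r randomized OAT trajectories.

    Each trajectory starts at a random base point and perturbs one
    input at a time by +delta; the observed elementary effect is the
    output change divided by delta.  (Illustrative sketch only.)
    """
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # random base point
        y_prev = f(x)
        for i in rng.permutation(k):               # randomized order
            x[i] += delta                          # one factor at a time
            y = f(x)
            effects[i].append((y - y_prev) / delta)
            y_prev = y
    return [float(np.mean(np.abs(e))) for e in effects]

# toy deterministic model: input 0 is important, input 2 is inert
f = lambda x: 10.0 * x[0] + x[1] ** 2
print(mean_abs_elementary_effects(f, k=3, r=25))
```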

3,396 citations


Journal ArticleDOI
TL;DR: A recursion-theoretic characterization of FP which describes polynomial time computation independently of any externally imposed resource bounds, and avoids the explicit size bounds on recursion of Cobham.
Abstract: We give a recursion-theoretic characterization of FP which describes polynomial time computation independently of any externally imposed resource bounds. In particular, this syntactic characterization avoids the explicit size bounds on recursion (and the initial function 2^(|x|·|y|)) of Cobham.
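For readers unfamiliar with the construction, the scheme in question (safe recursion on notation) separates each function's arguments into normal ones, written before the semicolon, and safe ones, written after it. The following display is a standard statement of the scheme, not a quotation from the paper:

```latex
\begin{aligned}
f(0, \vec{x}\,;\, \vec{a}) &= g(\vec{x}\,;\, \vec{a}),\\
f(2y + i, \vec{x}\,;\, \vec{a}) &= h_i\bigl(y, \vec{x}\,;\, \vec{a},\, f(y, \vec{x}\,;\, \vec{a})\bigr),
\qquad i \in \{0, 1\},\ 2y + i \neq 0 .
\end{aligned}
```

The recursion runs on a normal argument, but the recursive value may be used only in a safe position of h_i; this blocks the nested recursions that would otherwise let functions like 2^(|x|·|y|) be defined, so no explicit size bound is needed.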

461 citations


G. K. Watugala1
01 Jan 1992
TL;DR: u and F(u) are no longer dummies but can be treated as replicas of t and f(t) and expressed in the same respective units, so one can check the consistency of units of a differential equation even after the Sumudu transform.
Abstract: It is possible to solve differential equations, integral equations, and control engineering problems by a transformation in which the differentiation and integration of f(t) in the t-domain are made equivalent to division and multiplication of F(u) by u in the u-domain. The new transformation, called the Sumudu transformation, possesses many interesting properties which make the transformation process easier for a newcomer to visualize. Some of the properties of the Sumudu transformation are: (1) The unit-step function in the t-domain is transformed to unity in the u-domain. (2) Scaling of f(t) in the t-domain is equivalent to scaling of F(u) by the same scale factor, and this is true even for negative scale factors. (3) The limit of f(t) as t tends to zero is equal to the limit of F(u) as u tends to zero. (4) The slope of f(t) at t = 0 is equal to the slope of F(u) at u = 0. Thus u and F(u) are no longer dummies but can be treated as replicas of t and f(t) and can be expressed in the same respective units; therefore one can check the consistency of units of a differential equation even after the Sumudu transform.
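The listed properties can be checked symbolically. The sketch below assumes one common integral definition of the transform, F(u) = ∫₀^∞ f(ut) e^(−t) dt, which the abstract does not spell out:

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)

def sumudu(f):
    # assumed definition: F(u) = Integral_0^oo f(u*t) * exp(-t) dt
    return sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo))

print(sumudu(sp.Integer(1)))   # unit step -> 1       (property 1)
print(sumudu(t))               # t -> u: units of t are preserved
print(sumudu(sp.sin(t)))       # sin(t) -> u/(u**2 + 1)
```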

440 citations


Journal ArticleDOI
01 Apr 1992
TL;DR: In this paper, it is shown that the Hyers-Ulam stability theorem for approximately linear mappings does not hold for p = 1: there exists a continuous real-valued function f: ℝ → ℝ satisfying |f(x + y) − f(x) − f(y)| ≤ |x| + |y| for which |f(x) − T(x)|/|x| is unbounded for every linear mapping T.
Abstract: We find an example to show when the Hyers-Ulam stability does not occur for approximately linear mappings. We also investigate the behavior of such mappings. Let E1, E2 be two real Banach spaces, and let f: E1 → E2 be a mapping that is "approximately linear." S. M. Ulam posed the following problem: "Give conditions in order for a linear mapping near an approximately linear mapping to exist" [5, 6]. The solution to this problem has been given in the following way. Theorem 1: Consider E1, E2 to be two Banach spaces, and let f: E1 → E2 be a mapping such that f(tx) is continuous in t for each fixed x. Assume that there exist θ ≥ 0 and 0 ≤ p < 1 such that ‖f(x + y) − f(x) − f(y)‖ ≤ θ(‖x‖^p + ‖y‖^p) for all x, y ∈ E1. Then there exists a unique linear mapping T: E1 → E2 such that ‖f(x) − T(x)‖ ≤ 2θ‖x‖^p/(2 − 2^p) for all x ∈ E1. The case p > 1 was solved by Z. Gajda using a similar approach [1]. This problem was further considered in [4]. In this article we shall give a very simple example to show that a stability theorem cannot be proved for p = 1: we shall show that there exists a continuous real-valued function f: ℝ → ℝ satisfying |f(x + y) − f(x) − f(y)| ≤ |x| + |y| that admits no such linear approximation.

407 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the convergence properties of the self-organizing feature map algorithm for a simple, but very instructive case: the formation of a topographic representation of the unit interval [0, 1] by a linear chain of neurons.
Abstract: We investigate the convergence properties of the self-organizing feature map algorithm for a simple, but very instructive case: the formation of a topographic representation of the unit interval [0,1] by a linear chain of neurons. We extend the proofs of convergence of Kohonen and of Cottrell and Fort to hold in any case where the neighborhood function, which is used to scale the change in the weight values at each neuron, is a monotonically decreasing function of distance from the winner neuron. We prove that the learning dynamics cannot be described by a gradient descent on a single energy function, but may be described using a set of potential functions, one for each neuron, which are independently minimized following a stochastic gradient descent. We derive the correct potential functions for the one- and multi-dimensional cases, and show that the energy functions given by Tolat (1990) are an approximation which is no longer valid in the case of highly disordered maps or steep neighborhood functions.
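As a concrete illustration of the setting, here is a toy simulation of a linear chain mapping [0, 1]. The Gaussian neighborhood is just one example of a monotonically decreasing neighborhood function, and the chain length, width, and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # neurons in the linear chain
w = rng.uniform(0.0, 1.0, n)              # initial (disordered) weights

def neighborhood(d, width=2.0):
    # any monotonically decreasing function of lattice distance works;
    # a Gaussian is used here purely for illustration
    return np.exp(-(d / width) ** 2)

for _ in range(20000):
    x = rng.uniform()                     # stimulus drawn from [0, 1]
    winner = int(np.argmin(np.abs(w - x)))
    d = np.abs(np.arange(n) - winner)     # distance along the lattice
    w += 0.05 * neighborhood(d) * (x - w) # scaled weight update

print(np.round(w, 2))   # typically an ordered, roughly uniform grid on [0, 1]
```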

360 citations


Journal ArticleDOI
TL;DR: It is shown that use of B-spline receptive field functions in conjunction with more general CMAC weight addressing schemes allows higher-order CMAC neural networks to be developed that can learn both functions and function derivatives.
Abstract: The cerebellar model articulation controller (CMAC) neural network is capable of learning nonlinear functions extremely quickly due to the local nature of its weight updating. The rectangular shape of CMAC receptive field functions, however, produces discontinuous (staircase) function approximations without inherent analytical derivatives. The ability to learn both functions and function derivatives is important for the development of many online adaptive filter, estimation, and control algorithms. It is shown that use of B-spline receptive field functions in conjunction with more general CMAC weight addressing schemes allows higher-order CMAC neural networks to be developed that can learn both functions and function derivatives. This also allows hierarchical and multilayer CMAC network architectures to be constructed that can be trained using standard error back-propagation learning techniques.
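The contrast between rectangular and B-spline receptive fields fits in a few lines. In this sketch the weights are a stand-in, not a trained CMAC: rectangular fields produce jump discontinuities, while quadratic B-spline fields give a C¹ output with an analytic derivative.

```python
import numpy as np

def rect(u):
    # classical binary CMAC receptive field (support [0, 3))
    return ((u >= 0) & (u < 3)).astype(float)

def bspline2(u):
    # uniform quadratic B-spline on [0, 3): C^1, analytic derivative
    return np.where(
        (u >= 0) & (u < 1), u ** 2 / 2,
        np.where((u >= 1) & (u < 2), (-2 * u ** 2 + 6 * u - 3) / 2,
                 np.where((u >= 2) & (u < 3), (3 - u) ** 2 / 2, 0.0)))

centers = np.arange(-3, 13)               # overlapping receptive fields
weights = np.sin(centers / 2.0)           # stand-in for trained weights
x = np.linspace(0.0, 10.0, 2001)

for phi in (rect, bspline2):
    y = sum(wj * phi(x - cj) for wj, cj in zip(weights, centers))
    print(phi.__name__, float(np.max(np.abs(np.diff(y)))))
# rect shows O(1) jumps (a staircase); bspline2 changes only O(dx)
# per grid step, i.e. the approximation is differentiable
```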

298 citations


Proceedings ArticleDOI
01 Jul 1992
TL;DR: This paper begins the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various fault-tolerance models, and presents upper and lower bounds on communication, as well as tradeoffs among resources.
Abstract: A secret-ballot vote for a single proposition is an example of a secure distributed computation. The goal is for m participants to jointly compute the output of some n-ary function (in this case, the sum of the votes), while protecting their individual inputs against some form of misbehavior. In this paper, we initiate the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various fault-tolerance models. We present upper and lower bounds on communication, as well as tradeoffs among resources. First, we consider the "direct sum problem" for communication complexity of perfectly secure protocols: can the communication complexity of securely computing a single function f : F^n → F at k sets of inputs be smaller if all are computed simultaneously than if each is computed individually? We show that the answer depends on the failure model. A factor of O(n/log n) can be gained in the privacy model (where processors are curious but correct); specifically, when f is n-ary addition (mod 2), we show a lower bound of Ω(n² log n) for computing f O(n) times simultaneously. No gain is possible in a slightly stronger fault model (fail-stop model); specifically, when f is n-ary addition over GF(q), we show an exact bound of Θ(kn² log q) for computing f at k sets of inputs simultaneously (for any k ≥ 1). However, if one is willing to pay an additive cost in fault tolerance (from t to t−k+1), then a variety of known non-cryptographic protocols (including "provably unparallelizable" protocols from above!) can be systematically compiled to compute one function at k sets of inputs with no increase in communication complexity. Our compilation technique is based on a new compression idea of polynomial-based multi-secret sharing. Lastly, we show how to compile private protocols into error-detecting protocols at a big savings of a factor of O(n³) (up to a log factor) over the best known error-correcting protocols. This is a new notion of fault-tolerant protocols, and is especially useful when malicious behavior is infrequent, since error-detection implies error-correction in this case.
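The compression idea mentioned at the end lends itself to a small illustration. The following is a toy sketch of polynomial-based multi-secret sharing in the spirit described; the field size, evaluation points, and parameters are all illustrative, and this is not the paper's protocol compiler.

```python
import random

P = 2_147_483_647                 # prime field size (illustrative)

def lagrange_eval(xs, ys, x):
    """Evaluate the unique interpolating polynomial at x, mod P."""
    total = 0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        num = den = 1
        for m, xm in enumerate(xs):
            if m != j:
                num = num * ((x - xm) % P) % P
                den = den * ((xj - xm) % P) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def share_secrets(secrets, t, n):
    """Pack k secrets into ONE degree-(t+k-1) polynomial p with
    p(-j) = secrets[j]; participant i receives p(i).  Any t shares
    reveal nothing; any t+k shares recover all k secrets at once."""
    k = len(secrets)
    xs = [(-j) % P for j in range(k)] + [n + 1 + j for j in range(t)]
    ys = list(secrets) + [random.randrange(P) for _ in range(t)]
    return [(i, lagrange_eval(xs, ys, i)) for i in range(1, n + 1)]

secrets = [11, 22, 33]                        # k = 3 secrets
shares = share_secrets(secrets, t=2, n=6)     # privacy threshold t = 2
xs, ys = map(list, zip(*shares[:5]))          # any t + k = 5 shares
print([lagrange_eval(xs, ys, (-j) % P) for j in range(3)])  # [11, 22, 33]
```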

272 citations


Journal ArticleDOI
TL;DR: A novel network called the validity index network (VI net), derived from radial basis function networks, fits functions and calculates confidence intervals for its predictions, indicating local regions of poor fit and extrapolation.
Abstract: A novel network called the validity index network (VI net) is presented. The VI net, derived from radial basis function networks, fits functions and calculates confidence intervals for its predictions, indicating local regions of poor fit and extrapolation.
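The abstract is terse, so the following sketch shows one plausible reading of the idea rather than the paper's exact formulas: fit an RBF expansion by least squares, attach to each unit the local training error it witnesses, and blend those local variances by the same activations to get a per-query error bar; low total activation flags extrapolation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 6.0, 80)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)

C = np.linspace(0.0, 6.0, 12)                       # RBF centers
phi = lambda x: np.exp(-(np.subtract.outer(x, C) ** 2) / 0.5)

A = phi(X)
w = np.linalg.lstsq(A, y, rcond=None)[0]            # least-squares fit

resid2 = (A @ w - y) ** 2                           # training residuals
unit_var = (A.T @ resid2) / np.maximum(A.sum(axis=0), 1e-9)

xq = np.array([3.0, 9.0])                           # 9.0: extrapolation
Aq = phi(xq)
pred = Aq @ w
conf = np.sqrt(Aq @ unit_var / np.maximum(Aq.sum(axis=1), 1e-9))
print(pred, conf, Aq.sum(axis=1))   # tiny total activation at x = 9.0
```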

263 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that it is possible to identify binary threshold crossing models and binary choice models without imposing any parametric structure either on the systematic function of observable exogenous variables or on the distribution of the random term.
Abstract: In this paper, it is shown that it is possible to identify binary threshold crossing models and binary choice models without imposing any parametric structure either on the systematic function of observable exogenous variables or on the distribution of the random term. This identification result is employed to develop a fully nonparametric maximum likelihood estimator for both the function of observable exogenous variables and the distribution of the random term. The estimator is shown to be strongly consistent, and a two-step procedure for its calculation is developed. The paper also includes examples of economic models that satisfy the conditions that are necessary to apply the results.

262 citations


Journal ArticleDOI
TL;DR: A representation of gradual inference rules of the form "The more X is F, the more Y is G" by means of fuzzy sets turns out to be based on a special implication function already considered in multiple-valued logic.

251 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that if a system of differential equations has a generic solution that satisfies a liouvillian relation, that is, there is a LIOUVILLIAN function of several variables vanishing on the curve defined by this solution, then the system has a nonconstant LIOUVM function that is constant on solution curves in some nonempty open set.
Abstract: Liouvillian functions are functions that are built up from rational functions using exponentiation, integration, and algebraic functions. We show that if a system of differential equations has a generic solution that satisfies a Liouvillian relation, that is, there is a Liouvillian function of several variables vanishing on the curve defined by this solution, then the system has a Liouvillian first integral, that is, a nonconstant Liouvillian function that is constant on solution curves in some nonempty open set. We can refine this result in special cases to show that the first integral must be of a very special form.

01 Jan 1992
TL;DR: In this article, the problem can be reduced to a canonical form, which simplifies the underlying problem and designs are constructed for several contexts with a single variable using geometric and other arguments.
Abstract: Optimal experimental designs for non-linear problems depend on the values of the underlying unknown parameters in the model. For various reasons there is interest in providing explicit formulae for the optimal designs as a function of the unknown parameters. This paper shows that, for a certain class of generalized linear models, the problem can be reduced to a canonical form. This simplifies the underlying problem and designs are constructed for several contexts with a single variable using geometric and other arguments.
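To make the canonical-form idea concrete, consider the locally D-optimal two-point design for logistic regression: on the canonical scale z = α + βx the problem no longer involves the unknown parameters, and the optimal points can be found numerically. The symmetric two-point form is assumed here, and the code is an illustration rather than the paper's derivation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Logistic model on the canonical scale z = alpha + beta * x.
w = lambda z: np.exp(z) / (1.0 + np.exp(z)) ** 2   # information weight

# For a symmetric two-point design at z = +/- c with weight 1/2 each,
# the information matrix is diag(w(c), w(c) * c**2), so D-optimality
# amounts to maximizing w(c)**2 * c**2.
res = minimize_scalar(lambda c: -(w(c) ** 2) * c ** 2,
                      bounds=(0.1, 5.0), method='bounded')
c = res.x
print(round(c, 4))    # canonical support points z = +/- c (about 1.54)

# Map back to the design variable with provisional parameter guesses:
# x = (z - alpha) / beta, so the design depends on (alpha, beta) only
# through this affine change of scale.
```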

Proceedings ArticleDOI
04 May 1992
TL;DR: It seems that for moderate problem complexity the optimal population size for problems coded as bitstrings is approximately the length of the string in bits for sequential machines.
Abstract: A description is given of the results of experiments to find the optimum population size for genetic algorithms as a function of problem complexity. It seems that for moderate problem complexity the optimal population size for problems coded as bitstrings is approximately the length of the string in bits for sequential machines. This result is also consistent with earlier experimentation. In parallel architectures the optimal population size is larger than in the corresponding sequential cases, but the exact figures seem to be sensitive to implementation details.

Journal ArticleDOI
TL;DR: A theoretical means of representing and computing the lineal-path function L(z) for distributions of D-dimensional spheres with arbitrary degree of penetrability using statistical-mechanical concepts is developed and found very good agreement between theory and the Monte Carlo calculations.
Abstract: A fundamental morphological measure of two-phase heterogeneous materials is what we refer to as the lineal-path function L(z). This quantity gives the probability that a line segment of length z is wholly in one of the phases, say phase 1, when randomly thrown into the sample. For three-dimensional systems, we observe that L(z) is also equivalent to the area fraction of phase 1 measured from the projected image onto a plane: a problem of long-standing interest in stereology. We develop a theoretical means of representing and computing the lineal-path function L(z) for distributions of D-dimensional spheres with arbitrary degree of penetrability using statistical-mechanical concepts. In order to test our theoretical results, we determined L(z) from Monte Carlo simulations for the case of three-dimensional systems of spheres and found very good agreement between theory and the Monte Carlo calculations.
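A brute-force Monte Carlo estimate of L(z) follows directly from the definition and makes a useful check on any analytic theory. The sketch below uses fully penetrable (overlapping) spheres in a periodic unit box and tests each segment at a finite set of points, both simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
R, n_sph = 0.05, 300                       # sphere radius and count
centers = rng.uniform(0.0, 1.0, (n_sph, 3))

def in_matrix(p):
    # phase 1 = matrix phase (outside every sphere), periodic distances
    d = np.abs(centers - p)
    d = np.minimum(d, 1.0 - d)
    return bool(np.all(np.sum(d * d, axis=1) > R * R))

def lineal_path(z, trials=2000, pts=25):
    """Estimate P(a random segment of length z lies wholly in phase 1).
    The segment is only tested at `pts` points -- an approximation."""
    hits = 0
    for _ in range(trials):
        a = rng.uniform(0.0, 1.0, 3)                 # random position
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)                       # random direction
        if all(in_matrix((a + s * u) % 1.0) for s in np.linspace(0, z, pts)):
            hits += 1
    return hits / trials

for z in (0.0, 0.05, 0.1, 0.2):
    print(z, lineal_path(z))   # L(0) is the phase 1 volume fraction
```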

Posted Content
TL;DR: In this article, it was shown that a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial.
Abstract: Several researchers have characterized the activation functions under which multilayer feedforward networks can act as universal approximators. We show that most of the characterizations reported thus far in the literature are special cases of the following general result: a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial. We also emphasize the important role of the threshold, asserting that without it the last theorem does not hold.
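The polynomial obstruction is easy to see numerically: if the activation is a polynomial of degree d, every network output is itself a polynomial of degree d in the input, no matter how wide the layer. The random-feature sketch below (architecture and fitting scheme are illustrative, not from the paper) fits |x|, which no fixed-degree polynomial can approximate arbitrarily well; for quadratics the best uniform error on [-1, 1] is 1/8.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200)[:, None]
y = np.abs(x).ravel()                      # target: not a polynomial

def max_error(act, width=50):
    W = rng.normal(size=(1, width))        # random fixed hidden layer
    b = rng.normal(size=width)             # thresholds matter (see text)
    h = act(x @ W + b)
    v = np.linalg.lstsq(h, y, rcond=None)[0]   # fit output weights
    return float(np.max(np.abs(h @ v - y)))

print('tanh     :', max_error(np.tanh))           # typically much smaller
print('degree 2 :', max_error(lambda z: z ** 2))  # floored at >= 1/8
```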

Journal ArticleDOI
TL;DR: This paper investigates GA to rapidly sample the most significant portion or portions of the PPD, when very little prior information is available, and addresses the problem of ‘genetic drift’ which causes the finite GAs to converge to one peak or the other when the algorithm is applied to a highly multimodal fitness function with several peaks of nearly the same height.
Abstract: The seismic waveform inversion problem is usually cast into the framework of Bayesian statistics in which prior information on the model parameters is combined with the data and physics of the forward problem to estimate the a posteriori probability density (PPD) in model space. The PPD is a function of an objective or fitness function computed from the observed and synthetic data. In general, the PPD or the fitness function is multimodal and its shape is unknown. Global optimization methods such as simulated annealing (SA) and genetic algorithms (GA) do not require that the shape of the fitness function be known. In this paper, we investigate GA to rapidly sample the most significant portion or portions of the PPD, when very little prior information is available. First, we use a simple three-operator (selection, crossover and mutation) GA acting on a randomly chosen finite population of haploid binary coded models. We use plane wave transformed synthetic seismic data and a normalized cross-correlation function [E(m)] in the frequency domain as a fitness function. A moderate value of crossover probability, a low value of mutation probability, a high value of update probability and a proper population size are required to reach very close to the global maximum of the fitness function. Next, with an attempt to accelerate convergence we show that the concepts from simulated annealing can be used in stretching of the fitness function, i.e., we use exp [E(m)/T] rather than E(m) as the fitness function, where T is a control parameter analogous to temperature in simulated annealing. By a schemata analysis, we show that at low temperatures, schemata with above average fitness values are reproduced in large numbers causing a much more rapid convergence of the algorithm. A high value of temperature T assigns nearly equal selection probability to most of the schemata and thus retains diversity among the members of the population. Thus a GA with a step function type cooling schedule (very high temperature in the beginning followed by rapid cooling to a very low temperature) improves the performance dramatically: high values of the fitness function are obtained rapidly using only half as many models as would be required by a conventional GA. Similar performance could also be achieved by first using a high mutation probability and then decreasing the mutation probability to a very low value, while retaining the same low temperature throughout. We also address the problem of 'genetic drift' which causes the finite GAs to converge to one peak or the other when the algorithm is applied to a highly multimodal fitness function with several peaks of nearly the same height. A parallel genetic algorithm based on the concept of 'punctuated equilibria' is implemented to circumvent the problem. We run several GAs each with a finite subpopulation in parallel and collect many good models from each one of these runs. These are then used to grasp the most significant portion(s) of the PPD in model space. We then compute the weighted mean model and use the derived good models to estimate uncertainty in the derived model parameters.
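The temperature-stretching argument can be shown in a few lines: with exp[E(m)/T] as the fitness, a high T spreads selection probability almost uniformly, while a low T concentrates it on the best members. The fitness values below are made up for illustration.

```python
import numpy as np

E = np.array([0.91, 0.92, 0.95, 1.00])   # fitness of four models

for T in (1.0, 0.01):
    p = np.exp(E / T)                    # stretched fitness exp(E/T)
    p /= p.sum()                         # selection probabilities
    print(T, np.round(p, 3))
# T = 1.0  -> nearly uniform: diversity is retained
# T = 0.01 -> the best model dominates: rapid (greedy) convergence
```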

Journal ArticleDOI
TL;DR: An explicit approximation formula is proposed which is noise-resistant, can be easily modified with the patterns, and can be applied to approximate a function defined implicitly, which is useful in control theory.

Journal ArticleDOI
TL;DR: In this paper, a semi-parametric method is developed for assessing publication bias prior to performing a meta-analysis, where summary estimates for the individual studies in the meta-analysis are assumed to have known distributional form.
Abstract: A semi-parametric method is developed for assessing publication bias prior to performing a meta-analysis. Summary estimates for the individual studies in the meta-analysis are assumed to have known distributional form. Selective publication is modeled using a nonparametric weight function, defined on the two-sided p-value scale. The shape of the estimated weight function provides visual evidence of the presence of bias, if it exists, and observed trends may be tested using rank order statistics or likelihood ratio tests. The method is intended as an exploratory technique prior to embarking on a standard meta-analysis.
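The weight-function idea is easy to visualize with simulated data. The following is only a toy illustration; the selection rule and the crude binned estimate are assumptions, not the paper's semi-parametric maximum-likelihood estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# simulate study z-statistics under a common true effect
z = rng.normal(loc=1.0, size=20000)
p = 2 * stats.norm.sf(np.abs(z))          # two-sided p-values

# hypothetical selection: significant results always published,
# non-significant ones only 30% of the time
published = (p < 0.05) | (rng.uniform(size=p.size) < 0.3)

# a crude step-function weight estimate on the p-value scale:
# compare observed counts with counts expected from the full sample
bins = [0.0, 0.05, 0.2, 0.5, 1.0]
obs, _ = np.histogram(p[published], bins)
exp, _ = np.histogram(p, bins)
print(np.round(obs / exp, 2))   # the drop after p = 0.05 reveals the bias
```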

Book ChapterDOI
01 Sep 1992
TL;DR: In a wide class of situations of uncertainty, the available information concerning the event space can be described as follows: there exists a true probability that is only known to belong to a certain set P of probabilities; moreover, the lower envelope f of P is a belief function and characterizes P, i.e., P is the set of all probabilities that dominate f.
Abstract: In a wide class of situations of uncertainty, the available information concerning the event space can be described as follows. There exists a true probability that is only known to belong to a certain set P of probabilities; moreover, the lower envelope f of P is a belief function, i.e., a nonadditive measure of a particular type, and characterizes P, i.e., P is the set of all probabilities that dominate f. The effect of conditioning on such situations is examined. The natural conditioning rule in this case is the Bayesian rule. An explicit expression for the Möbius transform φ^E of f^E in terms of φ, the transform of f, is found, and an earlier finding that the lower envelope f^E of P^E is itself a belief function is derived from it. However, f^E no longer characterizes P^E unless f satisfies further stringent conditions that are both necessary and sufficient. The difficulties resulting from this fact are discussed, and suggestions to cope with them are made.
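The Möbius machinery itself fits in a screenful. Below is a generic sketch (frame and masses invented for illustration) of passing between a belief function f and its Möbius transform φ; the paper's explicit expression for the conditioned transform φ^E is not reproduced here.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

FRAME = frozenset({'a', 'b', 'c'})

# a belief function given via its Mobius transform (mass assignment) m
m = {frozenset({'a'}): 0.3, frozenset({'b', 'c'}): 0.5, FRAME: 0.2}

def bel(A):
    # belief of A = total mass of the subsets of A   (f in the abstract)
    return sum(v for B, v in m.items() if B <= A)

def mobius(f):
    # inversion: phi(A) = sum over B <= A of (-1)^{|A \ B|} f(B)
    out = {}
    for A in map(frozenset, subsets(FRAME)):
        out[A] = sum((-1) ** len(A - frozenset(B)) * f(frozenset(B))
                     for B in subsets(A))
    return {A: round(v, 10) for A, v in out.items() if round(v, 10) != 0}

print(mobius(bel) == m)   # round-trip recovers the masses: True
```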

Journal ArticleDOI
TL;DR: The Coupled Cluster Green's Function (CCGF) method as mentioned in this paper is intimately connected to both Coupled Clustered Linear Response Theory (CCLRT) and the Normal CoupledCluster Method (NCCM).
Abstract: Diagrammatic and Coupled Cluster techniques are used to develop an approach to the single-particle Green's function G which concentrates on G directly rather than first approximating the irreducible self-energy and then solving Dyson's equation. As a consequence the ionization and attachment parts of the Green's function satisfy completely decoupled sets of equations. The proposed Coupled Cluster Green's Function method (CCGF) is intimately connected to both Coupled Cluster Linear Response Theory (CCLRT) and the Normal Coupled Cluster Method (NCCM). These relations are discussed in detail. © 1992 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: In this article, the generalized SU(3) version of the Nambu and Jona-Lasinio model is used to discuss properties of mesons, constituent quarks and vacuum structure as a function of density and temperature in compressed matter.

Journal ArticleDOI
TL;DR: In this paper, the periodic stationary solutions of a model nonlinear evolution equation simulating the propagation of short-wave perturbations in a relaxing medium are studied, and a method for determining the nonlinear interaction between solitary waves is suggested.
Abstract: The periodic stationary solutions of a model nonlinear evolution equation simulating the propagation of short-wave perturbations in a relaxing medium are studied. Solutions expressed by a multiple-valued function are shown to exist. A method for determining the nonlinear interaction between solitary waves is suggested. An example of a collision of solitons is given.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the most appropriate form for urban population density models is the inverse power function, contrary to conventional practice, which is largely based upon the negative exponential.
Abstract: In this paper, we argue that the most appropriate form for urban population density models is the inverse power function, contrary to conventional practice, which is largely based upon the negative exponential. We first show that the inverse power function has several theoretical properties which have hitherto gone unremarked in the literature. Our main argument, however, is based on the notion that a density function should describe the extent to which the space available for urban development is filled. To this end, we introduce ideas from urban allometry and fractal geometry to demonstrate that the inverse power model is the only function which embodies the fractal property of self-similarity which we consider to be a basic characteristic of urban form and density. In short, we show that the distance parameter α of the inverse power model is a measure of the extent to which space is filled, and that its value is determined by the basic relation D + α = 2, where D is the fractal dimension of the city in question.
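One way to see the stated relation, assuming a radially symmetric density ρ(r) = K r^(−α) with α < 2 and fractal scaling N(R) ∝ R^D for the population within radius R:

```latex
N(R) = \int_0^R K r^{-\alpha}\, 2\pi r \, dr
     = \frac{2\pi K}{2-\alpha}\, R^{\,2-\alpha}
\quad\Longrightarrow\quad
D = 2-\alpha, \ \text{i.e.}\ D + \alpha = 2 .
```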

Journal ArticleDOI
TL;DR: The convergence behavior of an adaptive feedforward active control system is studied and it is shown that some modes not only converge slowly but also require an excessive control effort for complete convergence.
Abstract: The convergence behavior of an adaptive feedforward active control system is studied. This system adjusts the outputs of a number of secondary sources to minimize a cost function comprising a combination of the sum of mean-square signals from a number of error sensors (the control error) and the sum of the mean-square signals fed to the secondary sources (the control effort). A steepest descent algorithm which performs this function is derived and analyzed. It is shown that some modes not only converge slowly but also require an excessive control effort for complete convergence. This ill-conditioned behavior can be controlled by the proper choice of the cost function minimized. Laboratory experiments using a 16-loudspeaker, 32-microphone control system to control the harmonic sound in an enclosure are presented. The behavior of the practical system is accurately predicted from the theoretical analysis of the adaptive algorithm. The effect of errors in the assumed transfer matrix used by the steepest descent algorithm is briefly discussed.
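A toy frequency-domain version of the cost function shows the role of the effort term. The transfer matrix, disturbance, step size, and effort weight below are random or illustrative values, not the experimental system.

```python
import numpy as np

rng = np.random.default_rng(5)
L, M = 16, 32                         # 16 sources, 32 error sensors
G = rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))
d = rng.normal(size=M) + 1j * rng.normal(size=M)    # primary field

def steepest_descent(beta, mu=1e-3, steps=20000):
    """Minimize J = e^H e + beta u^H u with e = d + G u."""
    u = np.zeros(L, dtype=complex)
    for _ in range(steps):
        e = d + G @ u
        u -= mu * (G.conj().T @ e + beta * u)       # gradient step
    return np.linalg.norm(d + G @ u) ** 2, np.linalg.norm(u) ** 2

for beta in (0.0, 10.0):
    err, effort = steepest_descent(beta)
    print(beta, round(err, 2), round(effort, 2))
# beta > 0 accepts a slightly higher residual error in exchange for
# less control effort, and it regularizes the slow, effort-hungry
# modes associated with the small singular values of G
```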

Journal ArticleDOI
TL;DR: It is shown that the numerator coefficients of the optimal approximant satisfy a weighted least squares problem and, on this basis, a two-step iterative algorithm is developed combining a least squares solver with a gradient minimizer.

Journal ArticleDOI
TL;DR: In this paper, a tabulation of all of the two-sphere resistance functions at present needed in investigations of the mechanics of suspensions is presented, where each function is calculated first as a series in inverse powers of the center-to-center separation and then, in order to handle the singular behavior caused by lubrication forces, the asymptotic form which the function takes when the spheres are close is combined with the series expansion into a single expression valid for all separations of the spheres.
Abstract: The resistance functions that relate the forces, couples, and stresslets exerted on ambient fluid by two unequal rigid spheres in low Reynolds number flow are calculated for the case in which the spheres are immersed in an ambient linear flow. In conjunction with earlier works, this paper completes the tabulation of all of the two-sphere resistance functions at present needed in investigations of the mechanics of suspensions. Each function is calculated first as a series in inverse powers of the center-to-center separation, and then, in order to handle the singular behavior caused by lubrication forces, the asymptotic form which the function takes when the spheres are close is combined with the series expansion into a single expression valid for all separations of the spheres.


Journal ArticleDOI
TL;DR: Inflationary models predict a definite, model independent, angular dependence for the three-point correlation function of $\Delta T/T$ at large angles (greater than $\sim 1^\circ$), as mentioned in this paper.
Abstract: Inflationary models predict a definite, model independent, angular dependence for the three-point correlation function of $\Delta T/T$ at large angles (greater than $\sim 1^\circ$) which we calculate. The overall amplitude is model dependent and generically unobservably small, but may be large in some specific models. We compare our results with other models of nongaussian fluctuations.

Journal ArticleDOI
01 Jun 1992
TL;DR: The suitability of genetic algorithms for the model/objective-function/search procedure is presented and the delineation of left ventricular boundaries in apical 4-chamber echocardiograms is used as an illustrative exemplar.
Abstract: We describe the application of genetic algorithms in model-based image interpretation. The delineation of left ventricular boundaries in apical 4-chamber echocardiograms is used as an illustrative exemplar. The suitability of genetic algorithms for the model/objective-function/search procedure is presented.

Journal ArticleDOI
TL;DR: In this paper, a duality model of production is developed that permits risk aversion and price uncertainty, and the linear mean-variance framework employed is tractable for empirical research, in contrast to duality models of risk based on a generalized expected utility function.
Abstract: A duality model of production is developed that permits risk aversion and price uncertainty. The linear mean‐variance framework employed is tractable for empirical research, in contrast to duality models of risk based on a generalized expected utility function. The framework is more general than in standard price certainty models while retaining the simplicity needed for empirical research.