
Showing papers on "Function (mathematics) published in 1986"


Journal ArticleDOI
TL;DR: In this paper, a constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented: a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, into polynomial-time computable functions.
Abstract: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to polynomial-time computable functions ƒr: {1, … , 2k} → {1, … , 2k}. These ƒr's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory.
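
As an illustration of the tree construction such a generator can use, here is a minimal Python sketch of a GGM-style evaluation: a length-doubling generator is applied along the bit path of the argument x, starting from the random seed r. The use of SHA-256 as a stand-in for the length-doubling generator (rather than one built from the one-way function g) is an assumption for illustration only, not the paper's construction.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Stand-in length-doubling generator G(s) = (G0(s), G1(s)).
    # A real construction would use a generator built from the one-way
    # function g; SHA-256 here is only illustrative.
    h0 = hashlib.sha256(seed + b"\x00").digest()
    h1 = hashlib.sha256(seed + b"\x01").digest()
    return h0, h1

def f_r(r: bytes, x: int, k: int) -> bytes:
    # GGM-style tree evaluation: walk the k bits of x from the root seed r,
    # taking the left or right half of the generator output at each level.
    s = r
    for i in reversed(range(k)):
        left, right = prg(s)
        s = right if (x >> i) & 1 else left
    return s

print(f_r(b"a random k-bit seed", x=5, k=16).hex())
```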

2,043 citations


Journal ArticleDOI
TL;DR: The new definition of expectation generalizes the integral of a set-valued function, and a Lebesgue-dominated-convergence-type theorem is derived by considering a suitable generalization of the Hausdorff metric.

1,814 citations


Journal ArticleDOI
TL;DR: The problem solved is the inversion of this linear functional transformation; that is, the following questions are answered: can every line function satisfying suitable regularity conditions be regarded as constructed in this way?
Abstract: When one integrates a function of two variables x, y (a point function f(P) in the plane), subject to suitable regularity conditions, along an arbitrary straight line g, one obtains the integral values F(g), a line function. In Part A of the present paper the problem which is solved is the inversion of this linear functional transformation; that is, the following questions are answered: can every line function satisfying suitable regularity conditions be regarded as constructed in this way? If so, is f uniquely determined by F, and how can f be calculated? In Part B the dual problem of calculating a line function F(g) from its point mean values f(P) is solved in a certain sense. Finally, in Part C certain generalizations are discussed, prompted by consideration of non-Euclidean manifolds as well as higher-dimensional spaces. The treatment of these problems, themselves of interest, gains enhanced importance through the numerous relationships that exist between this topic and the theory of logarithmic and Newtonian potentials. These are mentioned at appropriate places in the text.
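
For a concrete sense of the line function F(g) obtained from a point function f(P), the following sketch numerically integrates f along a line parameterized by its normal angle θ and signed distance p from the origin. The parameterization, the test function, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def line_integral(f, theta, p, t_max=6.0, n=4001):
    # Line g: x*cos(theta) + y*sin(theta) = p, parameterized by arc length t.
    t = np.linspace(-t_max, t_max, n)
    x = p * np.cos(theta) - t * np.sin(theta)
    y = p * np.sin(theta) + t * np.cos(theta)
    return np.sum(f(x, y)) * (t[1] - t[0])   # Riemann-sum approximation of F(g)

# Point function f(P): an isotropic Gaussian, whose line integrals depend
# only on the distance p of the line from the origin: F(g) = sqrt(pi)*exp(-p^2).
f = lambda x, y: np.exp(-(x**2 + y**2))
for p in (0.0, 0.5, 1.0):
    print(p, line_integral(f, theta=0.3, p=p))
```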

726 citations


Journal ArticleDOI
TL;DR: The Secchi disk is a circular white disk that is lowered into a natural body of water by a human observer until it disappears from view; the depth of disappearance is a visual measure of the clarity of the water.
Abstract: The Secchi disk is a circular white disk that is lowered into a natural body of water by a human observer until it disappears from view. The depth of disappearance is a visual measure of the clarity of the water. This review examines the physical and physiological basis of the Secchi disk procedure. The theory of the white disk is detailed to show the underlying assumptions and the consequent strengths and limitations of the procedure. The theory shows how to use a calibrated Secchi disk to predict illuminance levels as a function of depth. In particular it is shown how to predict the euphotic depth of a medium. Ten laws of the Secchi disk are stated verbally and in mathematical form. The laws show how variations in properties of the disk and the surrounding light field affect the depth of disappearance of the disk. Theory and examples lead to the following three main conclusions of this paper: (i) the Secchi disk reading z_SD (in meters) yields a quantitative estimate of a single apparent optical property (α + K) (in m⁻¹) of a natural hydrosol, where α is the (photopic) beam attenuation coefficient and K the (photopic) diffuse attenuation coefficient of the medium; (ii) the primary function of a Secchi disk is to provide a simple visual index of water clarity via z_SD or α + K; (iii) to extend the use of the Secchi disk by auxiliary objective electronic measurements of α or of K, or both, is to risk obviating or abusing this primary function.
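
A small worked example of conclusion (i), with purely hypothetical numbers: if the theory's coupling between the reading and (α + K) is taken as a single constant Γ (its value here is an assumption for illustration, not a figure from the paper), a Secchi reading converts directly into an estimate of the apparent optical property.

```python
# Hypothetical illustration of conclusion (i); gamma is an assumed coupling
# constant relating the reading z_SD to (alpha + K).
z_sd = 4.0                      # Secchi depth reading, meters
gamma = 8.7                     # assumed coupling constant (illustrative only)
alpha_plus_K = gamma / z_sd     # estimated apparent optical property, 1/m
print(f"alpha + K ~ {alpha_plus_K:.2f} per meter")
```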

458 citations


Journal ArticleDOI
TL;DR: In this article, a new mixed integral equation for the pair distribution function of classical fluids is proposed, which interpolates continuously between the soft core mean spherical closure at short distances, and the hypernetted chain closure at large distances.
Abstract: We propose a new mixed integral equation for the pair distribution function of classical fluids, which interpolates continuously between the soft core mean spherical closure at short distances, and the hypernetted chain closure at large distances. Thermodynamic consistency between the virial and compressibility equations of state is achieved by varying a single parameter in a suitably chosen switching function. The new integral equation generalizes a recent suggestion by Rogers and Young to the case of realistic pair potentials containing an attractive part. When compared to available computer simulation data, the new equation is found to yield excellent results for the thermodynamics and pair structure of a wide variety of potential models (including atomic and ionic fluids and mixtures) over an extensive range of temperatures and densities. The equation can also be used to invert structural data to extract effective pair potentials, with reasonable success.

405 citations


Journal ArticleDOI
TL;DR: In this paper, each player chooses a "metastrategy" that specifies her act as a function of her belief, and a restriction on beliefs "off the equilibrium path" is developed.

390 citations


Journal ArticleDOI
TL;DR: This paper considers a single-item, periodic-review inventory model with uncertain demands, and shows under a few additional unrestrictive assumptions that a modified base-stock policy is optimal under the discounted cost criterion, both for finite and infinite planning horizons.
Abstract: This paper considers a single-item, periodic-review inventory model with uncertain demands. We assume a finite production capacity in each period. With stationary data, a convex one-period cost function and a continuous demand distribution, we show under a few additional unrestrictive assumptions that a modified base-stock policy is optimal under the discounted cost criterion, both for finite and infinite planning horizons. In addition we characterize the optimal base-stock levels in several ways.
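
The structure of such a policy is simple to state in code: raise the inventory position toward the base-stock level, but never order more than the period's capacity. This is a sketch of the policy form described in the abstract; the function and parameter names are invented, and choosing the optimal base-stock level is the substance of the paper, not shown here.

```python
def modified_base_stock_order(inventory_position: float,
                              base_stock_level: float,
                              capacity: float) -> float:
    """Order up to the base-stock level if capacity permits; otherwise
    produce at full capacity (illustrative names, not the paper's notation)."""
    shortfall = max(base_stock_level - inventory_position, 0.0)
    return min(shortfall, capacity)

print(modified_base_stock_order(inventory_position=3.0,
                                base_stock_level=10.0,
                                capacity=5.0))   # 5.0: capacity-limited order
```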

354 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the policy function which describes an optimal path can be of any type, for discount factors small enough, and that the stability of the accumulation paths depends on the discount parameter.

349 citations


Patent
Hans Evert Peter Köhler1
07 Mar 1986
TL;DR: In this article, the hand apparatus has a plurality of module keys (101-105) by means of which the user can get access to a specific module, such as a TV-receiver, a teletext decoder, a videorecorder, a VLP-player, an audiosystem, an audiotuner, a compact disc player, an audiorecorder, etc.
Abstract: The hand apparatus has a plurality of module keys (101-105) by means of which the user can get access to a specific module, such as a TV-receiver, a teletext decoder, a videorecorder, a VLP-player, an audiosystem, an audiotuner, a compact disc player, an audiorecorder etc. In addition, a dot-matrix display element (300) is provided around which a plurality of multi-functional keys (301-312) are arranged. The control function of such a key depends on the module key operated and this function is shown on the display element for each of these keys.
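
The control scheme can be pictured as a two-level lookup: the module key selects a mode, and each multi-functional key then gets both its function and its display label from that mode. The mapping below is a sketch with invented labels; the actual key assignments are not specified in the abstract.

```python
# Sketch of the described control scheme (labels invented for illustration):
# the selected module key determines both the function of each
# multi-functional key (301-312) and the label shown on the display.
SOFT_KEY_MAP = {
    "TV":  {301: "volume +", 302: "volume -", 303: "channel +", 304: "channel -"},
    "VCR": {301: "play",     302: "stop",     303: "rewind",    304: "record"},
    "CD":  {301: "play",     302: "stop",     303: "next",      304: "previous"},
}

def press(module_key: str, soft_key: int) -> str:
    label = SOFT_KEY_MAP[module_key][soft_key]   # also what the display shows
    return f"[{module_key}] key {soft_key} -> {label}"

print(press("VCR", 303))
```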

260 citations


Journal ArticleDOI
TL;DR: In this article, a variance inequality for functions of independent random variables is derived and applied to sharpen known variance bounds in the long common subsequence problem.
Abstract: If $S(x_1, x_2,\cdots, x_n)$ is any function of $n$ variables and if $X_i, \hat{X}_i, 1 \leq i \leq n$ are $2n$ i.i.d. random variables then $\operatorname{var} S \leq \frac{1}{2} E \sum^n_{i=1} (S - S_i)^2$ where $S = S(X_1, X_2,\cdots, X_n)$ and $S_i$ is given by replacing the $i$th observation with $\hat{X}_i$, so $S_i = S(X_1, X_2,\cdots, \hat{X}_i,\cdots, X_n)$. This is applied to sharpen known variance bounds in the long common subsequence problem.
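
The inequality is easy to check numerically for a particular choice of S. The sketch below uses S(x_1, …, x_n) = max_i x_i with uniform variables (an illustrative choice, not one from the paper) and compares var S with the right-hand side by Monte Carlo.

```python
import numpy as np

# Monte Carlo check of var S <= (1/2) E sum_i (S - S_i)^2 for the
# illustrative choice S(x_1, ..., x_n) = max_i x_i with uniform variables.
rng = np.random.default_rng(0)
n, trials = 10, 20000

X     = rng.uniform(size=(trials, n))
X_hat = rng.uniform(size=(trials, n))      # independent copies of the X_i

S = X.max(axis=1)
sum_sq = np.zeros(trials)
for i in range(n):
    Xi = X.copy()
    Xi[:, i] = X_hat[:, i]                 # replace the i-th observation
    sum_sq += (S - Xi.max(axis=1)) ** 2

print(S.var(), "<=", 0.5 * sum_sq.mean())  # the bound holds
```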

257 citations



Journal ArticleDOI
TL;DR: It is shown that a ‘continuous’ function from a finite block of lattice points into itself has an ‘almost-fixed’ point P such that f(P) is a neighbor of P (diagonal neighbors allowed).

Journal ArticleDOI
TL;DR: In this paper, the authors investigated how the choices of prefilters, noise models, sampling interval, and prediction horizon (i.e., the use of k-step ahead prediction methods) influence the estimate.
Abstract: Estimation of transfer functions of linear systems is one of the most common system identification problems. Several different design variables, chosen by the user for the identification procedure, affect the properties of the resulting estimate. In this paper it is investigated how the choices of prefilters, noise models, sampling interval, and prediction horizon (i.e., the use of k-step ahead prediction methods) influence the estimate. An important aspect is that the true system is not assumed to be exactly represented within the chosen model set. The estimate will thus be biased. It is shown how the distribution of bias in the frequency domain is governed by a weighting function, which emphasizes different frequency bands. The weighting function, in turn, is a result of the previously listed design variables. It is shown, e.g., that the common least-squares method has a tendency to emphasize high frequencies, and that this can be counteracted by prefiltering. It is also shown that, asymptotically, it is only the prediction horizon itself, and not how it is split up into sampling interval times number of predicted sampling instants, that affects this weighting function.
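
A minimal sketch of the prefiltering idea: filter both input and output with the same filter before forming the least-squares (ARX) regression, which moves the emphasis of the fit toward the filter's passband. The simulated system, model orders, and prefilter below are illustrative assumptions; this shows only the mechanism the paper discusses, not its analysis.

```python
import numpy as np
from scipy.signal import lfilter

def arx_least_squares(u, y, na=2, nb=2, prefilter=None):
    """Fit y(t) + a1*y(t-1) + ... = b1*u(t-1) + ... by least squares.
    If a prefilter (b_f, a_f) is given, both u and y are filtered with it
    first, re-weighting where in frequency the bias of the fit ends up."""
    if prefilter is not None:
        b_f, a_f = prefilter
        u = lfilter(b_f, a_f, u)
        y = lfilter(b_f, a_f, y)
    N, start = len(y), max(na, nb)
    Phi = np.column_stack(
        [-y[start - i: N - i] for i in range(1, na + 1)] +
        [ u[start - i: N - i] for i in range(1, nb + 1)])
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta

# Illustrative use: white-noise input, a simple system, and a low-pass prefilter.
rng = np.random.default_rng(1)
u = rng.standard_normal(5000)
y = lfilter([0.0, 0.5, 0.25], [1.0, -0.8], u) + 0.1 * rng.standard_normal(5000)
print(arx_least_squares(u, y, prefilter=([0.3], [1.0, -0.7])))
```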

Journal ArticleDOI
TL;DR: It is shown that the Chvátal rank of a polyhedron can be bounded above by a function of the matrix A, independent of the vector b, a result which, as Blair observed, is equivalent to Blair and Jeroslow's theorem that ‘each integer programming value function is a Gomory function.’
Abstract: We consider integer linear programming problems with a fixed coefficient matrix and varying objective function and right-hand-side vector. Among our results, we show that, for any optimal solution to a linear program max{wx: Ax ≤ b}, the distance to the nearest optimal solution to the corresponding integer program is at most the dimension of the problem multiplied by the largest subdeterminant of the integral matrix A. Using this, we strengthen several integer programming ‘proximity’ results of Blair and Jeroslow; Graver; and Wolsey. We also show that the Chvátal rank of a polyhedron {x: Ax ≤ b} can be bounded above by a function of the matrix A, independent of the vector b, a result which, as Blair observed, is equivalent to Blair and Jeroslow's theorem that ‘each integer programming value function is a Gomory function.’
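
The proximity bound is easy to evaluate for a small example: compute the largest absolute subdeterminant Δ of the integral matrix A and multiply by the dimension n. The brute-force enumeration and the sample matrix below are illustrative only.

```python
import numpy as np
from itertools import combinations

def largest_subdeterminant(A):
    """Largest absolute determinant over all square submatrices of A
    (brute force; fine only for tiny illustrative matrices)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    best = 0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = abs(int(round(np.linalg.det(A[np.ix_(rows, cols)]))))
                best = max(best, d)
    return best

A = [[2, 1, 0],
     [1, 3, 1]]
delta = largest_subdeterminant(A)                        # here Delta = 5
print("proximity bound n * Delta =", len(A[0]) * delta)  # 3 * 5 = 15
```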

Journal ArticleDOI
TL;DR: When constants are added to the pure lambda calculus, run-time errors can occur if the constants are used improperly, for example, if an attempt is made to apply a natural number as if it were a function or if the first argument of a conditional is not a truth value.
Abstract: When constants are added to the pure lambda calculus, run-time errors can occur if the constants are used improperly, for example, if an attempt is made to apply a natural number as if it were a function or if the first argument of a conditional is not a truth value. We consider "types" as somehow being or generating constraints on expressions. A consistent type discipline ensures that any expression satisfying the constraints will not produce a "run-time error."
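
The kind of run-time error the abstract refers to is easy to demonstrate in any untyped setting. The Python snippet below (an analogy, not the calculus of the paper) applies a natural number as if it were a function and only fails when the program runs, which is exactly what a static type discipline rules out.

```python
# Untyped evaluation: nothing stops us from applying a number as if it
# were a function; the error only shows up when the program runs.
def apply(f, x):
    return f(x)

try:
    apply(3, 4)          # "3 4": a natural number used as a function
except TypeError as e:
    print("run-time error:", e)

# A type discipline rejects the expression statically instead: "3" has a
# number type, not a function type, so "3 4" is never allowed to run.
```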

Journal ArticleDOI
TL;DR: In this article, the authors present a practical computer implementation of a technique which dramatically speeds up the convergence of the infinite series Green's function associated with the Helmholtz operator in the case of periodic structures.


Journal ArticleDOI
TL;DR: It is shown that the algorithm converges to the characteristic as the number of observations tends to infinity; for sufficiently smooth characteristics, the rate of convergence is O(n^{-2/5}) in probability.
Abstract: In this note a discrete-time Hammerstein system is identified. The weighting function of the dynamical subsystem is recovered by the correlation method. The main results concern estimation of the nonlinear memoryless subsystem. No conditions concerning the functional form of the transform characteristic of the subsystem are made, and an algorithm for estimation of the characteristic is presented. The algorithm is a nonparametric kernel estimate of regression functions calculated from dependent data. It is shown that the algorithm converges to the characteristic as the number of observations tends to infinity. For sufficiently smooth characteristics, the rate of convergence is O(n^{-2/5}) in probability.
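
A sketch of the kind of nonparametric estimate involved: a Nadaraya-Watson kernel regression of the output on the input recovers the memoryless characteristic (up to constants contributed by the dynamic part). The simulated system, Gaussian kernel, and bandwidth are illustrative assumptions, not the note's exact algorithm or conditions.

```python
import numpy as np

def kernel_estimate(u, y, grid, h):
    """Nadaraya-Watson kernel regression of y on u, evaluated on `grid`
    (Gaussian kernel and bandwidth h are illustrative choices)."""
    K = np.exp(-0.5 * ((grid[:, None] - u[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

# Hammerstein-like data: static nonlinearity followed by linear dynamics.
rng = np.random.default_rng(0)
u = rng.uniform(-2, 2, 4000)
v = np.tanh(2 * u)                       # unknown memoryless characteristic
y = v.copy()
y[1:] += 0.5 * v[:-1]                    # simple FIR dynamic part
y += 0.1 * rng.standard_normal(u.size)   # measurement noise

grid = np.linspace(-2, 2, 9)
print(np.round(kernel_estimate(u, y, grid, h=0.15), 2))
# Should track tanh(2*grid): the FIR term averages out for this symmetric input.
```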

Journal ArticleDOI
TL;DR: The results of three experiments demonstrated that the visual system calibrates motion parallax according to absolute-distance information in processing depth, and the mean apparent depths were proportional to the viewing distance at 40 cm and 80 cm but not at 160 cm and 320 cm.
Abstract: The results of three experiments demonstrated that the visual system calibrates motion parallax according to absolute-distance information in processing depth. The parallax was created by yoking the relative movement of random dots displayed on a cathode-ray tube to the movements of the head. In Experiment 1, at viewing distances of 40 cm and 80 cm, observers reported the apparent depth produced by motion parallax equivalent to a binocular disparity of 0.47 degree. The mean apparent depth at 80 cm was 2.6 times larger than at 40 cm. In Experiment 2, again at viewing distances of 40 cm and 80 cm, observers adjusted the extent of parallax so that the apparent depth was 7.0 cm. The mean extent of parallax at 80 cm was 31% of that at 40 cm. In Experiment 3, distances ranged from 40 cm to 320 cm, and a wide range of parallax was used. As distance and parallax increased, the perception of a rigid three-dimensional surface was accompanied by rocking motion; perception of depth was replaced by perception of motion in some trials at 320 cm. Moreover, the mean apparent depths were proportional to the viewing distance at 40 cm and 80 cm but not at 160 cm and 320 cm.

Journal ArticleDOI
TL;DR: The empirical characteristic function (CF) is the Fourier transform of the sample distribution function; the values of its real and imaginary parts at some real number t are merely sample means of cosine and sine functions of the data, the observations being multiplied by t. The power of the CF test compares favorably with that of competing omnibus tests when the data are continuous.
Abstract: The empirical characteristic function (CF) is the Fourier transform of the sample distribution function. The values of its real and imaginary parts at some real number t are merely sample means of cosine and sine functions of the data, the observations being multiplied by t. Given independent samples from two populations, we develop a test for the two-sample problem which is based on a quadratic form in differences between the respective components of the empirical CFs of the two samples. The power of the CF test compares favorably with that of competing omnibus tests when the data are continuous. In the discrete case the CF procedure is also applicable and quite successful; and in this application it appears to have no competitors.
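
The ingredients of such a test are straightforward to compute. The sketch below evaluates the empirical CFs of two samples on a grid of t values and forms a plain (unweighted) sum of squared differences; the grid and the unweighted quadratic form are simplifying assumptions, whereas the paper's statistic uses a particular quadratic form.

```python
import numpy as np

def ecf(x, t):
    """Real and imaginary parts of the empirical characteristic function of
    the sample x at the points t (sample means of cos(t*x) and sin(t*x))."""
    tx = np.outer(t, x)
    return np.cos(tx).mean(axis=1), np.sin(tx).mean(axis=1)

def cf_statistic(x, y, t):
    """Unweighted quadratic form in the differences of the two empirical CFs
    (a sketch of the ingredients, not the paper's exact statistic)."""
    rx, ix = ecf(x, t)
    ry, iy = ecf(y, t)
    return np.sum((rx - ry) ** 2 + (ix - iy) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.5, 3.0, 6)
print(cf_statistic(rng.normal(0, 1, 300), rng.normal(0, 1, 300), t))   # small
print(cf_statistic(rng.normal(0, 1, 300), rng.normal(1, 1, 300), t))   # larger
```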

Journal ArticleDOI
Han-Fu Chen1, Lei Guo1
TL;DR: In this paper, the convergence rate of least-squares identification for the multidimensional ARMAX model is established under some decaying excitation conditions which are satisfied if both input and output do not grow too fast and the attenuating excitation technique is applied.
Abstract: The strong consistency and the convergence rate of least-squares identification for the multidimensional ARMAX model are established under some decaying excitation conditions which are satisfied if both input and output do not grow too fast and the attenuating excitation technique is applied. The parameter-identification results are applied to adaptive-control systems with a quadratic loss function. The rate of convergence of the loss function to its minimum is also obtained.

Journal ArticleDOI
Hiroshi Inoue1
TL;DR: In this article, a method of multivariate smooth fitting of scattered, noisy data using a cubic B-spline expansion with equispaced knots is proposed.
Abstract: A new method of multivariate smooth fitting of scattered, noisy data using cubic B-splines was developed. An optimum smoothing function was defined to minimize the 𝓁2 norm composed of the data residuals and the first and the second derivatives, which represent the total misfit, fluctuation, and roughness of the function, respectively. The function is approximated by a cubic B‐spline expansion with equispaced knots. The solution can be interpreted in three ways. From the stochastic viewpoint, it is the maximum‐likelihood estimate among the admissible functions under the a priori information that the first and second derivatives are zero everywhere due to random errors, i.e., white noise. From the physical viewpoint, it is the finite‐element approximation for a lateral displacement of a bar or a plate under tension which is pulled to the data points by springs. From a technical viewpoint, it is an improved spline‐fitting algorithm. The additional condition of minimizing the derivative norms stabilizes the l...
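
A discretized analogue of the criterion (not the paper's B-spline algorithm) conveys the trade-off: minimize the sum of squared residuals plus penalties on first and second differences, which stand in for the first- and second-derivative norms. The weights and the grid below are illustrative assumptions.

```python
import numpy as np

def penalized_smooth(y, lam1, lam2):
    """Minimize ||f - y||^2 + lam1*||D1 f||^2 + lam2*||D2 f||^2 on a grid,
    where D1 and D2 are first- and second-difference operators. A sketch of
    the misfit/fluctuation/roughness trade-off, not the B-spline method."""
    n = len(y)
    D1 = np.diff(np.eye(n), 1, axis=0)
    D2 = np.diff(np.eye(n), 2, axis=0)
    A = np.eye(n) + lam1 * D1.T @ D1 + lam2 * D2.T @ D2
    return np.linalg.solve(A, y)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
f = penalized_smooth(y, lam1=1.0, lam2=100.0)
print(float(np.abs(f - np.sin(2 * np.pi * x)).mean()))  # well below the noise level
```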

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method to analyze systems in a time scale which is varied depending on the state, according to dt/dτ = s(x) (where t and τ are the actual time scale and the new one, respectively, and s is what they call the time scaling function).
Abstract: In this note, we propose a method to analyze systems in a time scale which is varied depending on the state, according to dt/d\tau = s(x) (where t and τ are the actual time scale and the new one, respectively, and s(x) is the function which we call the time scaling function). Analysis of the system in the new time scale τ enables us to investigate the intrinsic structure of the system. A linearization problem in the new time scale is formulated as wide-sense feedback equivalence and is solved. It is also shown that the time scaling function which makes the system linear is derived as the solution of differential equations.
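
The effect of the state-dependent time scale is that dx/dτ = s(x) f(x): the orbit in state space is unchanged, only the speed along it differs. A small numerical sketch with an invented f and s(x) (both assumptions for illustration) shows two integrations tracing the same curve.

```python
import numpy as np

def f(x):                       # original dynamics dx/dt = f(x)
    return np.array([x[1], -x[0]])

def s(x):                       # time scaling function (kept positive)
    return 1.0 + 0.5 * x[0] ** 2

def euler(rhs, x0, h, steps):
    x, out = np.array(x0, float), []
    for _ in range(steps):
        out.append(x.copy())
        x = x + h * rhs(x)
    return np.array(out)

orbit_t   = euler(f,                      [1.0, 0.0], 1e-3, 6000)  # time t
orbit_tau = euler(lambda x: s(x) * f(x),  [1.0, 0.0], 1e-3, 6000)  # time tau
# Both trajectories lie (approximately) on the same circle x1^2 + x2^2 = 1;
# only the parameterization along the curve differs.
print(np.abs((orbit_t**2).sum(1) - 1).max(), np.abs((orbit_tau**2).sum(1) - 1).max())
```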

Journal ArticleDOI
F H Mladineo1
TL;DR: This algorithm for global optimization uses an arbitrary starting point, requires no derivatives, uses comparatively few function evaluations, and is not side-tracked by nearby relative optima; it builds a gradually closer piecewise-differentiable approximation to the objective function.
Abstract: This algorithm for global optimization uses an arbitrary starting point, requires no derivatives, uses comparatively few function evaluations and is not side-tracked by nearby relative optima. The algorithm builds a gradually closer piecewise-differentiable approximation to the objective function. The computer program exhibits a (theoretically expected) strong tendency to cluster around relative optima close to the global. Results of testing with several standard functions are given.
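
The flavor of building a gradually closer piecewise approximation can be shown in one dimension with a sawtooth (Piyavskii-Shubert-type) cover, a simpler relative of this multivariate algorithm: keep a piecewise-linear upper envelope determined by the evaluated points and a Lipschitz constant, and always evaluate where the envelope is highest. The constant L and the test function are assumptions for illustration.

```python
import numpy as np

def lipschitz_maximize(f, a, b, L, n_iter=40):
    """1-D sawtooth global maximization: the next evaluation point is the
    peak of the piecewise-linear upper envelope implied by the Lipschitz
    constant L. A sketch of the idea only; the paper's algorithm is the
    multivariate version and differs in detail."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        x_s, y_s = np.array(xs)[order], np.array(ys)[order]
        # Peak of the envelope between each pair of consecutive samples.
        peaks_x = 0.5 * (x_s[:-1] + x_s[1:]) + (y_s[1:] - y_s[:-1]) / (2 * L)
        peaks_y = 0.5 * (y_s[:-1] + y_s[1:]) + L * (x_s[1:] - x_s[:-1]) / 2
        j = int(np.argmax(peaks_y))
        xs.append(float(peaks_x[j]))
        ys.append(f(peaks_x[j]))
    k = int(np.argmax(ys))
    return xs[k], ys[k]

f = lambda x: np.sin(3 * x) + np.cos(5 * x)   # several relative optima
print(lipschitz_maximize(f, 0.0, 2 * np.pi, L=8.0))
```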

Journal ArticleDOI
TL;DR: In this paper, the authors considered the mathematical programming problem of finding a subset of a finite-dimensional space where f is an extended real-valued function, and C is an arbitrary subset of the space, and gave necessary and sufficient optimality conditions.
Abstract: We consider the mathematical programming problem: find $\inf \{ f(x)\mid x \in C\} $ where f is an arbitrary extended-real-valued function, and C a subset of a finite-dimensional space. We give necessary and sufficient optimality conditions for this problem, generalizing previous results of A. Auslender (this Journal, 22 (1984), pp. 239–254).

Book ChapterDOI
01 Jan 1986
TL;DR: The basic problem of enumerative combinatorics is that of counting the number of elements of a finite set.
Abstract: The basic problem of enumerative combinatorics is that of counting the number of elements of a finite set. Usually we are given an infinite class of finite sets S_i, where i ranges over some index set I (such as the nonnegative integers ℕ), and we wish to count the number ƒ(i) of elements of each S_i “simultaneously.” Immediate philosophical difficulties arise. What does it mean to “count” the number of elements of S_i? There is no definitive answer to this question. Only through experience does one develop an idea of what is meant by a “determination” of a counting function ƒ(i).
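
A concrete instance of such a counting function (not from the chapter): take S_i to be the set of all subsets of {1, …, i}; then ƒ(i) = 2^i, and a direct enumeration agrees with the closed form.

```python
from itertools import combinations

# S_i = all subsets of an i-element set; the counting function is f(i) = 2**i.
def f(i: int) -> int:
    return sum(1 for k in range(i + 1) for _ in combinations(range(i), k))

print([f(i) for i in range(6)])   # [1, 2, 4, 8, 16, 32], i.e. 2**i
```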

Journal ArticleDOI
01 Nov 1986
TL;DR: For a function field F of genus g over a field k of characteristic zero, a projective height is defined, and the sum formula on F shows that this is really a height on the projective space.
Abstract: Let k be a field of zero characteristic, and let F be a function field over k of genus g. We normalize each valuation v on F so that its order group consists of all rational integers, and for elements u_1, …, u_n of F, not all zero, we define the (projective) height. The sum formula on F shows that this is really a height on the projective space.

Journal ArticleDOI
TL;DR: In this article, linear differential filters, i.e., filters in which the filtered function f̄ and the original function f are connected by a linear differential equation, are studied on a general basis for second-order elliptic operators.
Abstract: Linear differential filters, i.e., filters in which the filtered function f̄ and the original function f are connected by a linear differential equation, are studied on a general basis for the elliptic operators of second order. In addition, a particular example of a parabolic filter depending on space and extended to past times is given, and its interest in the context of the large eddy simulation of turbulence is discussed.
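
As a sketch of what a second-order elliptic differential filter does, the snippet below solves (1 − ℓ² d²/dx²) f̄ = f on a periodic one-dimensional grid; each Fourier mode k is then attenuated by roughly 1/(1 + (ℓk)²). The specific operator, boundary conditions, and length scale ℓ are assumptions for illustration, not the particular filters studied in the paper.

```python
import numpy as np

def elliptic_filter(f, dx, ell):
    """Solve (1 - ell^2 d^2/dx^2) f_bar = f on a periodic 1-D grid: an
    illustrative second-order elliptic differential filter."""
    n = f.size
    # Periodic second-difference operator.
    D2 = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
          + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
    A = np.eye(n) - ell**2 * D2
    return np.linalg.solve(A, f)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(x) + 0.3 * np.sin(20 * x)          # smooth part plus small scales
f_bar = elliptic_filter(f, dx=x[1] - x[0], ell=0.3)
spec = np.abs(np.fft.rfft(f_bar))
# Mode 1 passes almost unchanged; mode 20 is strongly damped.
print(round(float(spec[1]), 1), round(float(spec[20]), 1))
```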

Proceedings ArticleDOI
01 Dec 1986
TL;DR: In this paper, it was shown that any square invertible nonlinear system whose inverse is state-free can be turned into a fully linear controllable and observable system by means of dynamic statefeedback and coordinates transformations.
Abstract: The purpose of this paper is to show that any square invertible nonlinear system whose inverse is "state-free" can be turned into a fully linear controllable and observable system by means of dynamic state-feedback and coordinates transformations. A nonlinear system has an inverse which is "state-free" if the value of the input (at time t) can be expressed as a function of the values (at t) of the output and a finite number of its derivatives.

Journal ArticleDOI
TL;DR: This article used multiple-cue probability learning to study the effects of time pressure on judgment, and found that cognitive control deteriorated under time pressure while cognitive matching remained unchanged in complex cue-criterion environments containing curvilinear function forms.