
Showing papers on "Function (mathematics) published in 1979"


Journal ArticleDOI
D. H. Kelly1
TL;DR: The spatio-temporal threshold surface for stabilized vision is constructed, and its properties are displayed in terms of the usual frequency parameters; e.g., at low spatial frequencies, the temporal response becomes nearly independent of spatial frequency, while at low temporal frequency, the spatial response becomes independent of temporal frequency.
Abstract: The stabilized contrast-sensitivity function measured at a constant retinal velocity is tuned to a particular spatial frequency, which is inversely related to the velocity chosen. The Fourier transforms of these constant-velocity passbands have the same form as retinal receptive fields of various sizes. At low velocities, in the range of the natural drift motions of the eye, the stabilized contrast-sensitivity function matches the normal, unstabilized result. At higher velocities (corresponding to motions of objects in the environment), this curve maintains the same shape but shifts toward lower spatial frequencies. The constant-velocity passband is displaced across the spatio-temporal frequency domain in a manner that is almost symmetric about the constant-velocity plane at v = 2 deg/s. Interpolating these diagonal profiles by a suitable analytic expression, we construct the spatio-temporal threshold surface for stabilized vision, and display its properties in terms of the usual frequency parameters; e.g., at low spatial frequencies, the temporal response becomes nearly independent of spatial frequency, while at low temporal frequencies, the spatial response becomes independent of temporal frequency.

761 citations


Journal ArticleDOI
TL;DR: An approach to statistical data analysis which is simultaneously parametric and nonparametric is described, and density-quantile functions, autoregressive density estimation, estimation of location and scale parameters by regression analysis of the sample quantile function, and quantile-box plots are introduced.
Abstract: This article attempts to describe an approach to statistical data analysis which is simultaneously parametric and nonparametric. Given a random sample X_1, …, X_n of a random variable X, one would like (1) to test the parametric goodness-of-fit hypothesis H_0 that the true distribution function F is of the form F(x) = F_0((x − μ)/σ), where F_0 is specified, and (2) when H_0 is not accepted, to estimate nonparametrically the true density-quantile function fQ(u) and score function J(u) = −(fQ)′(u). The article also introduces density-quantile functions, autoregressive density estimation, estimation of location and scale parameters by regression analysis of the sample quantile function, and quantile-box plots.
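As a concrete point of reference for the sample quantile function mentioned above, here is a minimal sketch of the empirical quantile function Q_hat(u) built from order statistics (a plain order-statistic convention, not Parzen's autoregressive estimator):

```python
import numpy as np

def sample_quantile_function(x):
    """Return Q_hat(u), the empirical quantile function of the sample x.

    Minimal convention: Q_hat(u) is the order statistic X_(ceil(n*u)),
    i.e. the left-continuous inverse of the empirical CDF.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)

    def Q(u):
        u = np.atleast_1d(u)
        idx = np.clip(np.ceil(n * u).astype(int) - 1, 0, n - 1)
        return xs[idx]

    return Q

# Example: a quantile-box-plot style summary at the quartiles
rng = np.random.default_rng(0)
Q = sample_quantile_function(rng.normal(loc=10.0, scale=2.0, size=500))
print(Q([0.25, 0.5, 0.75]))
```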

719 citations


Journal ArticleDOI
TL;DR: In this article, a multi-ordering parameter model for glass transition phenomena has been developed on the basis of nonequilibrium thermodynamics, where the departure from equilibrium is partitioned among the various ordering parameters, each of which is associated with a unique retardation time, giving rise to the well-known nonlinear effects observed in volume and enthalpy recovery.
Abstract: A multiordering parameter model for glass-transition phenomena has been developed on the basis of nonequilibrium thermodynamics. In this treatment the state of the glass is determined by the values of N ordering parameters in addition to T and P; the departure from equilibrium is partitioned among the various ordering parameters, each of which is associated with a unique retardation time. These times are assumed to depend on T, P, and on the instantaneous state of the system characterized by its overall departure from equilibrium, giving rise to the well-known nonlinear effects observed in volume and enthalpy recovery. The contribution of each ordering parameter to the departure and the associated retardation times define the fundamental distribution function (the structural retardation spectrum) of the system or, equivalently, its fundamental material response function. These, together with a few experimentally measurable material constants, completely define the recovery behavior of the system when subjected to any thermal treatment. The behavior of the model is explored for various classes of thermal histories of increasing complexity, in order to simulate real experimental situations. The relevant calculations are based on a discrete retardation spectrum, extending over four time decades, and on reasonable values of the relevant material constants in order to imitate the behavior of polymer glasses. The model clearly separates the contribution of the retardation spectrum from the temperature-structure dependence of the retardation times which controls its shifts along the experimental time scale. This is achieved by using the natural time scale of the system which eliminates all the nonlinear effects, thus reducing the response function to the Boltzmann superposition equation, similar to that encountered in linear viscoelasticity. As a consequence, the system obeys a rate (time)-temperature reduction rule which provides for generalization within each class of thermal treatment. Thus the model establishes a rational basis for comparing theory with experiment, and also various kinds of experiments between themselves. The analysis further predicts interesting features, some of which have often been overlooked. Among these are the impossibility of extraction of the spectrum (or response function) from experiments involving cooling from high temperatures at finite rate; and the appearance of two peaks in the expansion coefficient, or heat capacity, during the heating stage of three-step thermal cycles starting at high temperatures. Finally, the theory also provides a rationale for interpreting the time dependence of mechanical or other structure-sensitive properties of glasses as well as for predicting their long-range behavior.

663 citations


Journal ArticleDOI
TL;DR: It is shown that there is no long-term correlation between the abundance of a species and its rates of increase, and that the number of consumer species cannot exceed the number of resources plus distinct nonlinearities.
Abstract: A community which would not reach a stable equilibrium may nevertheless persist if there is temporal variation and nonlinear dynamics. A procedure is introduced for taking time averages of the rates of change. Since the average of a nonlinear function is not the function of the average, higher terms such as the variances of resources or covariances among species and environmental factors enter into the coexistence conditions. These measures behave as if they were resources. Therefore the number of consumer species cannot exceed the number of resources plus distinct nonlinearities. The nonlinearities arise from predator saturation, learning, group hunting, multiple nutritional requirements, or seasonally variable feeding rates. It is shown that there is no long term correlation between the abundance of a species and its rates of increase.
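The key step that "the average of a nonlinear function is not the function of the average" is easy to see numerically; here is a minimal sketch with a saturating (predator-saturation style) response to a fluctuating resource, where the response function and the resource distribution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fluctuating resource level over time (hypothetical values)
R = rng.uniform(0.2, 5.0, size=10_000)

# A saturating (Holling type II style) functional response
def f(r, a=1.0, h=0.5):
    return a * r / (1.0 + a * h * r)

mean_of_f = f(R).mean()      # time average of the nonlinear response
f_of_mean = f(R.mean())      # response evaluated at the average resource

print(mean_of_f, f_of_mean)  # the two differ; the gap grows with var(R)
```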

452 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed methods for exploring the resolving power of the least squares method for solving geophysical inverse problems, and applied it to synthetic data for the inverse geophysical edge effect problem.
Abstract: Summary. The recent, but by now classical method for dealing with non-uniqueness in geophysical inverse problems is to construct linear averages of the unknown function whose values are uniquely defined by empirical data (Backus & Gilbert). However, the usefulness of such linear averages for making geophysical inferences depends on the good behaviour of the unknown function in the region in which it is averaged. The assumption of good behaviour, which is implicit in the acceptance of a given average property, is equivalent to the use of a priori information about the unknown function. There are many cases in which such a priori information may be expressed quantitatively and incorporated in the analysis from the very beginning. In these cases, the classical least-squares method may be used both to estimate the unknown function and to provide meaningful error estimates. In this paper I develop methods for exploring the resolving power in such cases. For those problems in which a continuous unknown function is represented by a finite number of 'layer averages', the ultimately achievable resolving width is simply the layer thickness, and perfectly rectangular resolving kernels of greater width are achievable. The method is applied to synthetic data for the inverse 'gravitational edge effect' problem, where y_i are data, f(z) is an unknown function, and e_i are random errors. Results are compared with those of Parker, who studied the same problem using the Backus-Gilbert approach.

418 citations


Journal ArticleDOI
TL;DR: The truncated Chebyshev polynomial provides a reliable scheme for the automatic determination of empirical weights for least-squares structure refinement when the errors are a function of |F_o|, as discussed by the authors.
Abstract: The truncated Chebyshev polynomial provides a reliable scheme for the automatic determination of empirical weights for least-squares structure refinement when the errors are a function of |F_o|.
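A hypothetical sketch of how such a weighting scheme can be organized: weights are taken as the reciprocal of a truncated Chebyshev series evaluated on |F_o| rescaled to [-1, 1]. The functional form and the coefficients below are illustrative assumptions, not the scheme fitted in the paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_weights(F_obs, coeffs):
    """Empirical least-squares weights w = 1 / T(x), where T is a truncated
    Chebyshev series evaluated on |Fo| rescaled to the domain [-1, 1].

    coeffs : series coefficients a_0..a_m, assumed to have been fitted elsewhere.
    """
    F_obs = np.abs(np.asarray(F_obs, dtype=float))
    lo, hi = F_obs.min(), F_obs.max()
    x = 2.0 * (F_obs - lo) / (hi - lo) - 1.0      # rescale |Fo| to [-1, 1]
    t = C.chebval(x, coeffs)
    return 1.0 / np.maximum(t, 1e-12)             # guard against non-positive values

# Illustrative use with made-up coefficients
w = chebyshev_weights([10.0, 55.0, 120.0, 300.0], coeffs=[2.0, 0.5, 0.3])
print(w)
```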

407 citations


Book ChapterDOI
16 Jul 1979
TL;DR: By providing more sophisticated well-founded sets, the corresponding termination functions can be simplified.
Abstract: A common tool for proving the termination of programs is the well-founded set, a set ordered in such a way as to admit no infinite descending sequences. The basic approach is to find a termination function that maps the values of the program variables into some well-founded set, such that the value of the termination function is continually reduced throughout the computation. All too often, the termination functions required are difficult to find and are of a complexity out of proportion to the program under consideration. However, by providing more sophisticated well-founded sets, the corresponding termination functions can be simplified.
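As an illustration of the basic approach (our own example, not taken from the chapter), the following loop terminates because the termination function mapping the state (m, n) into lexicographically ordered pairs of naturals, a well-founded set, strictly decreases on every iteration even though the second component may grow:

```python
def run(m, n):
    """Loop whose termination is justified by the lexicographic measure (m, n)."""
    assert m >= 0 and n >= 0
    while m > 0 or n > 0:
        measure_before = (m, n)          # value in the well-founded set N x N
        if n > 0:
            n -= 1                       # second component decreases
        else:
            m -= 1                       # first component decreases,
            n = 2 * m                    # second may grow arbitrarily
        assert (m, n) < measure_before   # strict decrease under lexicographic order
    return "terminated"

print(run(3, 2))
```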

353 citations


Journal ArticleDOI
TL;DR: In this paper, the variation of the wind profile power-law exponent with respect to changes in surface roughness and atmospheric stability is depicted using the formulation of Nickerson and Smiley for specifying the vertical variations of the horizontal wind.

299 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient method was developed to evaluate the function w(z) = exp(−z²)·[1 + (2i/√π)∫_0^z exp(t²) dt] for the complex argument z = x + iy.
Abstract: An efficient method is developed to evaluate the function w(z) = exp(−z²)·[1 + (2i/√π)∫_0^z exp(t²) dt] for the complex argument z = x + iy. The real part of w(z) is the Voigt function describing spectral line profiles; the imaginary part can be used to compute derivatives of the spectral line shapes, which are useful, e.g. in least-squares fitting procedures. As an example of the method a simple and fast FORTRAN subroutine is listed in the Appendix from which w(z) in the entire y ⩾ 0 half-plane can be calculated, the maximum relative error being less than 2 × 10⁻⁶ and 5 × 10⁻⁶ for the real and imaginary parts, respectively.
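The function w(z) is what is now usually called the Faddeeva function, and the Voigt profile follows from its real part; here is a minimal sketch using scipy.special.wofz as a stand-in for the paper's FORTRAN subroutine (this is not the authors' algorithm):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_profile(x, sigma, gamma):
    """Voigt line shape: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma).

    Uses V(x) = Re[w(z)] / (sigma * sqrt(2*pi)) with z = (x + i*gamma) / (sigma*sqrt(2)).
    """
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt_profile(x, sigma=1.0, gamma=0.5))
```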

276 citations


Journal ArticleDOI
TL;DR: In this article, a semiclassical formula was derived for the Wigner function W(q, p, t) describing the evolution in the two-dimensional phase space qp of a nonstationary quantum state ψ(q, t) for a system with one degree of freedom.
Abstract: We derive a semiclassical formula for the Wigner function W(q, p, t) describing the evolution in the two-dimensional phase space qp of a nonstationary quantum state ψ(q, t) for a system with one degree of freedom. The initial state ψ(q, 0) corresponds to a family of classical orbits represented by a curve C_0 in qp. Under the classical motion C_0 evolves into a curve C_t; we show that the region where W is large hugs C_t in an adiabatic fashion, and that W has semiclassical oscillations depending only on the geometry of C_t and neighbouring curves. As t → ∞, C_t can get very complicated, and we classify its convolutions as 'whorls' and 'tendrils', associated respectively with stable and unstable classical motion. In these circumstances the quantum function W cannot resolve the details of C_t, and at a time t_c there is a transition to new regimes, for which we make predictions about the morphology of ψ from the way C_t fills regions of phase space as t → ∞. The regimes associated with whorls and tendrils are different. We expect t_c = O(ℏ^(−2/3)) for whorls and t_c = O(ln ℏ^(−1)) for tendrils.

183 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a general theory for numerical evaluation of integrals of the Hankel type and showed that the absolute error on the output function is less than (K(ω0)/r)·exp(−ρω0/Δ), Δ being the logarithmic sampling distance.
Abstract: Inspired by the linear filter method introduced by D. P. Ghosh in 1970 we have developed a general theory for numerical evaluation of integrals of the Hankel type. Replacing the usual sine interpolating function by sinsh(x) = a·sin(ρx)/sinh(aρx), where the smoothness parameter a is chosen to be “small”, we obtain explicit series expansions for the sinsh-response or filter function H*. If the input function f(λ exp(iω)) is known to be analytic in the region 0 < λ < ∞, |ω| ≤ ω0 of the complex plane, we can show that the absolute error on the output function is less than (K(ω0)/r)·exp(−ρω0/Δ), Δ being the logarithmic sampling distance. Due to the explicit expansions of H* the tails of the infinite summation over (m−n)Δ can be handled analytically. Since the only restriction on the order is ν > −1, the Fourier transform is a special case of the theory, ν = ±1/2 giving the sine and cosine transform, respectively. In theoretical model calculations the present method is considerably more efficient than the Fast Fourier Transform (FFT).
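For orientation, a Hankel-type integral with a known closed form can be evaluated by brute-force quadrature; this naive baseline is what fast filter methods of the kind described above are designed to outperform (it is not the sinsh filter itself):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def hankel_integral(f, r, lam_max=200.0):
    """Naive quadrature of g(r) = int_0^inf f(lam) * J0(lam * r) dlam."""
    val, _ = quad(lambda lam: f(lam) * j0(lam * r), 0.0, lam_max, limit=400)
    return val

# Test case with a known answer: int_0^inf exp(-a*lam) J0(lam*r) dlam = 1/sqrt(a^2 + r^2)
a, r = 1.0, 2.0
print(hankel_integral(lambda lam: np.exp(-a * lam), r), 1.0 / np.sqrt(a**2 + r**2))
```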


Patent
29 Jun 1979
TL;DR: In this article, a hardware logic simulation machine comprised of an array of specially designed parallel processors, with there being no theoretical limit to the number of processors which may be assembled into the array.
Abstract: A hardware logic simulation machine comprised of an array of specially designed parallel processors, with there being no theoretical limit to the number of processors which may be assembled into the array. Each processor executes a logic simulation function wherein the logic subnetwork simulated by each processor is implicitly described by a program loaded into each processor instruction memory. Logic values simulated by one processor are communicated to other processors by a switching mechanism controlled by a controller. If the array consists of i processor addresses, the switch is a full i-by-i way switch. Each processor is operated in parallel, and the major component of each processor is a first set of two memory banks for storing the simulated logic values associated with the output of each logic block. A second set of two memory banks are included in each processor for storing logic simulations from other processors to be combined with the logic simulation stored in the first set of memory banks.
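A purely illustrative software analogue of the scheme (hypothetical, not the patented hardware): each "processor" evaluates its own gate list from a shared table of logic values, and repeated sweeps play the role of the switch that passes values between processors.

```python
# Toy analogue: each "processor" owns a gate list (output, op, in1, in2);
# a shared value table stands in for the i-by-i switch between processors.
NETS = {
    "P0": [("n1", "AND", "a", "b"), ("n2", "NOT", "n1", None)],
    "P1": [("out", "OR", "n2", "c")],
}

def eval_gate(op, x, y):
    if op == "NOT":
        return 1 - x
    return {"AND": x & y, "OR": x | y}[op]

def simulate(primary_inputs, sweeps=3):
    values = dict(primary_inputs)            # net name -> 0/1
    for _ in range(sweeps):                  # repeat until values settle
        for proc, gates in NETS.items():     # processors evaluated in turn
            for out, op, i1, i2 in gates:
                x = values.get(i1, 0)
                y = values.get(i2, 0) if i2 is not None else None
                values[out] = eval_gate(op, x, y)
    return values

print(simulate({"a": 1, "b": 1, "c": 0}))    # out = OR(NOT(1 AND 1), 0) = 0
```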

Journal ArticleDOI
TL;DR: In this paper, a theory for the memory function of the velocity autocorrelation function for a monatomic liquid is presented, based on a general kinetic approach, and strong emphasis is put on the coupling of the motion of a single particle to the collective motions in its surrounding.
Abstract: A theory for the memory function of the velocity autocorrelation function for a monatomic liquid is presented, based on a general kinetic approach. Strong emphasis is put on the coupling of the motion of a single particle to the collective motions in its surrounding, splitting the memory function into an essentially binary collision part and a more collective tail, including recollisions to all orders. Numerical comparisons with molecular dynamics data on liquid rubidium have shown quite remarkable agreement.


Proceedings ArticleDOI
TL;DR: Results of computer simulated pressure buildup analysis indicate that the use of TA(P) provides satisfactory values of computed fracture lengths in fractured gas wells, and its application is demonstrated by means of example problems.
Abstract: A new time function has been defined which considers variations of gas viscosity and compressibility as a function of pressure, which in turn is a function of time. This function appears to be similar to the real gas pseudo-pressure, M(P), of Al-Hussainy et al., which takes into account the variations of gas viscosity and Z-factor as a function of pressure. However, this is an approximate function as opposed to M(P). This time function is referred to in this work as the real gas pseudo-time, TA(P). This function has aided in post-treatment pressure buildup analysis of fractured (including MHF) gas wells by type-curve analysis. Results of computer simulated pressure buildup analysis indicate that the use of TA(P) provides satisfactory values of computed fracture lengths in fractured gas wells. In this work, the real gas pseudo-time is described and its application is demonstrated by means of example problems. Although the discussion in this study is limited to pressure buildup analysis of vertically fractured gas wells, the utility of this function is not meant to be restricted to such wells only.
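Real gas pseudo-time is commonly written as t_a(t) = ∫_0^t dt′ / [μ(p(t′)) c_t(p(t′))]; the sketch below evaluates it numerically under that convention, with placeholder property correlations and pressure history (the paper's exact normalization and correlations may differ):

```python
import numpy as np

def pseudo_time(t, p_of_t, mu_of_p, ct_of_p):
    """Real gas pseudo-time t_a(t) = int_0^t dt' / (mu(p(t')) * c_t(p(t'))),
    evaluated by trapezoidal quadrature on a sampled pressure history."""
    p = p_of_t(t)
    integrand = 1.0 / (mu_of_p(p) * ct_of_p(p))
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Placeholder correlations and a declining-pressure history (illustrative only)
t = np.linspace(0.0, 100.0, 201)             # hours
p_hist = lambda t: 5000.0 - 20.0 * t         # psia
mu = lambda p: 0.02 + 2e-6 * p               # cp
ct = lambda p: 1.0 / p                       # 1/psia (gas-dominated)
print(pseudo_time(t, p_hist, mu, ct)[-1])
```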

Journal ArticleDOI
TL;DR: In this paper, a self-correcting point process is modelled by making the instantaneous rate of the process at time t a suitable function of n − ρt, n being the number of points in [0, t].

Book ChapterDOI
TL;DR: In this article, the concepts of continuity and condensability are defined for belief functions, and it is shown how to extend continuous or condensable belief functions from an algebra of subsets to the corresponding power set.
Abstract: This paper studies belief functions, set functions which are normalized and monotone of order ∞. The concepts of continuity and condensability are defined for belief functions, and it is shown how to extend continuous or condensable belief functions from an algebra of subsets to the corresponding power set. The main tool used in this extension is the theorem that every belief function can be represented by an allocation of probability, i.e., by a ∩-homomorphism into a positive and completely additive probability algebra. This representation can be deduced either from an integral representation due to Choquet or from more elementary work by Revuz and Honeycutt.
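In the finite case the defining construction is compact: given a basic mass assignment m over subsets of a frame, Bel(A) = Σ_{B ⊆ A} m(B). Here is a minimal sketch for a finite frame (it does not touch the continuity and condensability questions studied in the paper):

```python
from itertools import chain, combinations

def powerset(frame):
    return chain.from_iterable(combinations(frame, r) for r in range(len(frame) + 1))

def belief(mass, A):
    """Bel(A) = sum of mass over all focal sets B contained in A.

    mass : dict mapping frozenset -> nonnegative weight summing to 1.
    """
    A = frozenset(A)
    return sum(w for B, w in mass.items() if B <= A)

# Example on the frame {a, b, c}
m = {frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.3, frozenset({"a", "b", "c"}): 0.3}
for A in powerset({"a", "b", "c"}):
    print(set(A) or "{}", belief(m, A))
```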

Journal ArticleDOI
TL;DR: Good fits to experimental data have been achieved very conveniently and accurately by this method, and the statistical standard errors of the anisotropy decay parameters have been found to be smaller than the standard errors previously calculated for the moment method.

Journal ArticleDOI
D. A. Turner1
TL;DR: Much of the notation used in logic and mathematics can be recast in the following uniform syntax: a term is either an atom or a variable or else it is of the form A B where A and B are terms and their juxtaposition denotes the application of a monadic function to its argument.
Abstract: Much of the notation used in logic and mathematics can be recast in the following uniform syntax. A term is either an atom (i.e. a constant or a variable) or else it is of the form A B where A and B are terms and their juxtaposition denotes the application of a monadic function (on the left) to its argument (on the right). We may use parentheses to resolve ambiguity together with the convention that, in the absence of parentheses, juxtaposition associates to the left. So in A B C the function A is being applied to B and the result, itself presumably a function, is being applied to C. Functions of several arguments are adapted to this uniformly monadic syntax by replacing them by appropriately defined higher order functions of one argument. So for example a first-order dyadic function, d say, is replaced by a second-order monadic function d' defined so that d' x y always has the same value as d(x, y). (Note. This transformation is called "currying" after the logician H. B. Curry; thus we would say that d' is here a curried version of d.) In addition to operations which can be construed as the application of a function to arguments, however, our customary notations also use constructions of an apparently very different character, namely those which introduce bound variables (e.g. the quantifiers, the integration sign, Church's λ). These too can be reduced to the above form provided that we can give a positive solution to the following problem. Given a term, X say, containing zero or more occurrences of the variable x, can we write down a term denoting the function f with the property (1) below?
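The currying transformation described above is easy to make concrete; a minimal sketch follows (the helper name curry2 is ours, purely for illustration):

```python
def curry2(d):
    """Turn a dyadic function d(x, y) into a higher-order monadic function d'
    such that d'(x)(y) == d(x, y)."""
    return lambda x: lambda y: d(x, y)

def add(x, y):
    return x + y

add_curried = curry2(add)          # add_curried plays the role of d' in the text
print(add_curried(2)(3))           # 5 -- juxtaposition A B C reads as (A B) C
print(add(2, 3))                   # same value
```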

Journal ArticleDOI
TL;DR: In this paper, interval analysis is used to compute the minimum value of a twice continuously differentiable function of one variable over a closed interval, and when both the first and second derivatives of the function have a finite number of isolated zeros, their method never fails to find the global minimum.
Abstract: We show how interval analysis can be used to compute the minimum value of a twice continuously differentiable function of one variable over a closed interval. When both the first and second derivatives of the function have a finite number of isolated zeros, our method never fails to find the global minimum.

Journal ArticleDOI
TL;DR: A fast and simple iterative method is proposed for the determination of a single real root of a real continuous function; the original function is linearized and the regula falsi is applied to the modified function, which leads to a very simple algorithm.
Abstract: A fast and simple iterative method is proposed for the determination of a single real root of a real continuous function. The idea is based upon linearizing the original function, after which the regula falsi is applied to this modified function; this leads to a very simple algorithm. The rate of convergence is shown to be quadratic or better.
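For reference, the classical regula falsi that the method builds on is sketched below; this is the textbook bracketing iteration, not the paper's linearizing modification:

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    """Classical false-position iteration for a root of f bracketed by [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed: f(a) and f(b) must differ in sign")
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)     # secant through (a, fa), (b, fb)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                      # keep the sub-interval that brackets the root
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(regula_falsi(lambda x: x**3 - 2.0, 1.0, 2.0))   # cube root of 2
```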

Journal ArticleDOI
TL;DR: Asymptotically coincident upper and lower bounds on the exponent of the largest possible probability of the correct decoding of block codes are given for all rates above capacity.
Abstract: Asymptotically coincident upper and lower bounds on the exponent of the largest possible probability of the correct decoding of block codes are given for all rates above capacity. The lower bound sharpens Omura's bound. The upper bound is proved by a new and simple combinatorial argument.

Journal ArticleDOI
TL;DR: In this paper, the authors derived uniform convergence bounds and uniform consistency on bounded intervals for the Nadaraya-Watson kernel estimator and its derivatives, and the corresponding convergence results for the Priestley-Chao estimator in the case that the domain points are nonrandom.
Abstract: The objective in nonparametric regression is to infer a function m(x) on the basis of a finite collection of noisy pairs {(X_i, m(X_i) + N_i)}, i = 1, …, n, where the noise components N_i satisfy certain lenient assumptions and the domain points X_i are selected at random. It is known a priori only that m is a member of a nonparametric class of functions (that is, a class of functions like C[0, 1] which, under customary topologies, does not admit a homeomorphic indexing by a subset of a Euclidean space). The main theoretical contribution of this study is to derive uniform convergence bounds and uniform consistency on bounded intervals for the Nadaraya-Watson kernel estimator and its derivatives. Also, we obtain the corresponding convergence results for the Priestley-Chao estimator in the case that the domain points are nonrandom. With these developments we are able to apply nonparametric regression methodology to the problem of identifying noisy time-varying linear systems.
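The Nadaraya-Watson estimator itself is short; here is a minimal sketch with a Gaussian kernel and a hand-picked bandwidth (no attempt is made at the paper's convergence analysis):

```python
import numpy as np

def nadaraya_watson(x_eval, X, Y, h):
    """m_hat(x) = sum_i K((x - X_i)/h) * Y_i / sum_i K((x - X_i)/h), Gaussian K."""
    x_eval = np.atleast_1d(x_eval)[:, None]                # shape (m, 1)
    K = np.exp(-0.5 * ((x_eval - X[None, :]) / h) ** 2)    # shape (m, n)
    return (K * Y[None, :]).sum(axis=1) / K.sum(axis=1)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, 200)
Y = np.sin(2 * np.pi * X) + 0.2 * rng.normal(size=200)    # noisy m(x) = sin(2*pi*x)
print(nadaraya_watson([0.25, 0.5, 0.75], X, Y, h=0.05))
```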

Journal ArticleDOI
TL;DR: The number of registers required for evaluating arithmetic expressions is a parameter of binary trees that appears in various computer science problems, as well as in numerous natural-science applications where it is known as the Strahler number.
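The parameter in question is computed by the classical Ershov/Strahler recursion on binary trees; a minimal sketch of that recursion follows (our own illustration of the quantity the TL;DR refers to):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def strahler(node):
    """Strahler (Ershov) number: registers needed to evaluate the tree
    when both children of every internal node must be evaluated."""
    if node is None or (node.left is None and node.right is None):
        return 1                                   # a leaf needs one register
    l, r = strahler(node.left), strahler(node.right)
    return max(l, r) if l != r else l + 1          # equal subtrees force an extra register

# ((a+b)*(c+d)) + e : needs 3 registers
leaf = Node
tree = Node(Node(Node(leaf(), leaf()), Node(leaf(), leaf())), leaf())
print(strahler(tree))
```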

Journal ArticleDOI
TL;DR: In this paper, it was shown that a solution is efficient if and only if it solves an optimization problem that bounds the various criteria values from below and maximizes a strictly increasing function of these several criteria values.
Abstract: In the context of deterministic multicriteria maximization, a Pareto optimal, nondominated, or efficient solution is a feasible solution for which an increase in value of any one criterion can only be achieved at the expense of a decrease in value of at least one other criterion. Without restrictions of convexity or continuity, it is shown that a solution is efficient if and only if it solves an optimization problem that bounds the various criteria values from below and maximizes a strictly increasing function of these several criteria values. Also included are discussions of previous work concerned with generating or characterizing the set of efficient solutions, and of the interactive approach for resolving multicriteria optimization problems. The final section argues that the paper's main result should not actually be used to generate the set of efficient solutions, relates this result to Simon's theory of satisficing, and then indicates why and how it can be used as the basis for interactive procedures with desirable characteristics.
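On a finite set of alternatives, both the definition of efficiency and the bound-then-maximize characterization reduce to a few lines; a small finite-case illustration follows (it does not exercise the convexity-free generality of the theorem):

```python
def dominates(y, x):
    """y dominates x if y is at least as good in every criterion and better in one."""
    return all(yi >= xi for yi, xi in zip(y, x)) and any(yi > xi for yi, xi in zip(y, x))

def efficient(points):
    """Pareto-efficient (nondominated) points of a finite multicriteria set."""
    return [x for x in points if not any(dominates(y, x) for y in points if y != x)]

pts = [(3, 1), (2, 2), (1, 3), (2, 1), (0, 0)]
print(efficient(pts))           # (3, 1), (2, 2), (1, 3) survive

# Characterization: an efficient point maximizes a strictly increasing function
# (here the sum) subject to lower bounds given by its own criteria values.
x0 = (2, 2)
feasible = [p for p in pts if all(pi >= xi for pi, xi in zip(p, x0))]
print(max(feasible, key=sum))   # (2, 2) itself is a maximizer
```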

01 Jan 1979
TL;DR: In this article, interval analysis is used to compute the minimum value of a twice continuously differentiable function of one variable over a closed interval, and it is shown that if both the first and second derivatives of the function have a finite number of isolated zeros, their method never fails to find the global minimum.
Abstract: We show how interval analysis can be used to compute the minimum value of a twice continuously differentiable function of one variable over a closed interval. When both the first and second derivatives of the function have a finite number of isolated zeros, our method never fails to find the global minimum. Consider a function f(x) in C². We shall describe a method for computing the minimum value of f(x) on a closed interval [a, b]. We shall see that, if f′(x) and f″(x) have only a finite number of isolated zeros, our method always converges. In a subsequent paper, we shall show how the method can be extended to the case in which x is a vector of more than one variable. Moreover, it will be extended to the constrained case, and a modified method will remove the differentiability condition. The present paper serves to introduce the necessary ideas. In practice, we can only compute minima in a bounded interval. Hence, it is no (additional) restriction to confine our attention to a closed interval. The term global minimum used herein refers to the fact that we find the smallest value of f(x) throughout [a, b]. We shall not mistake a local minimum for the global one. Indeed, our method will usually not find local minima, unless forced to do so. Its efficiency would then be degraded if it did. In our method, we iteratively delete subintervals of [a, b] until the remaining set is sufficiently small. These subintervals consist of points at which either f(x) is proved to exceed the minimum in value or else the derivative is proved to be nonzero.
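A minimal interval branch-and-bound sketch in the spirit of the deletion strategy described above, using a tiny interval-arithmetic class and a natural interval extension; this is an illustration of the idea only, not the authors' algorithm (which also exploits derivative information):

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for natural extensions."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)
    def __add__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

def global_min(f, a, b, tol=1e-6):
    """Branch and bound: delete boxes whose interval lower bound exceeds the best
    known function value; returns a lower bound close to the global minimum."""
    best_upper = f(Interval((a + b) / 2)).hi        # midpoint value is an upper bound
    work, lowest = [(a, b)], float("inf")
    while work:
        lo, hi = work.pop()
        box = f(Interval(lo, hi))
        if box.lo > best_upper:                     # cannot contain the global minimum
            continue
        mid = 0.5 * (lo + hi)
        best_upper = min(best_upper, f(Interval(mid)).hi)
        if hi - lo < tol:
            lowest = min(lowest, box.lo)
            continue
        work += [(lo, mid), (mid, hi)]
    return lowest

# Example: f(x) = x^4 - 4x^2 + x on [-3, 3], written as a natural interval extension
f = lambda x: x * x * (x * x - 4) + x
print(global_min(f, -3.0, 3.0))
```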

Journal ArticleDOI
TL;DR: An interval arithmetic method is described for finding the global maxima or minima of multivariable functions, and the lower and the upper bounds of the interval expression of the function are estimated on each subregion.
Abstract: An interval arithmetic method is described for finding the global maxima or minima of multivariable functions. The original domain of variables is divided successively, and the lower and the upper bounds of the interval expression of the function are estimated on each subregion. By discarding subregions where the global solution can not exist, one can always find the solution with rigorous error bounds. The convergence can be made fast by Newton's method after subregions are grouped. Further, constrained optimization can be treated using a special transformation or the Lagrange-multiplier technique.

Book ChapterDOI
TL;DR: In this article, the authors discuss small-angle scattering experiments with particles in solution; the correlation between the distance distribution function and the structure of the particle is also discussed.
Abstract: This chapter discusses small-angle scattering experiments with particles in solution, i.e., the particles are nonoriented. A large number of particles contribute to the scattering and the resulting spatial average leads to a loss in information. The information contained in the three-dimensional electron density distribution is thereby reduced to the one-dimensional distance distribution function. This function is proportional to the number of lines of a given length r which connect any volume element i with any volume element k of the same particle. The spatial orientation of these connection lines is of no account to the function. The connection lines are weighted by the product of the number of electrons situated in the volume elements i and k, respectively. The correlation between the function and the structure of the particle is also discussed in the chapter. The connection between the distance distribution function and the measured experimental scattering curve is also shown. It is observed that each distance between two electrons of the particle, which is part of the function, leads to an angular-dependent scattering intensity. This physical process of scattering can be mathematically expressed by a Fourier transformation, which defines the way in which the information in “real space” (distance distribution function) is transformed into “reciprocal space” (scattering function). The chapter also discusses monochromatization and the camera type developed in Graz.
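The Fourier relation between the distance distribution function p(r) and the scattering curve is commonly written I(q) = 4π ∫_0^Dmax p(r)·sin(qr)/(qr) dr; a minimal numerical sketch under that convention follows (normalization conventions vary between texts, and the p(r) below is purely illustrative):

```python
import numpy as np

def scattering_intensity(q, r, p_r):
    """I(q) = 4*pi * integral of p(r) * sin(q r)/(q r) dr  (trapezoidal rule).

    q : scattering vector magnitudes; r, p_r : sampled distance distribution
    function of the particle (p_r = 0 beyond the maximum particle dimension).
    """
    q = np.atleast_1d(q)[:, None]
    kernel = np.sinc(q * r[None, :] / np.pi)        # numpy sinc(x) = sin(pi x)/(pi x)
    return 4.0 * np.pi * np.trapz(p_r[None, :] * kernel, r, axis=1)

# Toy p(r) for a particle of maximum dimension 10 (illustrative shape only)
r = np.linspace(0.0, 10.0, 500)
p_r = r**2 * (10.0 - r) ** 2
print(scattering_intensity([0.1, 0.5, 1.0], r, p_r))
```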

Journal ArticleDOI
TL;DR: It is shown that a condition sufficient for NP-completeness is that the function x ∧ ¬y be representable, and that any set of connectives not capable of representing this function has a polynomial-time satisfiability problem.
Abstract: For each fixed set of Boolean connectives, how hard is it to determine satisfiability for formulas with only those connectives? We show that a condition sufficient for NP-completeness is that the function x ∧ ¬y be representable, and that any set of connectives not capable of representing this function has a polynomial-time satisfiability problem.
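The flavour of the question can be seen in a tiny brute-force satisfiability checker where the admissible connectives are an explicit parameter; the "andnot" entry below is the function x ∧ ¬y (this toy enumerator illustrates the problem statement, not the complexity classification):

```python
from itertools import product

# A formula is a variable name, or a tuple (connective, subformula, subformula).
CONNECTIVES = {
    "andnot": lambda x, y: x and not y,      # the function x AND (NOT y)
    "or":     lambda x, y: x or y,
}

def evaluate(formula, assignment):
    if isinstance(formula, str):
        return assignment[formula]
    op, left, right = formula
    return CONNECTIVES[op](evaluate(left, assignment), evaluate(right, assignment))

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return variables(formula[1]) | variables(formula[2])

def satisfiable(formula):
    """Brute force over all 2^n assignments (exponential, as one would expect)."""
    vs = sorted(variables(formula))
    return any(evaluate(formula, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# ("andnot", p, p) is p AND NOT p: unsatisfiable; ("andnot", p, q) is satisfiable.
print(satisfiable(("andnot", "p", "p")), satisfiable(("andnot", "p", "q")))
```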