scispace - formally typeset

Showing papers on "Function (mathematics) published in 1983"


Journal ArticleDOI
TL;DR: In this article, equations for the propagation of phase and irradiance are derived, and a Green's function solution for the phase in terms of irradiance and perimeter phase values is given. A measurement scheme is discussed, and the results of a numerical simulation are given. Both circular and slit pupils are considered.
Abstract: Equations for the propagation of phase and irradiance are derived, and a Green's function solution for the phase in terms of irradiance and perimeter phase values is given. A measurement scheme is discussed, and the results of a numerical simulation are given. Both circular and slit pupils are considered. An appendix discusses the local validity of the parabolic-wave equation based on the factorized Helmholtz-equation approach to the Rayleigh–Sommerfeld and Fresnel diffraction theories. Expressions for the diffracted-wave field in the near-field region are given.
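In the paraxial regime, the phase–irradiance propagation relation referred to above is what later literature calls the transport-of-intensity equation; the statement below is the standard textbook form (sign conventions vary between references), not a verbatim quotation from the paper:

```latex
% Transport-of-intensity equation (paraxial regime).
% I = irradiance, \phi = phase, k = wavenumber,
% z = propagation axis, \nabla_\perp = transverse gradient.
k \, \frac{\partial I}{\partial z}
  \;=\; -\,\nabla_\perp \cdot \bigl( I \, \nabla_\perp \phi \bigr)
```

Given a measured axial derivative of the irradiance, this is an elliptic equation for the phase, which is why a Green's-function solution with perimeter boundary values is possible.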

1,310 citations


DOI
01 Feb 1983
TL;DR: In this paper, the problem of minimizing a real scalar quantity (for example array output power, or mean square error) as a function of a complex vector (the set of weights) frequently arises in adaptive array theory.
Abstract: The problem of minimising a real scalar quantity (for example array output power, or mean square error) as a function of a complex vector (the set of weights) frequently arises in adaptive array theory. A complex gradient operator is defined in the paper for this purpose and its use justified. Three examples of its application to array theory problems are given.
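The complex gradient operator can be sketched with the now-standard Wirtinger convention: treat the weight vector w and its conjugate w* as independent variables and differentiate with respect to w*. The quadratic cost below is a generic array-processing example chosen for illustration, not a formula quoted from the paper:

```latex
% For Hermitian R and the cost
%   J(w) = w^H R w - w^H p - p^H w,
% define the complex gradient \nabla_{w^*} J = \partial J / \partial w^*.
% Treating w and w^* as independent variables gives
\nabla_{w^*} J \;=\; R\,w \;-\; p
% Setting the gradient to zero yields the normal equations R w = p,
% i.e. the familiar optimum weight vector w = R^{-1} p.
```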

699 citations


Journal ArticleDOI
TL;DR: It is proved that the minimum of a biconcave function over a nonempty compact set occurs at a boundary point of the set and not necessarily an extreme point and the algorithm is proven to converge to a global solution of the nonconvex program.
Abstract: This paper presents a branch-and-bound algorithm for minimizing the sum of a convex function in x, a convex function in y and a bilinear term in x and y over a closed set. Such an objective function is called biconvex with biconcave functions similarly defined. The feasible region of this model permits joint constraints in x and y to be expressed. The bilinear programming problem becomes a special case of the problem addressed in this paper. We prove that the minimum of a biconcave function over a nonempty compact set occurs at a boundary point of the set and not necessarily an extreme point. The algorithm is proven to converge to a global solution of the nonconvex program. We discuss extensions of the general model and computational experience in solving jointly constrained bilinear programs, for which the algorithm has been implemented.

566 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the literature on tree-based network location problems, focusing on single objective function problems, where the objective is to minimize either a sum of transport costs proportional to network travel distances between existing facilities and closest new facilities, or a maximum of "losses" proportional to such travel distances, or the total number of new facilities to be located.
Abstract: Network location problems occur when new facilities are to be located on a network. The network of interest may be a road network, an air transport network, a river network, or a network of shipping lanes. For a given network location problem, the new facilities are often idealized as points, and may be located anywhere on the network; constraints may be imposed upon the problem so that new facilities are not too far from existing facilities. Usually some objective function is to be minimized. For single objective function problems, typically the objective is to minimize either a sum of transport costs proportional to network travel distances between existing facilities and closest new facilities, or a maximum of "losses" proportional to such travel distances, or the total number of new facilities to be located. There is also a growing interest in multiobjective network location problems. Of the approximately 100 references we list, roughly 60 date from 1978 or later; we focus upon work which deals directly with the network of interest, and which exploits the network structure. The principal structure exploited to date is that of a tree, i.e., a connected network without cycles. Tree-like networks may be encountered when having cycles is very expensive, as with portions of interstate highway systems. Further, simple distribution systems with a single distributor at the "hub" can often be modeled as star-like trees. With trees, "reasonable" functions of distance are often convex, whereas for a cyclic network such functions of distance are usually nonconvex. Convexity explains, to some extent, the tractability of tree network location problems.
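As a concrete (hypothetical) illustration of the simplest single-objective case, the sketch below brute-forces a 1-median on a small star-like tree: one facility is placed at the vertex minimizing demand-weighted travel distance. By a classical result (Hakimi), an optimal 1-median always exists at a vertex, so vertex enumeration suffices; the network and demands here are invented for the example.

```python
# Brute-force 1-median on a small network (a star-like tree, as in the
# "hub" distribution example). Vertices, edge lengths, and demands are
# made-up illustration data, not taken from the survey.
from collections import defaultdict
import heapq

def shortest_dists(adj, src):
    # Dijkstra from src over an undirected weighted adjacency map.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def one_median(edges, demand):
    # Try every vertex as the facility site; return (site, total cost).
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = None
    for cand in list(adj):
        dist = shortest_dists(adj, cand)
        cost = sum(h * dist[x] for x, h in demand.items())
        if best is None or cost < best[1]:
            best = (cand, cost)
    return best

# Star-like tree: hub 'h' with three leaves.
edges = [('h', 'a', 2.0), ('h', 'b', 1.0), ('h', 'c', 3.0)]
demand = {'a': 1.0, 'b': 1.0, 'c': 1.0}
print(one_median(edges, demand))  # the hub minimizes total distance
```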

361 citations


Journal ArticleDOI
TL;DR: The class of one-dimensional stretching functions used in finite-difference calculations is studied in this paper. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis.

282 citations


Journal ArticleDOI
01 Feb 1983
TL;DR: In this paper, the first six coefficients of a function which is inverse to a regular normalized univalent function whose derivative has a positive real part in the unit disk are used to find sharp bounds.
Abstract: Coefficient bounds for functions with a positive real part are used in a rather novel way to find sharp bounds for the first six coefficients of a function which is inverse to a regular normalized univalent function whose derivative has a positive real part in the unit disk.

234 citations


Journal ArticleDOI
TL;DR: The geodesic distance d_X provides rigorous definitions of topological transformations, which can be performed by automatic image analysers with the help of parallel iterative algorithms.

215 citations


Journal ArticleDOI
TL;DR: The application of optimal control theory to life history evolution in species with discrete breeding seasons and overlapping generations is discussed and monocarpy is the optimal strategy.
Abstract: The application of optimal control theory to life history evolution in species with discrete breeding seasons and overlapping generations is discussed. For each age class, the objective functional maximized consists of an integral (total reproduction for that age class) plus a final function (residual reproductive value). A simple example, for which monocarpy is the optimal strategy, is given. The present results complement previous studies (e.g., Schaffer 1979) of life history evolution as a problem in static optimization. The works of Leon (1976), who applied control theory to the case of species with continuous reproduction, and Mirmirani and Oster (1978), who did the same for organisms with annual life histories, are thus extended.

212 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used classical probability theory to derive expressions for the expected (or mean) value of quantities such as the irradiation on inclined surfaces, collector output, and net gain through windows.

208 citations


Book ChapterDOI
TL;DR: In this article, the authors consider the problem of tracking a Brownian motion by a process of bounded variation, in such a way as to minimize total expected cost of both "action" and "deviation from a target state 0".
Abstract: We consider the problem of tracking a Brownian motion by a process of bounded variation, in such a way as to minimize total expected cost of both ‘action' and ‘deviation from a target state 0'. The former is proportional to the amount of control exerted to date, while the latter is being measured by a function which can be viewed, for simplicity, as quadratic. We discuss the discounted, stationary and finite-horizon variants of the problem. The answer to all three questions takes the form of exerting control in a singular manner, in order not to exit from a certain region. Explicit solutions are found for the first and second questions, while the third is reduced to an appropriate optimal stopping problem. This reduction yields properties, as well as global upper and lower bounds, for the associated moving boundary. The pertinent Abelian and ergodic relationships for the corresponding value functions are also derived.

197 citations


Journal ArticleDOI
TL;DR: In this article, the authors design and analyze an algorithm which realizes both asymptotic bounds simultaneously and makes possible a completely general implementation as a Fortran subroutine or even as a six-head finite automaton.

Journal ArticleDOI
01 Nov 1983-Nature
TL;DR: In this paper, a simple classification of linear carbon polytypes with sp-configuration and carbon-carbon chains, either conjugated triple bonded or cumulated double bonded, parallel to the c-axis is proposed.
Abstract: Compounds obtained by high-temperature treatment of graphite are thought to be linear carbon polytypes with sp-configuration and carbon–carbon chains, either conjugated triple bonded or cumulated double bonded, parallel to the c-axis. At least 10 crystallographically documented linear carbon forms (carbynes) have been reported. To relate the contradictory and confusing information found in the literature, a simple classification is suggested here based on a linear relationship between number of atoms in a chain (n) and the unit cell parameters a0 and c0 of all carbyne forms known. It is assumed that chains are kinked, and that the distribution of kinked spacings may be a function of the temperature of formation of the carbyne forms in question.

Journal ArticleDOI
TL;DR: In this paper, a new method for multiloop calculations that provides an algebraic procedure to evaluate the renormalization group functions up to five loops is presented, and a final analytical expression for the five-loop β-function in the ϕ^4 theory is given.

Journal ArticleDOI
TL;DR: In extensive computational tests, a tensor algorithm is significantly more efficient than a similar algorithm based on the standard linear model, both on standard nonsingular test problems and on problems where the Jacobian at the solution is singular.
Abstract: A new class of methods for solving systems of nonlinear equations, called tensor methods, is introduced. Tensor methods are general purpose methods intended especially for problems where the Jacobian matrix at the solution is singular or ill-conditioned. They base each iteration on a quadratic model of the nonlinear function, the standard linear model augmented by a simple second order term. The second order term is selected so that the model interpolates function values from several previous iterations, as well as the current function value and Jacobian. The tensor method requires no more function and derivative information per iteration and hardly more storage or arithmetic per iteration, than a standard method based on Newton’s method. In extensive computational tests, a tensor algorithm is significantly more efficient than a similar algorithm based on the standard linear model, both on standard nonsingular test problems and on problems where the Jacobian at the solution is singular.
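The quadratic local model underlying each iteration can be written compactly; the notation below is the standard presentation of tensor methods and is assumed rather than quoted from the paper:

```latex
% Tensor model of F : R^n -> R^n about the current iterate x_k,
% with step d, Jacobian J(x_k), and third-order tensor term T_k:
M_T(x_k + d) \;=\; F(x_k) \;+\; J(x_k)\,d \;+\; \tfrac{1}{2}\, T_k\, d\, d
% T_k is chosen (low-rank) so that M_T interpolates function values
% from a few previous iterates; T_k = 0 recovers Newton's linear model.
```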


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo method for calculating quantum mechanical time correlation functions is presented, where the time correlation function is calculated at several values along the pure imaginary axis of the complex time plane such that 0 < it < β, where β is the temperature of the system.
Abstract: A Monte Carlo method for calculating quantum mechanical time correlation functions is presented. In this method the time correlation function is calculated at several values along the pure imaginary axis of the complex time plane such that 0 < it < β, where β is the inverse temperature of the system.

Journal ArticleDOI
TL;DR: In this paper, the Katzenelson algorithm is applied to the global piecewise-linear equation in the canonical form, where the only nonlinear elements are two-terminal resistors and controlled sources, each modeled by a one-dimensional piecewise linear function.
Abstract: Any continuous resistive nonlinear circuit can be approximated to any desired accuracy by a global piecewise-linear equation in the canonical form a + Bx + \sum_{i=1}^{p} c_i |\langle \alpha_i, x \rangle - \beta_i| = 0. All conventional circuit analysis methods (nodal, mesh, cut set, loop, hybrid, modified nodal, tableau) are shown to always yield an equation of this form, provided the only nonlinear elements are two-terminal resistors and controlled sources, each modeled by a one-dimensional piecewise-linear function. The well-known Katzenelson algorithm, when applied to this equation, yields an efficient algorithm which requires only minimal computer storage. In the important special case when the canonical equation has a lattice structure (which always occurs in the hybrid analysis), the algorithm is further refined to achieve a dramatic reduction in computation time.
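In one dimension the canonical form reduces to f(x) = a + Bx + Σᵢ cᵢ|αᵢx − βᵢ|, one absolute-value term per breakpoint. A toy sketch (coefficients invented for illustration, not taken from the paper) showing the idea:

```python
# One-dimensional instance of the canonical piecewise-linear form
#   f(x) = a + B*x + sum_i c_i * |alpha_i * x - beta_i|.
# The coefficients below are a toy choice: they reproduce the ramp
# f(x) = max(x, 0), which has a single breakpoint at x = 0.
def canonical_pwl(x, a, B, terms):
    return a + B * x + sum(c * abs(al * x - be) for c, al, be in terms)

# max(x, 0) = 0.5*x + 0.5*|x|  ->  a = 0, B = 0.5, one term (0.5, 1, 0)
ramp = lambda x: canonical_pwl(x, a=0.0, B=0.5, terms=[(0.5, 1.0, 0.0)])
print(ramp(-2.0), ramp(3.0))  # 0.0 3.0
```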

Patent
09 May 1983
TL;DR: Pass transistors as mentioned in this paper are used to reduce the layout complexity of logic circuits by using PASS transistors connected to pass a first and second input function to an output node in response to selected CONTROL signals, thereby to generate a selected output function on the output node.
Abstract: PASS transistors are used to reduce the layout complexity of logic circuits by using PASS transistors connected to pass a first and second input function to an output node in response to selected CONTROL signals, thereby to generate a selected output function on the output node. The PASS transistor comprises a transistor capable of passing an input function in response to a CONTROL signal applied to the transistor thereby to generate an output function related to the input function. In general, the input function comprises less than all of a set of input variables and the CONTROL function comprises one or more of the remainder of the set of input variables.

Journal ArticleDOI
TL;DR: This work presents an algorithm for computing a set of intervals to be used in a forward-difference approximation of the gradient, and shows how certain "standard" choices for the finite-difference interval may lead to poor derivative approximations for badly scaled problems.
Abstract: When minimizing a smooth nonlinear function whose derivatives are not available, a popular approach is to use a gradient method with a finite-difference approximation substituted for the exact gradient. In order for such a method to be effective, it must be possible to compute "good" derivative approximations without requiring a large number of function evaluations. Certain "standard" choices for the finite-difference interval may lead to poor derivative approximations for badly scaled problems. We present an algorithm for computing a set of intervals to be used in a forward-difference approximation of the gradient.
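A minimal sketch of a forward-difference gradient with a per-component interval. The rule h_i = √ε·(1 + |x_i|) is a common textbook default, not the adaptive interval procedure of the paper; it is exactly the kind of "standard" choice the authors note can fail on badly scaled problems.

```python
# Forward-difference gradient with the common default interval choice
# h_i = sqrt(eps) * (1 + |x_i|). This is a generic sketch, not the
# interval-selection algorithm described in the paper.
import math

EPS = 2.0 ** -52  # double-precision unit roundoff

def fd_gradient(f, x):
    g = []
    fx = f(x)
    for i in range(len(x)):
        h = math.sqrt(EPS) * (1.0 + abs(x[i]))
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)  # one extra evaluation per component
    return g

f = lambda x: x[0] ** 2 + 3.0 * x[1]
print(fd_gradient(f, [1.0, 2.0]))  # close to the exact gradient [2, 3]
```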

Journal ArticleDOI
Mike Smith
TL;DR: In this paper, the authors considered the assignment problem when there are junction interactions and gave an objective function which measures the extent to which a traffic distribution departs from equilibrium, and an algorithm which (under certain conditions) calculates equilibria by steadily reducing the objective function to zero.
Abstract: The paper considers the assignment problem when there are junction interactions. We give an objective function which measures the extent to which a traffic distribution departs from equilibrium, and an algorithm which (under certain conditions) calculates equilibria by steadily reducing the objective function to zero. It is shown that the algorithm certainly works if the network cost-flow function is monotone and continuously differentiable, and a boundary condition is satisfied.

Journal ArticleDOI
TL;DR: The interval estimation model proposed by Hicks, Miller, and Kinsbourne (1976) provided a better account of the data than did the storage-size hypothesis of Ornstein (1969).
Abstract: Undergraduate students performed one of three levels of processing on each word (15, 30, or 45) presented during a 120-sec interval. Subjects were told in advance that they would be required to estimate the length of the presentation interval (prospective condition) or were presented with an unexpected estimation task (retrospective condition). In the prospective condition, interval estimates were an inverse function of list length when relatively deep levels of processing were required, but were an increasing function of list length when shallow processing was required. In the retrospective condition, estimates were an increasing function of list length and were unaffected by different levels of processing. The interval estimation model proposed by Hicks, Miller, and Kinsbourne (1976) provided a better account of the data than did the storage-size hypothesis of Ornstein (1969).

Book
01 Jan 1983
TL;DR: This dissertation makes a formal connection between functional recursion and component connectivity that is pleasantly direct, suggesting that applicative notation is the appropriate basis for digital design, and suggests yet another way to implement "function recursion" with "data recursion."
Abstract: The discipline of applicative program design style is adapted to the design of synchronous systems. The result is a powerful and natural methodology for engineering correct hardware implementations. This dissertation makes a formal connection between functional recursion and component connectivity that is pleasantly direct, suggesting that applicative notation is the appropriate basis for digital design. Synthesis is a theory for constructing concretely descriptive realizations from abstractly descriptive specifications. Functional recursion equations serve here as specifications; the realization language uses signal equations to describe circuit connectivity. A semantics for realizations assigns an infinite value sequence to each signal. The register equation X = a ! t, where t is an applicative term, defines signal X to be the output of a register initialized to a. Synchronous systems are characterized by iterative specifications over a single function symbol (simple while-loops). Moreover, the transcription from F(x_1, ..., x_n) ⇐ p → r, F(t_1, ..., t_n) to the canonical realization {X_i = a_i ! t_i} is immediate and correct. Realizations can be compiled from arbitrary iterative specifications by a familiar construction. In non-iterative cases synthesis of iterative form is the main tactic used here for deriving realizations. This subgoal formalizes the conventional technique of decomposing a circuit into an architecture and a finite state controller. This approach suggests yet another way to implement "function recursion" with "data recursion." An interpreter has been implemented for Daisy, a demand-driven list processing language. By representing signals as infinite lists, expressed realizations can be directly interpreted, resulting in a simulation of logical behavior. Thus, the engineering notation serves also as a vehicle for experimentation.
A non-trivial exercise in language-driven design reveals that global design factorization techniques, such as hierarchical decomposition and data abstraction, are inherited from the functional description style. Next, a specialized transformation system is defined to address a typical local refinement problem: serialization of inputs.

Book ChapterDOI
TL;DR: This chapter discusses statistical theories of model selection and traditional problems. The solutions to these two problems, which might be supposed to be essentially the same, in fact diverge in two important respects: (1) the first problem leads clearly to a significance level that is a decreasing function of sample size, whereas (2) the second problem selects a relatively constant significance level.
Abstract: This chapter discusses statistical theories of model selection and traditional problems. One important source of model-selection problems is the existence of a priori opinion that constraints are "likely." Statistical testing is then designed either to determine if a set of constraints is "true" or to determine if a set of constraints is "approximately true." The solutions to these two problems, which might be supposed to be essentially the same, in fact diverge in two important respects: (1) the first problem leads clearly to a significance level that is a decreasing function of sample size, whereas (2) the second problem selects a relatively constant significance level. The first problem has a set of alternative models that is determined entirely from a priori knowledge, whereas the second problem can have a data-dependent set of hypotheses. Quadratic loss functions are discussed in the chapter both with and without fixed costs. Quadratic loss does not imply a model-selection problem. The chapter also discusses problems that are not well known.

Journal ArticleDOI
TL;DR: In this paper, the authors compare different definitions of the Wigner distribution with respect to aliasing and computational complexity and conclude that no definition leads to a function that is optimum in all respects.
Abstract: There is no straightforward way to proceed from the continuous-time Wigner distribution to a discrete-time version of this time-frequency signal representation. A previously given definition of such a function turned out to yield a distribution that was periodic with period π instead of 2π and this caused aliasing to occur. Various alternative definitions are considered and compared with respect to aliasing and computational complexity. From this comparison it appears that no definition leads to a function that is optimum in all respects. This is illustrated by an example.

Journal ArticleDOI
TL;DR: A new procedure for measuring compositional change along gradients, gradient rescaling, and a new unit of beta diversity, the gleason, are proposed.
Abstract: A new procedure for measuring compositional change along gradients is proposed. Given a matrix of species-by-samples and an initial ordering of samples on an axis, the ‘gradient rescaling’ method calculates 1) gradient length (beta diversity), 2) rates of species turnover as a function of position on the gradient, and 3) an ecologically meaningful spacing of samples along the gradient. A new unit of beta diversity, the gleason, is proposed. Gradient rescaling is evaluated with both simulated and field data and is shown to perform well under many ecological conditions. Applications to the study of succession, phenology, and niche relations are briefly discussed.

Journal ArticleDOI
TL;DR: In this article, the probability of the occurrence of a run R is obtained as a function of the composition of R, the number n of trials, and the probabilities of the v ≥ 2 possible outcomes at each trial.
Abstract: The probability of the occurrence of a run R is obtained as a function of the composition of R, the number n of trials, and the probabilities of the v ≥ 2 possible outcomes at each trial. The run R can consist of any specified sequence of outcomes, and the probability that one or more of a given collection of runs occurs is also evaluated. The probabilities of the v possible outcomes can vary arbitrarily from trial to trial, and can be L-order Markov dependent on the L preceding outcomes. The practical application of these results is discussed.
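The quantity being computed can be checked by brute force for small n. The sketch below enumerates all vⁿ outcome sequences for i.i.d. trials and sums the probabilities of sequences containing the run; the paper's closed form also covers trial-varying and Markov-dependent probabilities, which this toy does not:

```python
# Brute-force probability that a specified run R occurs somewhere in
# n i.i.d. trials. Exponential in n -- illustration only.
from itertools import product

def prob_run(run, n, p):
    # p maps each outcome to its per-trial probability (i.i.d. case).
    total = 0.0
    for seq in product(p, repeat=n):
        if any(seq[i:i + len(run)] == tuple(run)
               for i in range(n - len(run) + 1)):
            pr = 1.0
            for s in seq:
                pr *= p[s]
            total += pr
    return total

# Run 'HH' in 3 fair coin tosses: HHH, HHT, THH -> 3/8.
print(prob_run(['H', 'H'], 3, {'H': 0.5, 'T': 0.5}))  # 0.375
```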

Journal ArticleDOI
TL;DR: It is concluded that the mean-variance model can serve as a useful surrogate to at least one popular alternative investment strategy.
Abstract: In this paper, we investigate how closely functions of means and variances can approximate Von Neumann-Morgenstern expected utility modeled as a logarithmic utility-of-wealth function. Using historical security return data, we computed portfolios maximizing expected logarithmic utility and compared them to those maximizing appropriate mean-variance formulations. In all cases the approximations were very good, and in many cases the optimal portfolios were virtually identical. We conclude that the mean-variance model can serve as a useful surrogate to at least one popular alternative investment strategy.
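A toy version of the comparison, with four invented, equally likely return scenarios standing in for the historical data: it maximizes exact expected log utility and the mean-variance surrogate μ − σ²/2 over a grid of two-asset weights and checks that the optima nearly coincide.

```python
# Compare exact E[log(1+r)] with the mean-variance surrogate mu - var/2
# over two-asset portfolios. Scenario returns are invented for the demo.
import math

# Equally likely return scenarios for (stock, bond).
scenarios = [(0.25, 0.02), (-0.05, 0.03), (0.10, 0.01), (-0.15, 0.04)]

def portfolio_returns(w):
    return [w * s + (1 - w) * b for s, b in scenarios]

def exact_log_utility(w):
    return sum(math.log(1 + r) for r in portfolio_returns(w)) / len(scenarios)

def mv_surrogate(w):
    rs = portfolio_returns(w)
    mu = sum(rs) / len(rs)
    var = sum((r - mu) ** 2 for r in rs) / len(rs)
    return mu - var / 2

grid = [i / 100 for i in range(101)]
w_log = max(grid, key=exact_log_utility)
w_mv = max(grid, key=mv_surrogate)
print(w_log, w_mv)  # the two optimal stock weights nearly coincide
```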

PatentDOI
TL;DR: In this article, a new dialogue is post-synchronised with guide track dialogue by using signal processing apparatus in which the analog guide track signal x_1(t) undergoes speech parameter measurement processing in a processor 43 to provide a speech parameter vector A(kT).
Abstract: New dialogue is post-synchronised with guide track dialogue by using signal processing apparatus in which the analog guide track signal x_1(t) undergoes speech parameter measurement processing in a processor 43 to provide a speech parameter vector A(kT). The new dialogue signal x_2(t') is processed to give waveform data which can be stored on disc 25, and a speech parameter vector B(jT) from a parameter extraction processor 42. The variables k and j are data frame numbers, and T is an analysis interval. Some parameters of the vector B are used in a process 48 to classify successive passages of the new dialogue signal into speech and silence, to produce classification data f(jT). The vectors A and B and the classification data are utilized in a time warp processor SBC2 to determine a time-warping function w(kT) giving the values of j in terms of the values of k associated with the corresponding speech features, thereby indicating the amount of expansion or compression of the waveform data of the new dialogue signal needed to align the time-dependent features of the new dialogue signal with the corresponding features of the guide track signal. Editing instructions are generated in signal editor computer SBC1 from the w(kT) data, feature classification data, pitch data p(jT) and the data stream x_2(nD), so that the editing of x_2(nD) can be carried out by the computer SBC1, in which periods of silence or speech are lengthened or shortened to give the desired alignment. The edited data x_2(nD) is converted to analog by a converter unit 29 and low-pass filtered to provide an audio output signal to be recorded as the synchronised new dialogue.
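The time-warp step can be illustrated with classic dynamic time warping: find a monotone alignment between two feature sequences that minimizes cumulative frame distance. This is a generic DTW sketch, not the patent's processor; the 1-D sequences stand in for the parameter vectors A(kT) and B(jT).

```python
# Dynamic time warping between two feature sequences. The warp path
# implicitly gives j as a function of k, i.e. where to expand or
# compress the second signal to align it with the first.
def dtw(a, b):
    INF = float('inf')
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # compress b
                                 cost[i][j - 1],      # expand b
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

guide = [0.0, 1.0, 2.0, 1.0, 0.0]
new = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same shape, locally slower start
print(dtw(guide, new))  # 0.0: alignable by pure time warping
```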

Journal ArticleDOI
TL;DR: In this paper, it was proved that the function f reaches its maximum for n = 6 983 776 800, and that maxn≥2 f(n) = 1.5379.
Abstract: Let It is proved that the function f reaches its maximum for n = 6 983 776 800, and that max_{n ≥ 2} f(n) = 1.5379. The proof deals with superior highly composite numbers introduced by Ramanujan.