
Showing papers on "Constant (mathematics)" published in 1991


Journal ArticleDOI
Mark N. Wegman, F. Kenneth Zadeck
TL;DR: Four algorithms are presented, all conservative in the sense that not every constant may be found, but each constant found is constant over all possible executions of the program.
Abstract: Constant propagation is a well-known global flow analysis problem. The goal of constant propagation is to discover values that are constant on all possible executions of a program and to propagate these constant values as far forward through the program as possible. Expressions whose operands are all constants can be evaluated at compile time and the results propagated further. Using the algorithms presented in this paper can produce smaller and faster compiled programs. The same algorithms can be used for other kinds of analyses (e.g., type determination). We present four algorithms in this paper, all conservative in the sense that not every constant may be found, but each constant found is constant over all possible executions of the program. These algorithms are among the simplest, fastest, and most powerful global constant propagation algorithms known. We also present a new algorithm that performs a form of interprocedural data flow analysis in which aliasing information is gathered in conjunction with constant propagation. Several variants of this algorithm are considered.
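A minimal sketch of the flavor of such an analysis, in hypothetical Python: lattice-style constant propagation over straight-line three-address code, folding expressions whose operands are all constants. The paper's algorithms (sparse, SSA-based, conditional, and interprocedural) are considerably more powerful than this illustration.

```python
# Lattice-based constant propagation over straight-line code (a toy model;
# the paper handles branches, SSA form, and interprocedural aliasing).
BOTTOM = object()  # "not a constant": the value varies between executions

def propagate(instructions):
    """instructions: list of (dest, op, arg1, arg2); args are ints or names."""
    env = {}  # variable -> known int constant, or BOTTOM

    def value(arg):
        if isinstance(arg, int):
            return arg
        return env.get(arg, BOTTOM)  # unknown names are conservatively BOTTOM

    ops = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
    for dest, op, a1, a2 in instructions:
        v1, v2 = value(a1), value(a2)
        if v1 is not BOTTOM and v2 is not BOTTOM:
            env[dest] = ops[op](v1, v2)  # fold at "compile time"
        else:
            env[dest] = BOTTOM           # conservative: may not be constant
    return env

code = [('x', '+', 2, 3), ('y', '*', 'x', 4), ('z', '+', 'y', 'input')]
print(propagate(code))  # x -> 5, y -> 20, z -> BOTTOM
```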

542 citations


Journal ArticleDOI
TL;DR: In this article, the impulse response function of an oscillator with a fractional-derivative damping model is integrated with a new, efficient technique using the inverse Fourier transform; this requires a unique definition of the constitutive equation in the frequency domain.
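A hedged numerical sketch of the approach the TL;DR describes, assuming an oscillator m·x″ + c·Dᵅx + k·x = f(t) whose fractional constitutive law is defined in the frequency domain; the parameter values and grid below are illustrative assumptions, not the article's.

```python
# Impulse response of an oscillator with fractional-derivative damping,
# obtained by sampling the frequency-response function and applying an
# inverse FFT. (1j*w)**alpha is the frequency-domain image of D^alpha.
import numpy as np

m, c, k, alpha = 1.0, 0.4, 25.0, 0.5        # assumed parameters
n, dt = 2**14, 0.005                        # assumed grid resolution
w = 2 * np.pi * np.fft.rfftfreq(n, dt)      # angular frequencies >= 0

H = 1.0 / (k - m * w**2 + c * (1j * w)**alpha)  # frequency response
h = np.fft.irfft(H, n) / dt                     # impulse response h(t)
t = np.arange(n) * dt
print(t[:3], h[:3])
```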

471 citations


Journal ArticleDOI
TL;DR: A novel integral transform that is adapted for signals of this type is introduced and used to derive estimation and classification algorithms that are simple to implement and that exhibit good performance.

Abstract: The measurement of the parameters of complex signals with constant amplitude and polynomial phase, measured in additive noise, is considered. A novel integral transform that is adapted for signals of this type is introduced. This transform is used to derive estimation and classification algorithms that are simple to implement and that exhibit good performance. The algorithms are extended to constant amplitude and continuous nonpolynomial phase signals.
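The hypothetical sketch below illustrates the core trick behind transforms adapted to constant-amplitude polynomial-phase signals: multiplying the signal by a conjugate-delayed copy reduces the phase order by one, so the highest-order coefficient shows up as an FFT peak. This is a simplified second-order (chirp) illustration, not the paper's transform.

```python
# Estimate the quadratic phase coefficient a2 of a noisy chirp.
import numpy as np

fs, n = 1000.0, 4096
t = np.arange(n) / fs
a0, a1, a2 = 0.3, 50.0, 20.0                     # assumed phase coefficients
s = np.exp(2j * np.pi * (a0 + a1 * t + a2 * t**2))
s += 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))   # additive noise

tau = 256                                        # delay in samples
p = s[tau:] * np.conj(s[:-tau])                  # phase is now ~linear in t
spec = np.abs(np.fft.fft(p, 8 * n))              # zero-padded for resolution
f_peak = np.fft.fftfreq(8 * n, 1 / fs)[np.argmax(spec)]

a2_hat = f_peak / (2 * tau / fs)                 # peak sits at 2*a2*(tau/fs)
print(a2_hat)                                    # close to 20.0
```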

338 citations


Journal ArticleDOI
TL;DR: A unified approach is developed for the evolutionary structure of mammalian life histories; it blends together three basic components to predict both the powers and the intercepts of the scaling allometry of life history variables to adult size.
Abstract: A unified approach is developed for the evolutionary structure of mammalian life histories; it blends together three basic components (individual growth or production rate as a function of body size, natural selection on age of maturity, and stable demography) to predict both the powers and the intercepts of the scaling allometry of life history variables to adult size. The theory also predicts the signs (+, -) of the correlations between life history variables when body size is held constant. Finally, the approach allows us to eliminate body size to predict the dimensionless relationships between the life history variables themselves.
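As a purely illustrative aside on method, the "powers" and "intercepts" of a scaling allometry y = a·M^b are conventionally estimated as the slope and intercept of a log-log regression on adult body mass; the sketch below uses synthetic data only to show that mechanical step, not the paper's theory.

```python
# Fit the power and intercept of an assumed allometry y = a * M^b.
import numpy as np

rng = np.random.default_rng(0)
M = np.logspace(1, 5, 40)                             # adult mass (synthetic)
y = 0.7 * M**0.25 * rng.lognormal(0.0, 0.1, M.size)   # assumed true b = 0.25

b, log_a = np.polyfit(np.log(M), np.log(y), 1)
print(f"power b = {b:.3f}, intercept a = {np.exp(log_a):.3f}")
```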

310 citations


Journal ArticleDOI
TL;DR: In this article, a new linear model of the capacitor is proposed, based on old concepts and results that have not been combined before, and it is shown that real capacitors have memory.

Abstract: The first part of this report deals with a new linear model of the capacitor. It is based entirely on old concepts and results that have not been combined before. The model current after a voltage step at t = 0 is proportional to t^(-n), where n is a number less than but close to one. In the frequency domain, losses are distributed over all frequencies. Both of these qualities adhere to real capacitors. The major result can be expressed in the integral equation q(t) = C0 u(t) + Cχ d^(n-1)u(t)/dt^(n-1), t > 0, which replaces the traditional equation q(t) = C u(t). Cχ is a constant and the time/frequency dependence lies in the fractional derivative, eq. (15). Application of these concepts implies negligible changes to the majority of the results obtained hitherto by conventional methods, but several effects that are not explicable by those methods follow from the model. The most important of these is the memory effect, and that particular discussion ends in the conclusion that dielectrics have memory. The memory involves a weight factor, t^(-n), which makes it impossible to replace the memory of past voltages with a single initial value in order to predict the future behaviour of the dielectric. In the last part of the report it is found that the same model applies to the theory of elasticity and to many, if not all, other systems. The conclusion is that matter has memory. It appears that there is a common memory that can be influenced by many means, e.g., electric fields, mechanical stress, magnetic fields, temperature, etc.
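A hedged sketch of the model's signature behaviour: after a unit voltage step the current is i(t) = dq/dt = Cχ D^n u(t), which decays like t^(-n). The Grünwald-Letnikov approximation below is one standard way to evaluate such a fractional derivative numerically; the parameter values are assumptions.

```python
# Step-response current of the fractional capacitor model, via the
# Grunwald-Letnikov approximation of D^n, compared with the analytic
# result D^n 1 = t^(-n) / Gamma(1 - n) for a unit step.
import numpy as np
from math import gamma

n_frac, C_chi = 0.95, 1.0        # n is less than, but close to, one
dt, steps = 1e-3, 4000
u = np.ones(steps)               # unit voltage step at t = 0

# Grunwald-Letnikov weights: w_0 = 1, w_k = w_{k-1} * (k - 1 - n) / k
w = np.ones(steps)
for k in range(1, steps):
    w[k] = w[k - 1] * (k - 1 - n_frac) / k

i = C_chi * dt**(-n_frac) * np.convolve(w, u)[:steps]   # i(t) = C_chi * D^n u
t = (np.arange(steps) + 1) * dt
print(i[100], C_chi * t[100]**(-n_frac) / gamma(1 - n_frac))  # roughly equal
```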

244 citations


Journal ArticleDOI
TL;DR: In this paper, simple characterizations of controllability, observability, their duality, and minimal Kalman state-variable realizations for generalized linear systems are proposed.

233 citations


Journal ArticleDOI
TL;DR: This work constructs a family of functions that convert a polylogarithmic number of truly random bits to n bits that appear random to any family of circuits of polynomial size and depth d, and shows that the complexity class AM is equal to the class of languages recognizable in NP with a random oracle.

Abstract: For every integer d we explicitly construct a family of functions (pseudo-random bit generators) that convert a polylogarithmic number of truly random bits to n bits that appear random to any family of circuits of polynomial size and depth d. The functions we construct are computable by a uniform family of circuits of polynomial size and constant depth. This allows us to simulate randomized constant depth polynomial size circuits in DSPACE(polylog) and in DTIME(2^polylog). As a corollary we show that the complexity class AM is equal to the class of languages recognizable in NP with a random oracle. Our technique may be applied in order to get pseudo-random generators for other complexity classes as well; a further paper [16] explores these issues.
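A toy, assumption-laden illustration of the general shape of such generators (not the paper's construction): stretch a short truly random seed by evaluating a function believed hard for constant-depth circuits (parity is the classic choice) on many small subsets of the seed. A real construction uses a combinatorial design with small pairwise intersections rather than the random subsets used here.

```python
# Toy seed-stretching in the style of parity-based generators.
import random

def toy_generator(seed_bits, n_out, subset_size):
    """Expand len(seed_bits) random bits into n_out pseudo-random bits."""
    rng = random.Random(0)                   # subsets are fixed and public
    out = []
    for _ in range(n_out):
        S = rng.sample(range(len(seed_bits)), subset_size)
        out.append(sum(seed_bits[i] for i in S) % 2)   # parity on subset S
    return out

seed = [random.randrange(2) for _ in range(64)]        # short true-random seed
print(toy_generator(seed, n_out=1024, subset_size=16)[:20])
```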

226 citations


Journal ArticleDOI
TL;DR: In the proposed methods, one does not need to calculate the scale factor during the computation, and can make a more efficient sine and cosine generator than that based on the previous redundant CORDIC.
Abstract: Proposes two redundant CORDIC (coordinate rotation digital computer) methods with a constant scale factor for sine and cosine computation, called the double rotation method and the correcting rotation method. In both methods, the CORDIC is accelerated by the use of a redundant binary number representation, as in the previously proposed redundant CORDIC. In the proposed methods, since the number of rotation-extensions performed for each angle is a constant, the scale factor is a constant independent of the operand. Hence, one does not need to calculate the scale factor during the computation, and can make a more efficient sine and cosine generator than that based on the previous redundant CORDIC.
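For contrast, the textbook non-redundant CORDIC below already has a constant, operand-independent scale factor, because every iteration rotates by ±atan(2⁻ⁱ); the paper's double rotation and correcting rotation methods preserve this property while computing with a redundant binary representation. This sketch is the conventional algorithm, not the proposed methods.

```python
# Conventional CORDIC sine/cosine with a constant scale factor K.
import math

def cordic_sin_cos(theta, iters=32):
    """Valid for |theta| <= sum(atan(2^-i)) ~ 1.74 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    K = 1.0
    for i in range(iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # operand-independent constant
    x, y, z = K, 0.0, theta                     # start pre-scaled by K
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0           # always rotate, by +1 or -1
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                                 # (sin(theta), cos(theta))

print(cordic_sin_cos(0.6), (math.sin(0.6), math.cos(0.6)))
```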

221 citations


Proceedings ArticleDOI
11 Dec 1991
TL;DR: In this article, the authors consider four control-related problems, all of which involve reformulation into linear matrix inequalities (LMIs), and propose a partial theory for optimal performance in systems which depend on several independent variables.
Abstract: The authors consider four control-related problems, all of which involve reformulation into linear matrix inequalities (LMIs). The problems are: structured singular value (μ) upper bound synthesis for constant matrix problems; the robust state-feedback problem with quadratic stability criteria for uncertain systems; optimal, constant, block-diagonal similarity scaling for the full-information and state-feedback H∞ problems; and a partial theory for optimal performance in systems which depend on several independent variables.
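A minimal sketch of the common shape of such reformulations, assuming the cvxpy package is available: quadratic stability of dx/dt = Ax is certified by finding P ≻ 0 with AᵀP + PA ≺ 0, posed as an LMI feasibility problem. The paper's four problems lead to LMIs of this general kind, not to this particular one.

```python
# Certify quadratic stability via an LMI feasibility problem.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # example stable system
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)
```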

216 citations


Proceedings ArticleDOI
11 Dec 1991
TL;DR: In this article, a method of compensating for friction in control systems is presented, which involves the use of an observer to estimate the friction which is modeled as a constant times the sign of velocity.
Abstract: A method of compensating for friction in control systems is presented. The method entails the use of an observer to estimate the friction which is modeled as a constant times the sign of velocity. The purpose of the observer is to estimate this constant. The observer model is selected to ensure that the error in estimation of the friction constant converges asymptotically to zero. Simulation results verify the theory and show that the method can significantly improve the performance of a control system in which it is used. Although based on the assumption of a constant friction magnitude, the observer displays the ability to track friction whose magnitude depends on velocity.
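A hedged simulation sketch of the idea: friction is modeled as c·sign(v), a velocity observer runs alongside the plant, and the estimate c_hat is adapted so the observer error decays. The gains and adaptation law below are illustrative assumptions, not the paper's exact observer design.

```python
# Estimate a constant friction magnitude with an observer-style adaptation.
import numpy as np

m, c_true = 1.0, 0.8                 # mass and true friction constant
L, gain = 20.0, 50.0                 # observer and adaptation gains (assumed)
dt, steps = 1e-3, 20000
v, v_hat, c_hat = 0.0, 0.0, 0.0

for k in range(steps):
    u = 2.0 * np.sin(2 * np.pi * 0.5 * k * dt)      # exciting input
    s = np.sign(v)
    v += dt * (u - c_true * s) / m                  # plant with friction
    e = v - v_hat                                   # observer error
    v_hat += dt * ((u - c_hat * s) / m + L * e)     # velocity observer
    c_hat += dt * (-gain * e * s)                   # adapt friction estimate

print(c_hat)   # should settle near c_true = 0.8
```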

189 citations


Journal ArticleDOI
TL;DR: It is rigorously established that the sequence of weight estimates can be approximated by a certain ordinary differential equation, in the sense of weak convergence of random processes as ε tends to zero.

Abstract: The behavior of neural network learning algorithms with a small, constant learning rate, ε, in stationary, random input environments is investigated. It is rigorously established that the sequence of weight estimates can be approximated by a certain ordinary differential equation, in the sense of weak convergence of random processes as ε tends to zero. As applications, backpropagation in feedforward architectures and some feature extraction algorithms are studied in more detail.
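A small sketch of the statement for the simplest case, the LMS rule for scalar linear regression: with a small constant rate ε, the stochastic weight sequence tracks the Euler discretization of the mean ODE dw/dt = E[H(w, x)]. Everything below is an illustrative assumption, not the paper's general setting.

```python
# Compare a constant-step stochastic update against its mean ODE.
import numpy as np

rng = np.random.default_rng(1)
w_star, eps, steps = 2.0, 0.01, 2000
w_sgd, w_ode = 0.0, 0.0

for k in range(steps):
    x = rng.normal()                           # stationary random input
    y = w_star * x + 0.1 * rng.normal()
    w_sgd += eps * (y - w_sgd * x) * x         # stochastic update H(w, x)
    w_ode += eps * (w_star - w_ode)            # mean field: E[H], with E[x^2]=1

print(w_sgd, w_ode)   # both near w* = 2.0, and near each other
```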

Proceedings ArticleDOI
01 Sep 1991
TL;DR: It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms: a running time of O(lg* n) (nearly constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems.

Abstract: It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms. Specifically, a running time of O(lg* n) (nearly constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems. Also given is a constant time algorithm which, using n processors, approximates the sum of n positive numbers to within an error which is smaller than the sum by an order of magnitude. A variety of known and new techniques are used. New techniques, which are of independent interest, include estimation of the size of a set in constant time for several settings, and ways for deriving superfast optimal algorithms from superfast nonoptimal ones.
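One ingredient mentioned above, estimating the size of a set, has a simple sequential stand-in via random sampling; the sketch below conveys only that statistical idea, since the actual O(lg* n) PRAM algorithms need a parallel machine model to express.

```python
# Estimate the number of marked items from a small random sample.
import random

def estimate_marked(items, sample_size=1000):
    hits = sum(random.choice(items) for _ in range(sample_size))
    return len(items) * hits / sample_size

items = [random.random() < 0.3 for _ in range(1_000_000)]
print(estimate_marked(items), sum(items))   # close, with high probability
```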


Proceedings ArticleDOI
01 Sep 1991
TL;DR: It turns out that, for any constant depth d, polynomial size sigmoid threshold circuits with polynomially bounded weights compute exactly the same Boolean functions as the corresponding circuits with Boolean threshold gates.
Abstract: The power of constant depth circuits with sigmoid (i.e., smooth) threshold gates for computing Boolean functions is examined. It is shown that, for depth 2, constant size circuits of this type are strictly more powerful than constant size Boolean threshold circuits (i.e., circuits with Boolean threshold gates). On the other hand it turns out that, for any constant depth d, polynomial size sigmoid threshold circuits with polynomially bounded weights compute exactly the same Boolean functions as the corresponding circuits with Boolean threshold gates.

Journal ArticleDOI
TL;DR: Analytic models are developed and used in comparing fixed route conventional bus and flexible route subscription bus systems for providing feeder services to a single point; the relative advantage of subscription bus services generally increases with smaller service areas, higher speeds, lower fixed bus cost, lower incremental costs of vehicle size, higher values of access and wait time, and lower values of in-vehicle time.
Abstract: Analytic models are developed and used in comparing fixed route conventional bus and flexible route subscription bus systems for providing feeder services to a single point (such as a transportation terminal). In the mathematical models for these two systems, vehicle size and service zone size are the optimizable decision variables, and total system cost, including operator and user costs, is the objective function to be minimized. Average cost per trip is the criterion used to determine which system is preferable and how it should be optimized in various circumstances. With an approximation for subscription services, closed form solutions for optimal vehicle size and service zone area are obtained for both alternatives. The analytic results for optimized conventional services indicate that constant ratios should be maintained at all demand densities between vehicle size and route spacing and between fleet size and user wait time. Analytic solutions are extended to conditions where (1) demand and costs ar...

Journal ArticleDOI
TL;DR: The galvanotaxis response of neural crest cells that had migrated out of the neural tube of a 56-hr-old quail embryo onto glass coverslips was observed using time-lapse video microscopy and a rather simple mathematical equation is generated that fits these data.
Abstract: The galvanotaxis response of neural crest cells that had migrated out of the neural tube of a 56-hr-old quail embryo onto glass coverslips was observed using time-lapse video microscopy. These cells exhibit a track velocity of about 7 microns/min and actively translocate toward the negative pole of an imposed DC electric field. This nonrandom migration could be detected for fields as low as 7 mV/mm (0.4 mV/cell length). We find that this directional migration is independent of the speed of migration and have generated a rather simple mathematical equation that fits these data. We find that the number of cells that translocate at a given angle, φ, with respect to the field is given by the equation N(φ) = exp(a0 + a1 cos φ), where a1 is linearly proportional to the electric field strength for fields less than 390 mV/mm, with a constant of proportionality equal to K_G, the galvanotaxis constant. We show that K_G = (150 mV/mm)^(-1), and at this field strength the cellular response is approximately half maximal. This approach to cellular translocation data analysis is generalizable to other directed movements such as chemotaxis and allows the direct comparison of different types of directed movements. This analysis requires that the response of every cell, rather than averages of cellular responses, is reported. Once an equation for N(φ) is derived, several characteristics of the cellular response can be determined. Specifically, we describe 1) the critical field strength (390 mV/mm) below which the cellular response exhibits a simple, linear dependence on field strength (for larger field strengths, an inhibitory constant can be used to fit the data, suggesting that larger field strengths influence a second cellular target that inhibits the first); and 2) the amount of information the cell must obtain in order to generate the observed asymmetry in the translocation distribution (for a field strength of 100 mV/mm, 0.3 bits of information is required).
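A hedged sketch of how the abstract's model can be fit in practice: bin the translocation angles, then fit N(φ) = exp(a0 + a1 cos φ) to the counts, reading off a1 (and hence K_G = a1/E for sub-critical field strengths E). The counts below are synthetic.

```python
# Fit N(phi) = exp(a0 + a1*cos(phi)) to binned cell counts.
import numpy as np
from scipy.optimize import curve_fit

phi = np.linspace(-np.pi, np.pi, 18, endpoint=False)   # bin centers (rad)
a0_true, a1_true = 3.0, 0.8
counts = np.random.poisson(np.exp(a0_true + a1_true * np.cos(phi)))

def model(p, a0, a1):
    return np.exp(a0 + a1 * np.cos(p))

(a0_hat, a1_hat), _ = curve_fit(model, phi, counts, p0=(1.0, 0.1))
print(a0_hat, a1_hat)   # near 3.0 and 0.8
```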

Posted Content
TL;DR: In this article, it is shown that the appropriate indicator of expected inflation can make use of the entire length of the yield curve, in particular by estimating the steepness of a specific nonlinear transformation of the curve, rather than being restricted to a spread between two points.
Abstract: It is often suggested that the slope of the term structure of interest rates contains information about the expected future path of inflation. Mishkin (1990) has recently shown that the spread between the 12-month and 3-month interest rates helps to predict the difference between the 12-month and 3-month inflation rates. His approach, however, lacks a theoretical foundation, other than the (rejected) hypothesis that the real interest rate is constant. This paper applies a simple existing theoretical framework, which allows the real interest rate to vary in the short run but converge to a constant in the long run, to the problem of predicting the inflation spread. It is shown that the appropriate indicator of expected inflation can make use of the entire length of the yield curve, in particular by estimating the steepness of a specific nonlinear transformation of the curve, rather than being restricted to a spread between two points. The resulting indicator, besides having a firmer theoretical foundation, does a relatively good job of predicting the inflation rate over the period 1960 to 1988.

Journal ArticleDOI
TL;DR: In this article, it was shown that there exists a unique solution to the time-harmonic Maxwell equations in R³ having the form of refracted waves for x₃ ≫ 1 and of transmitted waves for −x₃ ≫ 1 if and only if there exists a unique solution to a certain system of two coupled Fredholm equations.

Abstract: Consider a diffraction of a beam of particles in R³ when the dielectric coefficient is a constant ε₁ above a surface S and a constant ε₂ below the surface S, and the magnetic permeability is constant throughout R³. S is assumed to be periodic in the x₁ direction and of the form x₁ = f₁(s), x₃ = f₃(s), x₂ arbitrary. We prove that there exists a unique solution to the time-harmonic Maxwell equations in R³ having the form of refracted waves for x₃ ≫ 1 and of transmitted waves for −x₃ ≫ 1 if and only if there exists a unique solution to a certain system of two coupled Fredholm equations. Thus, in particular, for all the ε's, except for a discrete number, there exists a unique solution to the Maxwell equations. INTRODUCTION In this paper we consider the Maxwell equations for time-harmonic solutions in the entire space R³ with a piecewise constant dielectric coefficient having a jump across a periodic surface. The magnetic permeability μ is assumed to be constant, whereas the dielectric coefficient ε is given by ε = ε₁ above a surface S: x₃ = f(x₁) and ε = ε₂ below the surface S; ε₁ and ε₂ are different constants. If S is a half-space {x₃ = 0} then the solution E₀, H₀ can be computed explicitly. We assume in this paper that S is periodic, i.e., f(x₁ + L) = f(x₁) for all x₁ ∈ R (L > 0). We wish to find a solution E, H such that E − E₀ and H − H₀ are superpositions of "transmitted" waves (0.1) in {|x₃| > A}, where A > max|f|. In §§1-7 we assume that f ∈ C² and we reduce the solution of the Maxwell equations to a Fredholm system of four integral equations; in §8 we reduce it further to a Fredholm system of two integral equations. Thus, for all but a discrete sequence of values of the physical parameters there exists a unique solution to the integral equations, yielding a solution of the Maxwell equations.

Journal ArticleDOI
TL;DR: In this article, the authors study the problem of determining which functions can be recovered, up to an additive constant, from the knowledge of their subgradients, a problem that is not very well understood.

Abstract: In nonsmooth analysis and optimization, subgradients come in many different flavors, e.g. approximate, Dini, proximal, (Clarke) generalized; see [2, 5, 6, 12, 14, 19]. These subgradients are important and valuable tools. However, many questions remain unsolved concerning the exact link between the function and its subgradients. For instance, can two functions, not differing by an additive constant, have the same subgradients? In this paper, we study the fundamental problem of determining functions that can be recovered, up to an additive constant, from the knowledge of their subgradients. This "integration" problem is not very well understood, and very few functions or classes of functions are known to be recoverable from their subgradients. In Section 4, we show that if the "basic constraint qualification" holds, then the composition of a closed (i.e. lower semicontinuous) proper convex function with a twice continuously differentiable mapping is determined up to an additive constant by its generalized subgradients (actually in this case all above-mentioned subgradients are the same). Beside the obvious theoretical interest of this integration problem, it is our hope (or perhaps our long-term goal) that once this problem is better understood, we can then tackle the question of uniqueness of solutions to generalized differential equations involving subgradients in place of partial derivatives. An example of such an equation that well deserves study is the extended Hamilton-Jacobi equation used in optimal control; see Clarke [2]. Let us also mention that a problem similar to the integration problem is the one of determining the set-valued mappings that are in fact subgradient set-valued mappings (uniqueness is not mandatory); for a contribution to this problem see Janin [8]. Before we look at some of the known cases, where the function can be recovered from its subgradients, let us look at some negative examples. It is clear that not every function can be recovered, up to an additive constant, from its subgradients. We only need to look at the following two functions: f(x) = 0 for x ≤ 0 and 1 for x > 0, and g(x) = 0 for x ≤ 0 and 2 for x > 0.

Journal ArticleDOI
TL;DR: Wirth's proposal for type extensions includes an algorithm for determining whether a given value belongs to an extension of a given type; in the worst case, this algorithm takes time proportional to the depth of the type-extension hierarchy.

Abstract: Wirth's proposal for type extensions includes an algorithm for determining whether a given value belongs to an extension of a given type. In the worst case, this algorithm takes time proportional to the depth of the type-extension hierarchy. Wirth describes the loop in this algorithm as "unavoidable," but in fact the test can be performed in constant time by associating a "display" of base types with each type descriptor.
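A sketch of the display technique in Python (the original is formulated for Oberon-style type descriptors): each descriptor stores the array of its base types indexed by depth, so the membership test is a bounds check plus one array lookup instead of a loop up the base-type chain.

```python
# Constant-time "is an extension of" test via a display of base types.
class TypeDesc:
    def __init__(self, name, base=None):
        self.name = name
        # display[i] is this type's ancestor at depth i; self sits at the end.
        self.display = (base.display if base else []) + [self]
        self.depth = len(self.display) - 1

def is_extension_of(value_type, t):
    """True iff value_type equals t or extends it; no loop, O(1)."""
    return (value_type.depth >= t.depth
            and value_type.display[t.depth] is t)

Shape = TypeDesc('Shape')
Circle = TypeDesc('Circle', Shape)
Disc = TypeDesc('Disc', Circle)
print(is_extension_of(Disc, Shape), is_extension_of(Shape, Circle))  # True False
```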

Journal ArticleDOI
TL;DR: In this article, it was shown that the error of polynomial interpolation in n equally spaced points can be reduced to 0 as n → ∞ if and only if α is small enough to provide at least six points per wavelength.

Journal ArticleDOI
TL;DR: In this paper, color Doppler flow mapping was used to image the flow in an in vitro left heart pulse duplicator system, and the effect on the displayed flow areas of changing one instrument setting, such as pulse repetition frequency, color filter, or frame rate, while keeping all other settings constant was noted.

Abstract: To evaluate the factors influencing flow information displayed by the power operating mode, color Doppler flow mapping was used to image the flow in an in vitro left heart pulse duplicator system. The effect on the displayed flow areas of changing one instrument setting, such as pulse repetition frequency, color filter, or frame rate, while keeping all other instrument settings constant was noted. The flow areas derived using the power and velocity modes were also compared. An increase in pulse repetition frequency or color filter decreased the flow areas significantly, and a flow area increase occurred with a decrease in frame rate. No significant difference was observed between the flow areas derived using the two operating modes. Like the velocity mode, the power mode display is influenced by instrument settings. Although low velocity flows are better delineated using this mode, no significant difference occurred in the flow areas measured by this mode and the velocity mode. Further studies need to be conducted to address its potential applications in the clinical setting and in quantitation.

Journal ArticleDOI
TL;DR: In this article, a subset of the states that satisfy the uncertainty relation as an equality are constructed by applying a squeeze operator to a state that consists of a Hermite polynomial, whose argument is the mode creation operator multiplied by a constant, acting on the vacuum.
Abstract: The real and imaginary parts of the square of the field amplitude are the variables that describe amplitude-squared squeezing. These quantities obey an uncertainty relation. Here we find a particularly simple subset of the states that satisfy the uncertainty relation as an equality. These states are constructed by applying a squeeze operator to a state that consists of a Hermite polynomial, whose argument is the mode creation operator multiplied by a constant, acting on the vacuum. The squeezed vacuum is such a state. These states may or may not be squeezed in the normal sense, and may or may not have sub-Poissonian photon statistics.

Journal Article
TL;DR: In this paper, a control law based on the inversion of an input-output map is obtained for a two-link flexible robotic arm and a linear stabilizer is designed for final capture of the terminal state and stabilization of the elastic modes.
Abstract: The question of control of the end effector trajectory and stabilization of a two-link flexible robotic arm is considered. A control law based on the inversion of an input-output map is obtained. The outputs are chosen as the sum of the joint angle and tip elastic deformation times a constant factor for each link. The stable maneuver of the arm critically depends on the stability of the zero dynamics of the system. Stability of the zero dynamics is shown to be sensitive to the choice of the constant multiplying factor, which explains the difficulty in controlling the tip position. A critical value of the constant factor for control is obtained and this corresponds to a coordinate in the neighbourhood of the actual tip position. Although the inverse controller accomplishes output control, this excites the rigid and elastic modes. A linear stabilizer is designed for final capture of the terminal state and stabilization of the elastic modes. Simulation results are presented to show that in the closed-loop system, large maneuvers can be performed in the presence of payload uncertainty.


Journal ArticleDOI
Jan Beran
TL;DR: In this paper, the behavior of M estimators of the location parameter for stochastic processes with long-range dependence was investigated, and it was shown that up to a constant, all estimators are asymptotically equivalent to the arithmetic mean.
Abstract: We investigate the behavior of M estimators of the location parameter for stochastic processes with long-range dependence. The processes considered are Gaussian or one-dimensional transformations of Gaussian processes. It turns out that, up to a constant, all M estimators are asymptotically equivalent to the arithmetic mean. For Gaussian processes this constant is always equal to one, independently of the ψ function. In view of the case of iid observations, the results are surprising. They are related to earlier work by Gastwirth and Rubin. Some simulations illustrate the results.

Journal ArticleDOI
TL;DR: In this paper, the authors extended the results of Malcomson on the optimal replacement problem in vintage models and provided numerical evidence that the optimal machine life is constant under certain conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors obtained explicit closed-form expressions for the one-loop effective action in a constant electromagnetic field and discussed both massive and massless charged scalars and spinors in two, three and four dimensions.
Abstract: Motivated by the seminal work of Schwinger, we obtain explicit closed-form expressions for the one-loop effective action in a constant electromagnetic field. We discuss both massive and massless charged scalars and spinors in two, three and four dimensions. Both strong-field and weak-field limits are calculable. The latter limit results in an asymptotic expansion whose first term reproduces the Euler-Heisenberg effective Lagrangian. We use the prescription of zeta-function renormalization, and indicate its relationship to Schwinger's renormalized effective action.
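For reference, the first term of that weak-field asymptotic expansion is the standard Euler-Heisenberg Lagrangian, which in natural Heaviside-Lorentz units (ħ = c = 1) for a spinor of mass m reads:

```latex
\mathcal{L}_{\mathrm{EH}}
  = \frac{2\alpha^{2}}{45\,m^{4}}
    \left[ \left( \mathbf{E}^{2} - \mathbf{B}^{2} \right)^{2}
         + 7 \left( \mathbf{E} \cdot \mathbf{B} \right)^{2} \right]
```

(this is the textbook form of the leading correction, quoted here for orientation rather than taken from this paper's expansion).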

Journal ArticleDOI
TL;DR: This paper attempts to demonstrate, for a limited sample of highly fecund marine organisms, that a simple, two-parameter reciprocal function of age is a more reasonable, and mathematically parsimonious, description of the form of the life history M-vector between egg and larval stages and the mean parental age.
Abstract: The ‘constant M axiom’ refers to the often automatic assumption made in fish stock assessment, that the instantaneous natural mortality rate, expressed on an annual basis, is constant throughout the exploited phase of the life history. This uncritical assumption or axiom has come under criticism in recent years, best documented by field determinations of predation rates on juvenile fish coordinated in the North Sea through the International Council for the Exploration of the Sea (ICES). These, and earlier studies, have yielded empirical vectors of predation-related M for juveniles of key species, and in some cases, also for egg and larval stages. To date, perhaps the mathematical function with the widest acceptance for modelling M-at-age in the animal kingdom has been the Weibull distribution. This paper attempts to demonstrate, for a limited sample of highly fecund marine organisms, that a simple, two-parameter reciprocal function of age is a more reasonable, and mathematically parsimonious, description of the form of the life history M-vector between egg and larval stages and the mean parental age (MPA). Mortality rates may increase later in life, especially for semelparous species, and in this case would need to be described by an additional or more complex function, beyond the scope of this paper. Seasonal and inter-annual variations in M-at-age, though rarely documented, also occur; in the latter case, when associated with density dependence, they give rise to the stock-recruit relationships described in the literature, and lead to annual divergences from mean expectation of death, especially for the earliest life-history stages. Contrary to preconception, however, there seems no overwhelming evidence from egg and larval studies for the consistent action of a critical period: the very high egg and larval M-at-age values predicted by the reciprocal function, though continuous in time, effectively correspond to catastrophic early mortality rates, but provide a mean expectation of death with age that converges smoothly on a constant M value at greater ages. The use of any steeply declining mortality function of age, in simulating survivorship over a time period T between spawning and maturity of the succeeding generation, reveals a serious difficulty in defining appropriate time units. It is concluded that if the reciprocal mortality function applies, then simulating constant survivors at time T independently of the number of smaller time units (Δt) that T is divided into, is effectively accomplished by using progressively longer time intervals with age t, such that Δt/t is constant for all intervals. This is referred to as ‘proportional time allocation’. Calculations assuming population stability, and using indicative values for mean lifetime fecundity (MLF) and mean parental age (MPA), are presented for a generalized demersal and a generalized small pelagic fish, employing the reciprocal function with proportional time allocation. Sets of values for the two parameters of the reciprocal function were not rejected, even though non-unique, if they reduced the MLF to two individuals by MPA, and a unique vector of M-at-age presumably could be determined if the ‘constant adult M’ is already known. The new model has the advantage of being compatible with the ‘constant adult M’ hypothesis over periods of roughly 4–5 and 1+ years, respectively, for generalized northern demersals and small pelagics, over the ages just preceding MPA. 
The possibility of extension of this model to length-based methods, and its analogies with crustacean moulting models, are discussed.
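A hedged numerical sketch, assuming the reciprocal form M(t) = a + b/t (the abstract does not spell out the formula): survivorship follows from N(T) = N0·exp(−∫M dt), and "proportional time allocation" is implemented by stepping on a geometric (log-spaced) age grid so that Δt/t is the same for every interval. All parameter values are illustrative.

```python
# Survivorship under a reciprocal M-at-age vector on a log-spaced age grid.
import numpy as np

a, b = 0.2, 1.42           # assumed adult-M and early-mortality parameters
MLF = 1.0e6                # assumed mean lifetime fecundity (eggs per parent)
t0, T = 1e-3, 5.0          # first age and mean parental age, in years

ages = np.geomspace(t0, T, 400)         # dt/t is constant on this grid
M = a + b / ages                        # reciprocal M-at-age vector
survival = np.exp(-np.trapz(M, ages))   # exp(-integral of M dt)
print(MLF * survival)      # ~2: parameters tuned so a pair just replaces itself
```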