
Showing papers on "Constant (mathematics) published in 2005"


Journal ArticleDOI
TL;DR: In this paper, the authors consider the Cauchy problem for a strictly hyperbolic, n × n system in one space dimension, and show that the solutions of the viscous approximations u_t + A(u)u_x = εu_xx are defined globally in time and satisfy uniform BV estimates, independent of ε.
Abstract: We consider the Cauchy problem for a strictly hyperbolic, n × n system in one space dimension: u_t + A(u)u_x = 0, assuming that the initial data have small total variation. We show that the solutions of the viscous approximations u_t + A(u)u_x = εu_xx are defined globally in time and satisfy uniform BV estimates, independent of ε. Moreover, they depend continuously on the initial data in the L¹ distance, with a Lipschitz constant independent of t and ε. Letting ε → 0, these viscous solutions converge to a unique limit, depending Lipschitz continuously on the initial data. In the conservative case where A = Df is the Jacobian

437 citations
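As a concrete illustration of the uniform BV property (a minimal sketch, not the paper's n × n construction): for the scalar viscous Burgers equation u_t + u·u_x = εu_xx, a monotone upwind/explicit-diffusion scheme keeps the total variation of the solution bounded by that of the initial data, for every viscosity ε in a stable range. Grid sizes and ε values below are illustrative choices.

```python
# Sketch: scalar viscous Burgers u_t + u u_x = eps * u_xx on [0, 1],
# upwind convection (valid since u >= 0) plus explicit diffusion.
# The total variation stays bounded uniformly in eps.

def total_variation(u):
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def viscous_burgers(eps, nx=100, dt=5e-4, steps=400):
    dx = 1.0 / nx
    # step-like initial data with small total variation (TV = 0.5)
    u = [0.5 if i < nx // 2 else 0.0 for i in range(nx)]
    for _ in range(steps):
        un = u[:]
        for i in range(1, nx - 1):
            conv = un[i] * (un[i] - un[i - 1]) / dx            # upwind flux
            diff = eps * (un[i + 1] - 2 * un[i] + un[i - 1]) / dx**2
            u[i] = un[i] + dt * (diff - conv)
    return u

tv0 = 0.5
for eps in (0.01, 0.02, 0.05):
    u = viscous_burgers(eps)
    assert total_variation(u) <= tv0 + 1e-9   # uniform BV bound, independent of eps
```

The time step satisfies the TVD condition dt·(|u|/dx + 2ε/dx²) ≤ 1 for all three viscosities, which is what makes the scheme monotone and the bound hold.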


Proceedings ArticleDOI
06 Jun 2005
TL;DR: A near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension is presented and this data-structure is applied to obtain improved algorithms for the following problems: Approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the Lipschitz constant of a function.
Abstract: We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data-structure is then applied to obtain improved algorithms for the following problems: Approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near-linear and the space being used is linear.

233 citations
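The basic primitive behind hierarchical nets can be shown in a few lines (an illustrative sketch, not the paper's near-linear algorithm): an r-net is a subset N of the points such that net points are pairwise more than r apart (packing) and every point lies within r of some net point (covering). A greedy pass builds one directly:

```python
import math, random

# Greedy r-net in a finite metric space (here: Euclidean plane).
# Runs in O(n * |N|) time; the paper's contribution is building a whole
# hierarchy of such nets in near-linear time for doubling metrics.

def greedy_r_net(points, r):
    net = []
    for p in points:
        if all(math.dist(p, q) > r for q in net):
            net.append(p)
    return net

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
net = greedy_r_net(pts, 0.2)

# covering: every point has a net point within r
assert all(any(math.dist(p, q) <= 0.2 for q in net) for p in pts)
# packing: net points are pairwise more than r apart
assert all(math.dist(net[i], net[j]) > 0.2
           for i in range(len(net)) for j in range(i + 1, len(net)))
```

In a metric of constant doubling dimension, the packing property bounds how many net points can crowd into any ball, which is what makes the hierarchical data structure small.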


Journal ArticleDOI
TL;DR: It is proved that if it is a-priori known that the conductivity is piecewise constant with a bounded number of unknown values, then a Lipschitz stability estimate holds.

231 citations


Journal ArticleDOI
TL;DR: Improving the understanding of molecular evolution will be an important next step towards evaluating and improving molecular dating methods.
Abstract: Molecular-dating techniques potentially enable us to estimate the time of origin of any biological lineage. Such techniques were originally premised on the assumption of a 'molecular clock'; that is, the assumption that genetic change accumulated steadily over time. However, it is becoming increasingly clear that constant rates of molecular evolution might be the exception rather than the rule. Recently, new methods have appeared that enable the incorporation of variable rates into molecular dating. Direct comparisons between these methods are difficult, because they differ in so many respects. However, the assumptions about rate change on which they rely fall into a few broad categories. Improving our understanding of molecular evolution will be an important next step towards evaluating and improving these methods.

221 citations


Patent
09 Nov 2005
TL;DR: In this article, one or more microprocessors are programmed to execute methods for improving the performance of an analyte monitoring device including prediction of glucose levels in a subject by utilizing a predicted slower-time constant (1/k 2 ).
Abstract: The present invention comprises one or more microprocessors programmed to execute methods for improving the performance of an analyte monitoring device including prediction of glucose levels in a subject by utilizing a predicted slower-time constant (1/k 2 ). In another aspect of the invention, pre-exponential terms (1/c 2 ) can be used to provide a correction for signal decay (e.g., a Gain Factor). In other aspects, the present invention relates to one or more microprocessors comprising programming to control execution of (i) methods for conditional screening of data points to reduce skipped measurements, (ii) methods for qualifying interpolated/extrapolated analyte measurement values, (iii) various integration methods to obtain maximum integrals of analyte-related signals, as well as analyte monitoring devices comprising such microprocessors. Further, the present invention relates to algorithms for improved optimization of parameters for use in prediction models that require optimization of adjustable parameters.

216 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the one-dimensional dynamic response of an infinite bar composed of a linear "microelastic material" and examined the effects of long-range forces.
Abstract: The one-dimensional dynamic response of an infinite bar composed of a linear “microelastic material” is examined. The principal physical characteristic of this constitutive model is that it accounts for the effects of long-range forces. The general theory that describes our setting, including the accompanying equation of motion, was developed independently by Kunin (Elastic Media with Microstructure I, 1982), Rogula (Nonlocal Theory of Material Media, 1982) and Silling (J. Mech. Phys. Solids 48 (2000) 175), and is called the peridynamic theory. The general initial-value problem is solved and the motion is found to be dispersive as a consequence of the long-range forces. The result converges, in the limit of short-range forces, to the classical result for a linearly elastic medium. Explicit solutions in elementary form are given in a broad class of special cases. The most striking observations arise in the Riemann-like problem corresponding to a constant initial displacement field and a piecewise constant initial velocity field. Even though, initially, the displacement field is continuous, it involves a jump discontinuity for all later times, the Lagrangian location of which remains stationary. For some materials the magnitude of the discontinuity-jump oscillates about an average value, while for others it grows monotonically, presumably fracturing the material when it exceeds some critical level.

215 citations
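The dispersive character described above can be sketched numerically (assumptions: a triangular micromodulus C(ξ) and unit horizon, which are illustrative choices, not the paper's): in 1-D peridynamics the dispersion relation takes the form ω(k)² = (1/ρ)·∫ C(ξ)(1 − cos kξ) dξ, so the phase velocity ω/k decreases with wavenumber, unlike the constant phase velocity of the classical elastic bar.

```python
import math

# Numerical peridynamic dispersion relation for a triangular micromodulus
# C(xi) = c * (1 - |xi|/delta) on (-delta, delta); midpoint-rule quadrature.

def omega(k, delta=1.0, c=3.0, rho=1.0, n=2000):
    h = 2 * delta / n
    integral = 0.0
    for i in range(n):
        xi = -delta + (i + 0.5) * h
        integral += c * (1 - abs(xi) / delta) * (1 - math.cos(k * xi)) * h
    return math.sqrt(integral / rho)

ks = [0.5, 1.0, 2.0, 4.0]
phase_speeds = [omega(k) / k for k in ks]
# long-range forces make the medium dispersive: phase velocity falls with k
assert all(phase_speeds[i] > phase_speeds[i + 1] for i in range(len(ks) - 1))
```

In the short-range limit (δ → 0) the integral approaches (c·δ-scaled)·k²/2-type behavior and the phase velocity becomes constant, recovering the classical nondispersive bar.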


Proceedings ArticleDOI
12 Dec 2005
TL;DR: Both constant and time-varying delays are considered, as well as uniform and non uniform repartitions of the delays in the network, providing sufficient conditions for existence of average consensus under bounded, but otherwise unknown, communication delays.
Abstract: The present paper is devoted to the study of average consensus problems for undirected networks of dynamic agents having communication delays. The emphasis is on the influence of the time delays: both constant and time-varying delays are considered, as well as uniform and non-uniform repartitions of the delays in the network. The main results provide sufficient conditions (also necessary in most cases) for existence of average consensus under bounded, but otherwise unknown, communication delays. Simulations are provided that show agreement with these results.

208 citations
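A minimal sketch of the setting (parameters and network are illustrative assumptions, not from the paper): a discrete-time average-consensus protocol on a 4-node cycle where every link carries the same constant delay τ. With symmetric weights the sum of the states is conserved at each step, so the agents still agree on the average of the initial values despite the delay, provided the gain is small enough.

```python
# Delayed average consensus: x_i(t+1) = x_i(t) + eps * sum_j (x_j(t-tau) - x_i(t-tau))
# over neighbors j. Symmetric neighbor lists conserve the state sum exactly.

def delayed_consensus(x0, neighbors, eps=0.05, tau=2, steps=2000):
    hist = [list(x0)] * (tau + 1)          # state history for delayed terms
    for _ in range(steps):
        cur, old = hist[-1], hist[-1 - tau]
        nxt = [cur[i] + eps * sum(old[j] - old[i] for j in neighbors[i])
               for i in range(len(cur))]
        hist.append(nxt)
        hist = hist[-(tau + 1):]
    return hist[-1]

x0 = [4.0, 0.0, -2.0, 6.0]                 # average = 2.0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = delayed_consensus(x0, ring)
assert max(x) - min(x) < 1e-6              # consensus reached
assert abs(sum(x) / 4 - 2.0) < 1e-9        # average preserved under delay
```

With larger gains or delays the delayed characteristic roots leave the unit disk and the scheme oscillates, which is the kind of delay bound the paper's conditions quantify.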


Journal ArticleDOI
TL;DR: This paper presents a new fully distributed approximation algorithm based on LP relaxation techniques which achieves a non-trivial approximation ratio in a constant number of rounds.
Abstract: Finding a small dominating set is one of the most fundamental problems of classical graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary, possibly constant parameter k and maximum node degree Δ, our algorithm computes a dominating set of expected size O(kΔ^(2/k) log(Δ)·|DS_OPT|) in O(k²) rounds. Each node has to send O(k²Δ) messages of size O(log Δ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.

186 citations
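For contrast with the distributed LP-based method, here is the classical centralized greedy algorithm for dominating set (an O(log Δ)-approximation); the paper's contribution is achieving a non-trivial ratio with only a constant number of distributed communication rounds, which the greedy below cannot do.

```python
# Centralized greedy dominating set: repeatedly pick the node that newly
# covers the most uncovered nodes. adj maps node -> set of neighbors.

def greedy_dominating_set(adj):
    n = len(adj)
    uncovered = set(range(n))
    ds = set()
    while uncovered:
        v = max(range(n), key=lambda u: len(uncovered & ({u} | adj[u])))
        ds.add(v)
        uncovered -= {v} | adj[v]
    return ds

# 6-cycle: two "opposite" nodes suffice to dominate everything
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
ds = greedy_dominating_set(adj)
assert all(v in ds or adj[v] & ds for v in range(6))   # every node dominated
assert len(ds) <= 3
```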


Journal ArticleDOI
TL;DR: The structure of stochastic dynamics near either a stable or unstable fixed point, where the force can be approximated by linearization, is analyzed and a cost function that determines a Boltzmann-like stationary distribution can always be defined near it.
Abstract: We analyze the structure of stochastic dynamics near either a stable or unstable fixed point, where the force can be approximated by linearization. We find that a cost function that determines a Boltzmann-like stationary distribution can always be defined near it. Such a stationary distribution does not need to satisfy the usual detailed balance condition but might have instead a divergence-free probability current. In the linear case, the force can be split into two parts, one of which gives detailed balance with the diffusive motion, whereas the other induces cyclic motion on surfaces of constant cost function. By using the Jordan transformation for the force matrix, we find an explicit construction of the cost function. We discuss singularities of the transformation and their consequences for the stationary distribution. This Boltzmann-like distribution may be not unique, and nonlinear effects and boundary conditions may change the distribution and induce additional currents even in the neighborhood of a fixed point.

185 citations


Journal ArticleDOI
TL;DR: A heuristic is developed specifying that temperatures in replica-exchange simulations should be spaced such that about 20% of the phase-swap attempts are accepted, finding the result to be independent of the heat capacity.
Abstract: A heuristic is developed specifying that temperatures in replica-exchange simulations should be spaced such that about 20% of the phase-swap attempts are accepted. The result is found to be independent of the heat capacity, suggesting that it may be applied generally despite being based on an assumption of (piecewise-) constant heat capacity.

181 citations
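A sketch under the paper's (piecewise-)constant-heat-capacity assumption (C, the temperature range, and the ladder size below are illustrative choices): for constant heat capacity C the energy at temperature T is roughly Gaussian with mean C·T and variance C·T², so a geometric temperature ladder yields a roughly uniform swap-acceptance rate between neighboring replicas. One then widens or narrows the ladder until that common rate sits near the 20% target.

```python
import math, random

def geometric_ladder(t_min, t_max, n):
    r = (t_max / t_min) ** (1.0 / (n - 1))
    return [t_min * r**k for k in range(n)]

def swap_acceptance(t1, t2, C=50.0, samples=20000, rng=random.Random(1)):
    # Monte Carlo estimate of the Metropolis swap acceptance
    # min(1, exp((1/T1 - 1/T2) * (E1 - E2))) under Gaussian energies.
    acc = 0.0
    for _ in range(samples):
        e1 = rng.gauss(C * t1, math.sqrt(C) * t1)   # energy sampled at T1
        e2 = rng.gauss(C * t2, math.sqrt(C) * t2)   # energy sampled at T2
        acc += min(1.0, math.exp((1 / t1 - 1 / t2) * (e1 - e2)))
    return acc / samples

temps = geometric_ladder(1.0, 2.0, 8)
rates = [swap_acceptance(temps[i], temps[i + 1]) for i in range(7)]
# geometric spacing => nearly constant acceptance along the whole ladder
assert max(rates) - min(rates) < 0.05
assert all(0.0 < r < 1.0 for r in rates)
```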


Journal ArticleDOI
TL;DR: In this article, the authors discuss the role of the speed of light in many physics equations and discuss the requirements for attaining consistency of the resulting equations, when what was previously a constant is made a dynamical variable.
Abstract: Theories for a varying speed of light have been proposed as an alternative way of solving several standard cosmological problems. Recent observational hints that the fine structure constant may have varied over cosmological scales have given impetus to these theories. However, the speed of light is hidden in many physics equations and plays different roles in them. We discuss these roles to shed light on proposals for varying speed of light theories. We also emphasize the requirements for attaining consistency of the resulting equations, when what was previously a constant is made a dynamical variable.

Journal ArticleDOI
TL;DR: In this article, a feasible operating area for a solid oxide fuel-cell power plant is introduced by establishing the relationship between the stack terminal voltage, fuel utilization, and stack current.
Abstract: The concept of a feasible operating area for a solid oxide fuel-cell power plant is introduced by establishing the relationship between the stack terminal voltage, fuel utilization, and stack current. The analysis shows that the terminal voltage and the utilization factor cannot both be kept constant simultaneously when the stack current changes. This leads to two possible control strategies: constant utilization control and constant voltage control. By controlling the input hydrogen fuel in proportion to the stack current, constant utilization control can be accomplished. By incorporating an additional external voltage-control loop, stack terminal voltage can be maintained constant. The detailed design of the control schemes is described. The effectiveness of the proposed schemes is illustrated through simulation. Using the numerical results, the maximum value of load power change that the plant can handle safely is predicted.
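A minimal sketch of the constant-utilization strategy: feed hydrogen in proportion to the stack current so the utilization factor stays at its target while the load varies. The stack constants below (number of cells, target utilization) are illustrative assumptions, not values from the paper.

```python
# Constant fuel-utilization control: each cell consumes I / (2F) mol/s of H2,
# so supplying N * I / (2 * F * u_opt) mol/s holds utilization at u_opt.

F = 96485.0        # Faraday constant, C/mol
N_CELLS = 384      # illustrative number of cells in series
U_OPT = 0.85       # illustrative target fuel utilization

def h2_feed_rate(stack_current):
    """Hydrogen feed (mol/s) proportional to current, keeping u = U_OPT."""
    return N_CELLS * stack_current / (2 * F * U_OPT)

def utilization(stack_current, feed_rate):
    """Utilization = H2 consumed / H2 supplied."""
    return N_CELLS * stack_current / (2 * F) / feed_rate

# utilization stays constant as the load current changes
for amps in (80.0, 120.0, 200.0):
    assert abs(utilization(amps, h2_feed_rate(amps)) - U_OPT) < 1e-12
```

The terminal voltage, by contrast, still moves with current here, which is exactly the trade-off the feasible operating area captures.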

01 Jan 2005
TL;DR: In this paper, factor loadings k are parameters to be estimated that tap how the unobserved factors account for the observed variables: the larger the values of k, the more a particular variable is said to "load" on the corresponding factor.
Abstract: The factor loadings λ are parameters to be estimated that tap how the unobserved factors account for the observed variables: the larger the value of λ, the more a particular variable is said to "load" on the corresponding factor. Note that the factor loadings λ vary across survey items, but not across individuals. Put differently, items vary in the way they are explained by the underlying factors, but the relationship between underlying factors and observed responses is constant across individuals (hence the absence of an i subscript indexing λ). Note also that there are fewer underlying factors than there are variables (k < p), consistent with the notion that, like any statistical procedure, factor analysis is a device for "data reduction", taking a possibly rich though unwieldy set of survey responses and summarizing them with a simpler underlying structure.
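A tiny generative sketch of the measurement model described above (dimensions and parameter ranges are illustrative): responses xᵢ = Λfᵢ + noise, where the loading matrix Λ (p items × k factors, k < p) is shared by all individuals i while the factor scores fᵢ vary per individual.

```python
import random

random.seed(0)
p, k, n = 6, 2, 500                      # items, factors, individuals

# loadings: constant across individuals (no i subscript)
Lam = [[random.uniform(0.3, 0.9) for _ in range(k)] for _ in range(p)]

def simulate_response():
    f = [random.gauss(0, 1) for _ in range(k)]    # individual factor scores
    return [sum(Lam[j][a] * f[a] for a in range(k)) + random.gauss(0, 0.3)
            for j in range(p)]

data = [simulate_response() for _ in range(n)]
assert len(data) == n and all(len(row) == p for row in data)

# items driven by the same underlying factors co-vary
x0 = [row[0] for row in data]
x1 = [row[1] for row in data]
m0, m1 = sum(x0) / n, sum(x1) / n
cov = sum((a - m0) * (b - m1) for a, b in zip(x0, x1)) / n
assert cov > 0
```

Data reduction then amounts to recovering the k-column Λ and the n×k factor scores from the n×p response matrix.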

Journal ArticleDOI
TL;DR: It is proved that, for every family F of n semi-algebraic sets in R^d of constant description complexity, there exists a positive constant ε that depends on the maximum complexity of the elements of F, and two subfamilies F1, F2 ⊆ F with at least εn elements each.

Journal ArticleDOI
TL;DR: It is shown that the CPLD condition implies the quasinormality constraint qualification, but that the reciprocal is not true and relations with other constraint qualifications are given.
Abstract: The constant positive linear dependence (CPLD) condition for feasible points of nonlinear programming problems was introduced by Qi and Wei (Ref. 1) and used in the analysis of SQP methods. In that paper, the authors conjectured that the CPLD could be a constraint qualification. This conjecture is proven in the present paper. Moreover, it is shown that the CPLD condition implies the quasinormality constraint qualification, but that the reciprocal is not true. Relations with other constraint qualifications are given.

Proceedings ArticleDOI
17 Jul 2005
TL;DR: It is shown that on the widely used unit disk graph, covering and packing linear programs can be approximated by constant factors in constant time and results in asymptotically optimal O(log* n) time algorithms for many important problems.
Abstract: Many large-scale networks such as ad hoc and sensor networks, peer-to-peer networks, or the Internet have the property that the number of independent nodes does not grow arbitrarily when looking at neighborhoods of increasing size. Due to this bounded "volume growth," one could expect that distributed algorithms are able to solve many problems more efficiently than on general graphs. The goal of this paper is to help understand the distributed complexity of problems on "bounded growth" graphs. We show that on the widely used unit disk graph, covering and packing linear programs can be approximated by constant factors in constant time. For a more general network model which is based on the assumption that nodes are in a metric space of constant doubling dimension, we show that in O(log* n) rounds it is possible to construct an (O(1), O(1))-network decomposition. This results in asymptotically optimal O(log* n) time algorithms for many important problems.

Journal ArticleDOI
TL;DR: By using the Markov parameters, it is shown in the time-domain that there exists a non-increasing function such that when the properly chosen constant learning gain is multiplied by this function, the convergence of the tracking error norms is monotonic, without resort to high-gain feedback.

Journal ArticleDOI
TL;DR: The complexity of constructing pseudorandom generators (PRGs) from hard functions is studied, and it is proved that starting from a worst-case hard function, there is no blackbox construction of a PRG computable by constant-depth circuits of size polynomial in n.
Abstract: We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focusing on constant-depth circuits. We show that, starting from a function f : {0,1}^l → {0,1} computable in alternating time O(l) with O(1) alternations that is hard on average (i.e., there is a constant ε > 0 such that every circuit of size 2^(εl) fails to compute f on at least a 1/poly(l) fraction of inputs), we can construct a PRG : {0,1}^(O(log n)) → {0,1}^n computable by DLOGTIME-uniform constant-depth circuits of size polynomial in n. Such a PRG implies BP·AC⁰ = AC⁰ under DLOGTIME-uniformity. On the negative side, we prove that, starting from a worst-case hard function f : {0,1}^l → {0,1} (i.e., there is a constant ε > 0 such that every circuit of size 2^(εl) fails to compute f on some input), for every positive constant δ there is no black-box construction of a PRG with output in {0,1}^n computable by constant-depth circuits of size polynomial in n. We also study worst-case hardness amplification, which is the related problem of producing an average-case hard function starting from a worst-case hard one. In particular, we deduce that there is no black-box worst-case hardness amplification within the polynomial-time hierarchy. These negative results are obtained by showing that polynomial-size constant-depth circuits cannot compute good extractors and list-decodable codes.

Journal ArticleDOI
TL;DR: In this brief, many novel theorems and corollaries are presented regarding the global asymptotic stability and global exponential stability of cellular neural networks with constant and variable time delays.
Abstract: In this brief, many novel theorems and corollaries are presented regarding the global asymptotic stability and global exponential stability of cellular neural networks with constant and variable time delays. The stability conditions in the new results improve and generalize existing ones. Several examples are discussed to compare the new results with the existing ones.

Posted Content
TL;DR: In this article, a multivariate GARCH model with time-varying conditional correlation structure is proposed, which is based on the decomposition of the covariances into correlations and standard deviations.
Abstract: In this paper we propose a new multivariate GARCH model with time-varying conditional correlation structure. The approach adopted here is based on the decomposition of the covariances into correlations and standard deviations. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to an endogenous or exogenous transition variable. An LM test is derived to test the constancy of correlations and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the Standard & Poor 500 stock index completes the paper. The model is estimated for the full five-dimensional system as well as several subsystems and the results discussed in detail.
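The model's core mechanism can be sketched in a few lines (names and parameter values are illustrative, not the paper's notation): conditional correlations move smoothly between two extreme constant-correlation states R1 and R2 via a logistic transition function G(s) of a transition variable s. Any convex combination of correlation matrices is again a valid correlation matrix.

```python
import math

R1 = [[1.0, 0.2], [0.2, 1.0]]            # "calm" state correlations
R2 = [[1.0, 0.8], [0.8, 1.0]]            # "turbulent" state correlations

def G(s, gamma=4.0, c=0.0):
    """Logistic transition function: 0 -> R1 regime, 1 -> R2 regime."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def R_t(s):
    g = G(s)
    return [[(1 - g) * R1[i][j] + g * R2[i][j] for j in range(2)]
            for i in range(2)]

for s in (-3.0, 0.0, 3.0):
    R = R_t(s)
    assert abs(R[0][0] - 1.0) < 1e-12 and abs(R[1][1] - 1.0) < 1e-12
    assert 0.2 <= R[0][1] <= 0.8                 # between the two extremes
assert R_t(-10.0)[0][1] < 0.21 and R_t(10.0)[0][1] > 0.79
```

Testing constancy of correlations then corresponds to testing γ = 0 (or R1 = R2), which is the null hypothesis the paper's LM test targets.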

Journal ArticleDOI
TL;DR: In this paper, it was shown that the unbounded fan-out gate is very powerful and can approximate with polynomially small error the following gates: parity, mod(q), And, Or, majority, threshold(t), exact(t), and counting.
Abstract: We demonstrate that the unbounded fan-out gate is very powerful. Constant-depth polynomial-size quantum circuits with bounded fan-in and unbounded fan-out over a fixed basis (denoted by QNC⁰) can approximate with polynomially small error the following gates: parity, mod(q), And, Or, majority, threshold(t), exact(t), and counting. Classically, we need logarithmic depth even if we can use unbounded fan-in gates. If we allow arbitrary one-qubit gates instead of a fixed basis, then these circuits can also be made exact in log-star depth. Sorting, arithmetic operations, phase estimation, and the quantum Fourier transform with arbitrary moduli can also be approximated in constant depth.

Journal ArticleDOI
Qihe Tang1
TL;DR: In this article, the authors established a simple asymptotic formula for the finite-time ruin probability of the compound Poisson model with constant interest force and subexponential claims in the case that the initial surplus is large.
Abstract: In this paper, we establish a simple asymptotic formula for the finite-time ruin probability of the compound Poisson model with constant interest force and subexponential claims in the case that the initial surplus is large. The formula is consistent with known results for the ultimate ruin probability and, in particular, is uniform for all time horizons when the claim size distribution is regularly varying tailed.
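The model can be illustrated by direct simulation (a Monte Carlo sketch; all parameter values are illustrative): a compound Poisson risk process with constant interest force δ, premium rate c, and heavy-tailed (Pareto) claims. Between claims the surplus and the premiums earn interest continuously; ruin occurs if the surplus ever goes negative before the horizon.

```python
import math, random

def ruin_probability(u, horizon=10.0, lam=1.0, c=1.5, delta=0.05,
                     alpha=1.5, n_paths=4000, rng=random.Random(7)):
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            w = rng.expovariate(lam)               # waiting time to next claim
            if t + w > horizon:
                break
            grow = math.exp(delta * w)
            # surplus earns interest; premiums accrue continuously with interest
            surplus = surplus * grow + c * (grow - 1.0) / delta
            surplus -= rng.paretovariate(alpha)    # subexponential claim size
            t += w
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

p_small, p_large = ruin_probability(5.0), ruin_probability(50.0)
assert 0.0 <= p_large < p_small <= 1.0    # ruin is rarer with larger surplus
```

The paper's asymptotic formula describes exactly this finite-time ruin probability as the initial surplus u grows large.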

Journal ArticleDOI
Rainer Schuler1
TL;DR: The satisfiability problem on Boolean formulas in conjunctive normal form is considered and it is shown that a satisfying assignment of a formula can be found in polynomial time with a success probability of 2^(−n(1−1/(1+log m))), where n and m are the number of variables and the number of clauses of the formula, respectively.
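To fix ideas, here is a sketch from the same family of randomized SAT algorithms (a Schöning-style random walk with restarts, not Schuler's actual clause-based procedure): start from a random assignment and repeatedly flip a variable chosen from an unsatisfied clause.

```python
import random

# Literals are nonzero ints: l > 0 means x_l, l < 0 means NOT x_|l|.

def walk_sat(clauses, n_vars, tries=50, flips=None, rng=random.Random(3)):
    flips = flips or 3 * n_vars
    for _ in range(tries):
        assign = [rng.random() < 0.5 for _ in range(n_vars)]
        for _ in range(flips):
            unsat = [cl for cl in clauses
                     if not any(assign[abs(l) - 1] == (l > 0) for l in cl)]
            if not unsat:
                return assign
            lit = rng.choice(rng.choice(unsat))   # flip a var from a bad clause
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x2 or x3)
cnf = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
model = walk_sat(cnf, 3)
assert model is not None
assert all(any(model[abs(l) - 1] == (l > 0) for l in cl) for cl in cnf)
```

Bounds like the one above quantify the single-try success probability, and repeating the procedure 1/p times yields an expected-polynomial algorithm with that exponential factor.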

Journal ArticleDOI
08 Dec 2005
TL;DR: This paper seeks general-purpose algorithms for the efficient approximation of trade-off curves using as few points as possible and presents a general algorithm that efficiently computes an ε-Pareto curve that uses at most 3 times the number of points of the smallest such curve.
Abstract: Trade-off (aka Pareto) curves are typically used to represent the trade-off among different objectives in multiobjective optimization problems. Although trade-off curves are exponentially large for typical combinatorial optimization problems (and infinite for continuous problems), it was observed in Papadimitriou and Yannakakis [On the approximability of trade-offs and optimal access of web sources, in: Proc. 41st IEEE Symp. on Foundations of Computer Science, 2000] that there exist polynomial-size ε-approximations for any ε > 0, and that under certain general conditions, such approximate ε-Pareto curves can be constructed in polynomial time. In this paper we seek general-purpose algorithms for the efficient approximation of trade-off curves using as few points as possible. In the case of two objectives, we present a general algorithm that efficiently computes an ε-Pareto curve that uses at most 3 times the number of points of the smallest such curve; we show that no algorithm can be better than 3-competitive in this setting. If we relax ε to any ε′ > ε, then we can efficiently construct an ε′-curve that uses no more points than the smallest ε-curve. With three objectives we show that no algorithm can be c-competitive for any constant c unless it is allowed to use a larger ε value. We present an algorithm that is 4-competitive for any ε′ > (1 + ε)² − 1. We explore the problem in high dimensions and give hardness proofs showing that (unless P=NP) no constant approximation factor can be achieved efficiently even if we relax ε by an arbitrary constant.
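To make the object concrete (an illustrative sketch for two minimization objectives over a finite point set; the paper treats general combinatorial problems and point-count optimality): an ε-Pareto set is a small subset S such that every point is within a (1 + ε) factor of some point of S in both coordinates.

```python
# eps-Pareto set for two minimization objectives over explicit points.

def eps_pareto(points, eps):
    # exact Pareto front, sorted by first objective (ties: smaller second)
    pts = sorted(points)
    front = []
    for a, b in pts:
        if not front or b < front[-1][1]:
            front.append((a, b))
    # greedy thinning: keep a front point only if the last kept point
    # no longer (1+eps)-covers it
    kept = []
    for a, b in front:
        if not kept or not (kept[-1][0] <= (1 + eps) * a
                            and kept[-1][1] <= (1 + eps) * b):
            kept.append((a, b))
    return kept

pts = [(1.0, 9.0), (1.05, 8.9), (2.0, 5.0), (2.1, 4.9), (6.0, 1.0), (9.0, 9.0)]
S = eps_pareto(pts, 0.1)
# every point is (1+eps)-dominated by some selected point
assert all(any(sa <= 1.1 * a and sb <= 1.1 * b for sa, sb in S)
           for a, b in pts)
assert len(S) < len(pts)
```

This naive thinning is correct but makes no point-count guarantee; the paper's 3-competitive algorithm is about producing an ε-curve provably close to the smallest one.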

Journal ArticleDOI
TL;DR: In this paper, the authors considered the propagation of a plane electromagnetic wave in a medium with a piecewise constant axion field and showed that the reflection and transmission of a wave at an interface between the two media is sensitive to the difference of the axion values.

Journal ArticleDOI
TL;DR: The first polynomial time-space lower bounds for satisfiability on general models of computation are established, showing that for any constant c less than the golden ratio there exists a positive constant d such that no deterministic random-access Turing machine can solve satisfiability in time n^c and space n^d.
Abstract: We establish the first polynomial time-space lower bounds for satisfiability on general models of computation. We show that for any constant c less than the golden ratio there exists a positive constant d such that no deterministic random-access Turing machine can solve satisfiability in time n^c and space n^d, where d approaches 1 when c does. On co-nondeterministic instead of deterministic machines, we prove the same for any constant c less than √2. Our lower bounds apply to nondeterministic linear time and almost all natural NP-complete problems known. In fact, they even apply to the class of languages that can be solved on a nondeterministic machine in linear time and space n^(1/c). Our proofs follow the paradigm of indirect diagonalization. We also use that paradigm to prove time-space lower bounds for languages higher up in the polynomial-time hierarchy.

Journal ArticleDOI
TL;DR: This paper focuses on the automatic generation of circuits that involve constant matrix multiplication, i.e., multiplication of a vector by a constant matrix, and proposes a method based on number recoding and dedicated common subexpression factorization algorithms for this purpose.
Abstract: This paper presents some improvements on the optimization of hardware multiplication by constant matrices. We focus on the automatic generation of circuits that involve constant matrix multiplication, i.e., multiplication of a vector by a constant matrix. The proposed method, based on number recoding and dedicated common subexpression factorization algorithms, was implemented in a VHDL generator. Our algorithms and generator have been extended to the case of some digital filters based on multiplication by a constant matrix and delay operations. The obtained results on several applications have been implemented on FPGAs and compared to previous solutions. Up to 40 percent area and speed savings are achieved.
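The number-recoding idea behind constant-multiplier optimization can be sketched directly: canonical signed-digit (CSD) recoding rewrites a constant with digits in {−1, 0, +1} so that multiplication needs one shift-and-add/subtract per nonzero digit, usually fewer than plain binary. (The paper goes further, sharing common subexpressions across a whole constant matrix.)

```python
# Canonical signed-digit recoding of a positive integer constant.

def csd(n):
    """CSD digits in {-1, 0, 1}, least-significant digit first."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)          # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def from_digits(digits):
    return sum(d << i for i, d in enumerate(digits))

for const in (7, 45, 105, 119):
    ds = csd(const)
    assert from_digits(ds) == const                          # value preserved
    assert sum(d != 0 for d in ds) <= bin(const).count("1")  # never more adders

# e.g. 7 = 8 - 1: two operations, (x << 3) - x, instead of three additions
assert sum(d != 0 for d in csd(7)) == 2
```

In hardware terms, each nonzero CSD digit becomes one adder or subtractor on a shifted copy of the input, so fewer nonzero digits means directly less area.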

Journal ArticleDOI
TL;DR: In this paper, a local gluing construction for general relativistic initial data sets is presented, where the trace of the extrinsic curvature is not assumed to be constant near the gluing points, which was the case for previous such constructions.
Abstract: We present a local gluing construction for general relativistic initial data sets. The method applies to generic initial data, in a sense which is made precise. In particular the trace of the extrinsic curvature is not assumed to be constant near the gluing points, which was the case for previous such constructions. No global conditions on the initial data sets such as compactness, completeness, or asymptotic conditions are imposed. As an application, we prove existence of spatially compact, maximal globally hyperbolic, vacuum space-times without any closed constant mean curvature spacelike hypersurface.

Journal ArticleDOI
TL;DR: Based on the Lyapunov-Krasovskii functionals in combination with linear matrix inequality (LMI) approach, a set of criteria are proposed for the exponential stability of BAM neural networks with constant or time-varying delays as mentioned in this paper.
Abstract: Based on the Lyapunov–Krasovskii functionals in combination with the linear matrix inequality (LMI) approach, a set of criteria is proposed for the exponential stability of BAM neural networks with constant or time-varying delays. These criteria manifest explicitly the influence of time delay on the exponential convergence rate and show the differences between the excitatory and inhibitory effects. In addition, the obtained results are easily verified for determining the exponential stability of delayed BAM networks and are less conservative and less restrictive than the ones in previous papers.

Journal ArticleDOI
Qingzhi Yang1
TL;DR: In this article, the variable-step basic projection algorithm and its relaxed version under a weakly co-coercive condition were studied, and the convergence of these two algorithms was established under certain conditions.