
Showing papers on "Constant (mathematics) published in 2007"


Journal ArticleDOI
TL;DR: In this article, the authors discuss possible phase factors for the S-matrix of planar gauge theory, leading to modifications at four-loop order as compared to an earlier proposal, and present evidence that this choice is non-perturbatively related to a recently conjectured crossing-symmetric phase factor for perturbative string theory on AdS5 × S5 once the constant is fixed to a particular value.
Abstract: We discuss possible phase factors for the S-matrix of planar gauge theory, leading to modifications at four-loop order as compared to an earlier proposal. While these result in a four-loop breakdown of perturbative BMN scaling, Kotikov-Lipatov transcendentality in the universal scaling function for large-spin twist operators may be preserved. One particularly natural choice, unique up to one constant, modifies the overall contribution of all terms containing odd-zeta functions in the earlier proposed scaling function based on a trivial phase. Excitingly, we present evidence that this choice is non-perturbatively related to a recently conjectured crossing-symmetric phase factor for perturbative string theory on AdS5 × S5 once the constant is fixed to a particular value. Our proposal, if true, might therefore resolve the long-standing AdS/CFT discrepancies between gauge and string theory.

1,128 citations


Journal ArticleDOI
Xian-Ling Fan
TL;DR: In this paper, the global C^{1,α} regularity of the bounded generalized solutions of the variable exponent elliptic equations in divergence form with both Dirichlet and Neumann boundary conditions was studied.

319 citations


Journal ArticleDOI
TL;DR: In this article, the authors determined the capacity region of the broadcast phase of the two-phase relay protocol, which can be achieved by applying XOR on the decoded messages at the relay node.
Abstract: In a three-node network, a half-duplex relay node enables bidirectional communication between two nodes with a spectrally efficient two-phase protocol. In the first phase, the two nodes transmit their messages to the relay node, which decodes the messages and broadcasts a re-encoded composition in the second phase. In this work we determine the capacity region of the broadcast phase. In this scenario each receiving node has perfect information about the message that is intended for the other node. The resulting set of achievable rates of the two-phase bidirectional relaying includes the region which can be achieved by applying XOR on the decoded messages at the relay node. We also prove the strong converse for the maximum error probability and show that this implies that the (ε1, ε2)-capacity region defined with respect to the average error probability is constant for small values of the error parameters ε1, ε2.
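The XOR re-encoding in the broadcast phase can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's coding scheme; the messages and helper names are invented. It shows why each receiving node, which knows its own transmitted message, can recover the other node's message from the single XOR broadcast:

```python
# Sketch: XOR network coding in the broadcast phase of a two-phase
# bidirectional relay. Each node XORs the relay's broadcast with its
# own (known) message to recover the other node's message.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg_a = b"hello"   # message node A sent to the relay in phase one
msg_b = b"world"   # message node B sent to the relay in phase one

broadcast = xor_bytes(msg_a, msg_b)   # relay's single phase-two broadcast

recovered_at_a = xor_bytes(broadcast, msg_a)  # node A knows msg_a
recovered_at_b = xor_bytes(broadcast, msg_b)  # node B knows msg_b

assert recovered_at_a == msg_b
assert recovered_at_b == msg_a
```

One broadcast thus serves both directions, which is the source of the spectral efficiency; the paper's result is that the full capacity region of the broadcast phase strictly includes what this XOR strategy achieves.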

237 citations


Book ChapterDOI
11 Sep 2007
TL;DR: The enumeration complexity of the natural extension of acyclic conjunctive queries with disequalities is studied and it is shown that for each query of free-connex treewidth bounded by some constant k, enumeration of results can be done with O(|M|^{k+1}) precomputation steps and constant delay.
Abstract: We study the enumeration complexity of the natural extension of acyclic conjunctive queries with disequalities. In this language, a number of NP-complete problems can be expressed. We first improve a previous result of Papadimitriou and Yannakakis by proving that such queries can be computed in time c·|M|·|ϕ(M)| where M is the structure, ϕ(M) is the result set of the query and c is a simple exponential in the size of the formula ϕ. A consequence of our method is that, in the general case, tuples of such queries can be enumerated with a linear delay between two tuples. We then introduce a large subclass of acyclic formulas called CCQ≠ and prove that the tuples of a CCQ≠ query can be enumerated with a linear time precomputation and a constant delay between consecutive solutions. Moreover, under the hypothesis that the multiplication of two n×n boolean matrices cannot be done in time O(n^2), this leads to the following dichotomy for acyclic queries: either such a query is in CCQ≠ or it cannot be enumerated with linear precomputation and constant delay. Furthermore we prove that testing whether an acyclic formula is in CCQ≠ can be performed in polynomial time. Finally, the notion of free-connex treewidth of a structure is defined. We show that for each query of free-connex treewidth bounded by some constant k, enumeration of results can be done with O(|M|^{k+1}) precomputation steps and constant delay.

174 citations


Journal ArticleDOI
TL;DR: It is proved that feasible limit points satisfying the Constant Positive Linear Dependence constraint qualification are KKT solutions, and that the penalty parameters remain bounded under suitable assumptions.
Abstract: Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under suitable assumptions. Numerical experiments are presented.

174 citations


Journal ArticleDOI
Shiwang Ma
TL;DR: In this article, the authors studied the existence of traveling wave solutions for a class of delayed non-local reaction diffusion equations without quasi-monotonicity and showed that there exists a constant c* > 0 such that for each c > c*, the equation under consideration admits a traveling wavefront solution with speed c, which need not be monotonic.

152 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple and constructive proof that quantifier elimination in real algebra is doubly exponential, even when there is only one free variable and all polynomials in the quantified input are linear is given.
Abstract: This paper has two parts. In the first part we give a simple and constructive proof that quantifier elimination in real algebra is doubly exponential, even when there is only one free variable and all polynomials in the quantified input are linear. The general result is not new, but we hope the simple and explicit nature of the proof makes it interesting. The second part of the paper uses the construction of the first part to prove some results on the effects of projection order on CAD construction -- roughly that there are CAD construction problems for which one order produces a constant number of cells and another produces a doubly exponential number of cells, and that there are problems for which all orders produce a doubly exponential number of cells. The second of these results implies that there is a true singly vs. doubly exponential gap between the worst-case running times of several modern quantifier elimination algorithms and CAD-based quantifier elimination when the number of quantifier alternations is constant.

152 citations


Journal ArticleDOI
TL;DR: An extension of the Markowitz model, in which the variance has been replaced with the Value-at-Risk, is considered, and it is shown that the model leads to an NP-hard problem; but if the number of past observations T or the number of assets K is low, e.g. fixed to a constant, polynomial time algorithms exist.

149 citations


Journal ArticleDOI
TL;DR: Bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative-free sampling methods; these bounds involve a geometry constant that is shown to be related to the condition number of a certain matrix.
Abstract: We consider derivative free methods based on sampling approaches for nonlinear optimization problems where derivatives of the objective function are not available and cannot be directly approximated. We show how the bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative free sampling methods. These bounds involve a constant that reflects the quality of the interpolation set. The main task of such a derivative free algorithm is to maintain an interpolation sampling set so that this constant remains small, and at least uniformly bounded. This constant is often described through the basis of Lagrange polynomials associated with the interpolation set. We provide an alternative, more intuitive, definition for this concept and show how this constant is related to the condition number of a certain matrix. This relation enables us to provide a range of algorithms whilst maintaining the interpolation set so that this condition number or the geometry constant remain uniformly bounded. We also derive bounds on the error between the model and the function and between their derivatives, directly in terms of this condition number and of this geometry constant.
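The link between the geometry of the sample set and the conditioning of the interpolation system can be illustrated with a toy example. This is a hypothetical sketch, not the paper's algorithm: for a linear model in the plane, each sample point contributes a row (1, x1, x2) to the interpolation matrix, and a nearly collinear sample set makes that matrix nearly singular:

```python
# Illustrative sketch: the quality of an interpolation set for a linear
# model in R^2 shows up in the interpolation matrix whose rows are
# [1, x1, x2]. A nearly collinear set of points makes it near-singular.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def interp_matrix(points):
    return [[1.0, x, y] for x, y in points]

well_poised = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
nearly_degenerate = [(0.0, 0.0), (1.0, 0.0), (2.0, 1e-8)]  # almost collinear

d_good = abs(det3(interp_matrix(well_poised)))       # determinant 1
d_bad = abs(det3(interp_matrix(nearly_degenerate)))  # determinant 1e-8

assert d_good > 1e6 * d_bad  # the degenerate set is far worse conditioned
```

A derivative-free algorithm in the spirit of the paper would monitor such a conditioning measure and replace sample points whenever it degrades, keeping the geometry constant uniformly bounded.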

148 citations


Journal ArticleDOI
TL;DR: Methods to stabilize a class of motion patterns for unit-speed particles in the plane by measuring the separation between each pair of particles along the curve via their relative arc-length.

147 citations


Patent
12 Dec 2007
TL;DR: In this article, focus values of images acquired during the movement of the objective lens are calculated, in which the focus value includes at least the intensity value of the image derived from the intensities of the pixels of the image.

Abstract: An autofocus searching method includes the following procedures. First, focus values of images, which are acquired during the movement of the objective lens, are calculated, in which the focus value includes at least the intensity value of the image derived from the intensities of the pixels of the image. Next, focus searching is based on a first focus-searching step constant and a first focus-searching direction, in which the first focus-searching step constant is a function, e.g., the product, of the focus value and a focus-searching step. If the focus-searching position moves across a peak of the focus values, the search is then amended to be based on a second focus-searching direction and a second focus-searching step constant, in which the second focus-searching step constant is smaller than the first focus-searching step constant, and the second focus-searching direction is opposite to the first focus-searching direction.
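The coarse-to-fine search described above can be sketched as a simple hill climb. This is a hypothetical sketch, not the patent's implementation: the toy focus curve and all names are invented, and the step is simply halved on each reversal rather than scaled by the focus value as the patent describes:

```python
# Sketch of the patent's idea: step the lens toward increasing focus
# value; after overshooting the peak, reverse direction with a smaller
# step, converging on the sharpest position.

def focus_value(pos, peak=37.0):
    return 100.0 - (pos - peak) ** 2  # toy sharpness curve with one peak

def autofocus(start=0.0, step=8.0, min_step=0.25):
    pos, direction = start, +1
    best = focus_value(pos)
    while step >= min_step:
        nxt = pos + direction * step
        val = focus_value(nxt)
        if val > best:            # still climbing toward the peak
            pos, best = nxt, val
        else:                     # crossed the peak: reverse, shrink step
            direction = -direction
            step /= 2.0
    return pos

assert abs(autofocus() - 37.0) < 1.0  # lands near the toy peak
```

The second search phase (smaller step, opposite direction) is what the `else` branch performs; repeating it narrows the bracket around the focus peak.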

Journal ArticleDOI
TL;DR: In this paper, a constitutive relation for linear viscoelasticity of composite materials is formulated using the novel concept of Variable Order (VO) differintegrals, where the order of the derivative is allowed to be a function of the independent variable (time), rather than a constant of arbitrary order.
Abstract: A constitutive relation for linear viscoelasticity of composite materials is formulated using the novel concept of Variable Order (VO) differintegrals. In the model proposed in this work, the order of the derivative is allowed to be a function of the independent variable (time), rather than a constant of arbitrary order. We generalize previous works that used fractional derivatives for the stress and strain relationship by allowing a continuous spectrum of non-integer dynamics to describe the physical problem. Starting with the assumption that the order of the derivative is a measure of the rate of change of disorder within the material, we develop a statistical mechanical model that is in agreement with experimental results for strain rates varying more than eight orders of magnitude in value. We use experimental data for an epoxy resin and a carbon/epoxy composite undergoing constant compression rates in order to derive a VO constitutive equation that accurately models the linear viscoelastic deformation in time. The resulting dimensionless constitutive equation agrees well with all the normalized data while using a much smaller number of empirical coefficients when compared to available models in the literature.

Journal Article
TL;DR: In this paper, a chart similar to the S chart, using rational groups of observations, is proposed for monitoring processes in which the coefficient of variation, rather than the variance, must be constant.
Abstract: When monitoring variability in statistical process control, conventional Shewhart R and S charts are used where the in-control readings have a constant variance. These charts cannot be used, however, in certain instances when the coefficient of variation, rather than the variance, must be constant. A chart is proposed that is similar to the S chart and uses rational groups of observations for monitoring the coefficient of variation.
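A minimal version of such monitoring can be sketched as follows. This is an illustrative sketch with invented names and fixed control limits, not the paper's chart design (which derives its limits from the distribution of the sample CV):

```python
# Sketch: monitor the coefficient of variation gamma = s / xbar over
# rational subgroups, signalling when a subgroup's CV leaves the limits.

import statistics

def coefficient_of_variation(subgroup):
    return statistics.stdev(subgroup) / statistics.mean(subgroup)

def cv_chart_signals(subgroups, lcl, ucl):
    """Return indices of subgroups whose CV falls outside [lcl, ucl]."""
    return [i for i, g in enumerate(subgroups)
            if not (lcl <= coefficient_of_variation(g) <= ucl)]

groups = [
    [10.0, 10.2, 9.8, 10.1],   # CV ~ 0.017
    [20.0, 20.3, 19.7, 20.2],  # similar CV at twice the mean: in control
    [10.0, 13.0, 7.0, 11.0],   # inflated relative spread: out of control
]
print(cv_chart_signals(groups, lcl=0.0, ucl=0.10))  # only the third signals
```

Note that an S chart would flag the second subgroup too if absolute variability grew with the mean; charting the CV instead is exactly what handles processes whose standard deviation is proportional to the level.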

Proceedings Article
03 Dec 2007
TL;DR: This work proposes a new approach for dealing with the estimation of the location of change-points in one-dimensional piecewise constant signals observed in white noise by combining the LAR algorithm and a reduced version of the dynamic programming algorithm and applies it to synthetic and real data.
Abstract: We propose a new approach for dealing with the estimation of the location of change-points in one-dimensional piecewise constant signals observed in white noise. Our approach consists in reframing this task in a variable selection context. We use a penalized least-squares criterion with an ℓ1-type penalty for this purpose. We prove some theoretical results on the estimated change-points and on the underlying piecewise constant estimated function. Then, we explain how to implement this method in practice by combining the LAR algorithm and a reduced version of the dynamic programming algorithm and we apply it to synthetic and real data.
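The underlying least-squares idea can be shown in miniature. This sketch locates a single change-point by brute force; the paper's actual method (LAR combined with a reduced dynamic program, under an ℓ1-type penalty) handles an unknown number of change-points, and the function names here are invented:

```python
# Sketch: find one change-point in a piecewise constant signal by
# minimising the residual sum of squares over all split positions.

def rss(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def one_change_point(signal):
    """Index t minimising RSS(signal[:t]) + RSS(signal[t:])."""
    return min(range(1, len(signal)),
               key=lambda t: rss(signal[:t]) + rss(signal[t:]))

noisy = [0.1, -0.2, 0.0, 0.2, -0.1, 5.1, 4.9, 5.2, 4.8, 5.0]
assert one_change_point(noisy) == 5  # jump between indices 4 and 5
```

Extending this brute-force search to k change-points costs O(n^k), which is why the paper resorts to the LAR path and dynamic programming to keep the computation tractable.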

Posted Content
TL;DR: In this paper, it is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound, and a Johnson type upper bound II is derived that slightly improves on the Wang-Xing-Safavi-Naini bound.
Abstract: Very recently, an operator channel was defined by Koetter and Kschischang when they studied random network coding. They also introduced constant dimension codes and demonstrated that these codes can be employed to correct errors and/or erasures over the operator channel. Constant dimension codes are equivalent to the so-called linear authentication codes introduced by Wang, Xing and Safavi-Naini when constructing distributed authentication systems in 2003. In this paper, we study constant dimension codes. It is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures. Then, we derive two Johnson type upper bounds, say I and II, on constant dimension codes. The Johnson type bound II slightly improves on the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known Steiner structures is actually a family of optimal constant dimension codes achieving both the Johnson type bounds I and II.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the question of well-posedness for large data having critical Besov regularity and provided a new a priori estimate for a class of parabolic systems with variable coefficients.
Abstract: This paper is dedicated to the study of viscous compressible barotropic fluids in dimension N ≥ 2. We address the question of well-posedness for large data having critical Besov regularity. Our sole additional assumption is that the initial density be bounded away from zero. This improves the analysis of Danchin (2001) where the smallness of for some positive constant was needed. Our result relies on a new a priori estimate for a class of parabolic systems with variable coefficients, which is likely to be useful for the investigation of other models in fluid mechanics.

Journal ArticleDOI
TL;DR: In this paper, an effective strong coupling constant is presented that is defined to be a low-Q^2 extension of a previous definition by S. Brodsky et al., following an initial work of G. Grunberg.

Proceedings ArticleDOI
07 Jan 2007
TL;DR: In this article, it is shown that for general graphs the integrality gap of the usual linear programming relaxation for Maxcut remains 2 - ε even after adding all linear constraints of bounded support, whereas for dense graphs the gap drops to 1 + ε after adding all linear constraints of support bounded by some constant depending on ε.
Abstract: It is well-known that the integrality gap of the usual linear programming relaxation for Maxcut is 2 - ε. For general graphs, we prove that for any ε and any fixed bound k, adding linear constraints of support bounded by k does not reduce the gap below 2 - ε. We generalize this to prove that for any ε and any fixed bound k, strengthening the usual linear programming relaxation by doing k rounds of Sherali-Adams lift-and-project does not reduce the gap below 2 - ε. On the other hand, we prove that for dense graphs, this gap drops to 1 + ε after adding all linear constraints of support bounded by some constant depending on ε.

Journal ArticleDOI
TL;DR: In this article, the authors established a priori interior gradient estimates and existence theorems for n dimensional graphs S = {(x, u(x)) : x ∈ Ω} of constant mean curvature H > 0 in an n+ 1 dimensional Riemannian manifold of the form M × R where M is simply connected and complete and Ω is a bounded domain in M.
Abstract: In this paper we establish a priori interior gradient estimates and existence theorems for n dimensional graphs S = {(x, u(x)) : x ∈ Ω} of constant mean curvature H > 0 in an n+1 dimensional Riemannian manifold of the form M × R, where M is simply connected and complete and Ω is a bounded domain in M. Our aim is to illustrate the use of intrinsic methods that hold in great generality to obtain a priori estimates. In particular, we shall solve the Dirichlet problem for constant mean curvature graphs analogous to the results of Serrin [14, 15].

Patent
06 Dec 2007
TL;DR: In this article, a high pressure discharge lamp lighting device AC-lights a pair of electrodes in a light emission part, and includes a main circuit for outputting an AC lamp current, and a small signal control circuit for controlling the main circuit so that an output current becomes constant in a range in which the output current of the AC lamp becomes smaller than a rated lamp current.
Abstract: PROBLEM TO BE SOLVED: To provide exact control which is not affected by high-frequency noise in controlling to take measures against a problem caused by excessive protrusion growth. SOLUTION: The high pressure discharge lamp lighting device AC-lights a high pressure discharge lamp having a pair of electrodes in a light emission part, and includes a main circuit for outputting an AC lamp current, and a small signal control circuit for controlling the main circuit so that an output current becomes constant in a range in which the output current of the main circuit becomes smaller than a rated lamp current. The small signal control circuit includes a temperature detection circuit for detecting a temperature of a component structuring the main circuit, and a mode control means for switching an output current mode of the main circuit. The AC lamp current includes a first current mode for normal lighting, and a second current mode for melting a protrusion formed on the electrode. The mode control means is structured to apply the first current mode until a detected temperature by the temperature detection circuit reaches a predetermined value T1 or higher, and apply the second current mode after the detected temperature reaches the predetermined value T1 or higher. COPYRIGHT: (C)2010,JPO&INPIT

Journal ArticleDOI
TL;DR: In this paper, the existence of global integral manifolds of the quasilinear EPCAG is established when the associated linear homogeneous system has an exponential dichotomy.
Abstract: In this paper we introduce a general type of differential equations with piecewise constant argument (EPCAG). The existence of global integral manifolds of the quasilinear EPCAG is established when the associated linear homogeneous system has an exponential dichotomy. The smoothness of the manifolds is investigated. The existence of bounded and periodic solutions is considered. A new technique for the investigation of equations with piecewise constant argument, based on an integral representation formula, is proposed. Appropriate illustrating examples are given.

Journal ArticleDOI
TL;DR: The model of the confined hydrogen atom (CHA) was developed by Michels et al. as mentioned in this paper, and it is possible to obtain the CHA eigenvalues with extremely high accuracy (up to 100 decimal digits) using two completely different methods.
Abstract: The model of the confined hydrogen atom (CHA) was developed by Michels et al. in the mid-1930s to study matter subject to extreme pressure. However, since the eigenvalues cannot be obtained analytically, even the most accurate calculations have yielded little more than 10 figure accuracy. In this work, we show that it is possible to obtain the CHA eigenvalues with extremely high accuracy (up to 100 decimal digits) and we do that using two completely different methods. The first is based on formal solution of the confluent hypergeometric function while the second uses a series method. We also compare radial expectation values obtained by both methods and conclude that the wave functions obtained by these two different approaches are of high quality. In addition, we compute the hyperfine splitting constant, magnetic screening constant, polarizability in the Kirkwood approximation, and pressure as a function of the box radius. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2007

Journal ArticleDOI
TL;DR: It is shown that, using this approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters, and it is expected that more distributed data structures could be designed and implemented in a dynamic environment.
Abstract: We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to servers. We demonstrate the power of this approach by suggesting two new P2P architectures and various algorithms for them. The first serves as a DHT (distributed hash table) and the other is a dynamic expander network. The DHT network, which we call Distance Halving, allows logarithmic routing and load while preserving constant degrees. It offers an optimal tradeoff between degree and path length in the sense that degree d guarantees a path length of O(logdn). Another advantage over previous constructions is its relative simplicity. A major new contribution of this construction is a dynamic caching technique that maintains low load and storage, even under the occurrence of hot spots. Our second construction builds a network that is guaranteed to be an expander. The resulting topologies are simple to maintain and implement. Their simplicity makes it easy to modify and add protocols. A small variation yields a DHT which is robust against random Byzantine faults. Finally we show that, using our approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters. Therefore, we expect that more distributed data structures could be designed and implemented in a dynamic environment.
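The continuous-to-discrete idea can be illustrated with a toy cell decomposition of [0, 1). This sketch invents its own details (largest-cell splitting, SHA-256 key mapping, all names) and is not the Distance Halving construction itself, which additionally arranges cells so that routing takes O(log n) hops with constant degree:

```python
# Toy sketch: servers own disjoint cells of [0, 1); a key hashes to a
# point and is stored at the owner of the cell containing that point;
# a joining server splits the currently largest cell in half.

import bisect
import hashlib

class CellDHT:
    def __init__(self):
        self.starts = [0.0]      # left endpoint of each server's cell
        self.owners = ["s0"]

    def join(self, name):
        widths = [(self.starts[i + 1] if i + 1 < len(self.starts) else 1.0)
                  - self.starts[i] for i in range(len(self.starts))]
        i = max(range(len(widths)), key=widths.__getitem__)
        mid = self.starts[i] + widths[i] / 2.0
        self.starts.insert(i + 1, mid)   # new server takes the right half
        self.owners.insert(i + 1, name)

    def lookup(self, key):
        point = int(hashlib.sha256(key.encode()).hexdigest(), 16) / 16.0 ** 64
        return self.owners[bisect.bisect_right(self.starts, point) - 1]

dht = CellDHT()
dht.join("s1"); dht.join("s2")
assert dht.lookup("some-key") in {"s0", "s1", "s2"}
```

Splitting the largest cell keeps cell widths within a constant factor of each other, which is the load-balance property the continuous decomposition is designed to give.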

Journal ArticleDOI
TL;DR: Under standard necessary conditions, two constructions of controllers of nested saturation type are provided which extend to the general case partial results of previous papers, for an arbitrarily small bound on the control and large (constant) delay.
Abstract: This note deals with the problem of global asymptotic stabilization of linear systems by bounded static feedbacks subject to time delay. Under standard necessary conditions, we provide two constructions of controllers of nested saturation type which extend to the general case partial results of previous papers, for an arbitrarily small bound on the control and large (constant) delay. To validate the approach, an example with a third-order integrator and an oscillator of multiplicity two is presented.

Journal ArticleDOI
TL;DR: In this article, the Widom-Dyson constant is derived for the Gaussian Unitary Ensemble in random matrix theory, and a new derivation of this constant is presented, which can be adapted to calculate similar critical constants in other problems arising in Random Matrix Theory.

Journal ArticleDOI
TL;DR: In this paper, the evaluation of first-order queries over d-degree-bounded structures is considered as a dynamical process, and it is shown that queries on such structures can be evaluated in total time f(|φ|)·(|S| + |φ(S)|), where S is the structure, φ is the formula, φ(S) is the result of the query and f is some fixed function.
Abstract: A relational structure is d-degree-bounded, for some integer d, if each element of the domain belongs to at most d tuples. In this paper, we revisit the complexity of the evaluation problem of not necessarily Boolean first-order (FO) queries over d-degree-bounded structures. Query evaluation is considered here as a dynamical process. We prove that any FO query on d-degree-bounded structures belongs to the complexity class constant-Delay_lin, that is, can be computed by an algorithm that has two separate parts: it has a precomputation step of time linear in the size of the structure and then it outputs all solutions (i.e., tuples that satisfy the formula) one by one with a constant delay (i.e., depending on the size of the formula only) between each. Seen as a global process, this implies that queries on d-degree-bounded structures can be evaluated in total time f(|φ|)·(|S| + |φ(S)|) and space g(|φ|)·|S|, where S is the structure, φ is the formula, φ(S) is the result of the query and f, g are some fixed functions. Among other things, our results generalize a result of Seese on the data complexity of the model-checking problem for d-degree-bounded structures. Besides, the originality of our approach compared to related results is that it does not rely on Hanf's model-theoretic technique and is simple and informative since it essentially rests on a quantifier elimination method.

Posted Content
TL;DR: In this article, the Japanese economic behavior is modeled as a sum of two components, an economic trend and fluctuations, where the trend is an inverse function of GDP per capita with a constant numerator.
Abstract: The Japanese economic behavior is modeled. GDP evolution is represented as a sum of two components: an economic trend and fluctuations. The trend is an inverse function of GDP per capita with a constant numerator. The growth rate fluctuations are numerically equal to two thirds of the relative change in the number of eighteen-year-olds. Inflation is represented by a linear function of the labor force change rate. The models provide an accurate description of the poor economic performance and deflation separately. Using the models, GDP per capita is predicted for the next ten years and recommendations are given to overcome deflation.

Journal ArticleDOI
TL;DR: In this paper, a new ratio-based test of the null hypothesis that a time series exhibits no change in its persistence structure is proposed, against the alternative of a change in persistence from trend stationarity to difference stationarity or vice versa.
Abstract: . Using standardized cumulative sums of squared sub-sample residuals, we propose a new ratio-based test of the null hypothesis that a time series exhibits no change in its persistence structure [specifically that it displays constant I(1) behaviour] against the alternative of a change in persistence from trend stationarity to difference stationarity, or vice versa. Neither the direction nor location of any possible change under the alternative hypothesis need be assumed known. A key feature of our proposed test which distinguishes it from extant tests for persistence change [certain of which test the null hypothesis of constant I(0) behaviour while others, like our proposed test, test the null hypothesis of constant I(1) behaviour] is that it displays no tendency to spuriously over-reject when applied to series which, although not constant I(1) series, do not display a change in persistence [specifically are constant I(0) processes]. Moreover, where our ratio test correctly rejects the null of no persistence change, the tail in which the rejection occurs can also be used to identify the direction of change since, even in relatively small samples, the test almost never rejects in the right [left] tail when there is a change from I(0) to I(1) [I(1) to I(0)]. Again this useful property is not shared by existing tests. As a by-product of our analysis, we also propose breakpoint estimators which are consistent where the timing of the change in persistence is unknown.

Journal ArticleDOI
TL;DR: In this article, a special law of variation for Hubble's parameter is presented in a spatially homogeneous and anisotropic Bianchi type-I space-time that yields a constant value of deceleration parameter.
Abstract: A special law of variation for Hubble’s parameter is presented in a spatially homogeneous and anisotropic Bianchi type-I space-time that yields a constant value of deceleration parameter. Using the law of variation for Hubble’s parameter, exact solutions of Einstein’s field equations are obtained for Bianchi-I space-time filled with perfect fluid in two different cases where the universe exhibits power-law and exponential expansion. It is found that the solutions are consistent with the recent observations of type Ia supernovae. A detailed study of physical and kinematical properties of the models is carried out.

Book ChapterDOI
17 Dec 2007
TL;DR: This paper significantly improves the best known constant from 72 to 38, using a novel approach based on a 4-approximation that is devised for the subproblem where the points of P are located below a line l and contained in the subset of disks of D centered above l.
Abstract: Given a set P of points in the plane, and a set D of unit disks of fixed location, the discrete unit disk cover problem is to find a minimum-cardinality subset D′ ⊆ D that covers all points of P. This problem is a geometric version of the general set cover problem, where the sets are defined by a collection of unit disks. It is still NP-hard, but while the general set cover problem is not approximable within c log |P|, for some constant c, the discrete unit disk cover problem was shown to admit a constant-factor approximation. Due to its many important applications, e.g., in wireless network design, much effort has been invested in trying to reduce the constant of approximation of the discrete unit disk cover problem. In this paper we significantly improve the best known constant from 72 to 38, using a novel approach. Our solution is based on a 4-approximation that we devise for the subproblem where the points of P are located below a line l and contained in the subset of disks of D centered above l. This problem is of independent interest.