
Showing papers on "Function (mathematics) published in 2010"


Journal ArticleDOI
TL;DR: This work reports simultaneous measurements of the positions, velocities, and orientations as a function of time for up to a thousand wild-type Bacillus subtilis bacteria in a colony, demonstrating that bacteria are an excellent system to study the general phenomenon of collective motion.
Abstract: Flocking birds, fish schools, and insect swarms are familiar examples of collective motion that plays a role in a range of problems, such as spreading of diseases. Models have provided a qualitative understanding of the collective motion, but progress has been hindered by the lack of detailed experimental data. Here we report simultaneous measurements of the positions, velocities, and orientations as a function of time for up to a thousand wild-type Bacillus subtilis bacteria in a colony. The bacteria spontaneously form closely packed dynamic clusters within which they move cooperatively. The number of bacteria in a cluster exhibits a power-law distribution truncated by an exponential tail. The probability of finding clusters with large numbers of bacteria grows markedly as the bacterial density increases. The number of bacteria per unit area exhibits fluctuations far larger than those for populations in thermal equilibrium. Such “giant number fluctuations” have been found in models and in experiments on inert systems but not observed previously in a biological system. Our results demonstrate that bacteria are an excellent system to study the general phenomenon of collective motion.

552 citations


Journal ArticleDOI
TL;DR: Numerical results for frequency assignment, maximum stable set and binary integer quadratic programming problems demonstrate that the algorithms are robust and very efficient due to their ability to exploit special structures, such as sparsity and constraint orthogonality in these problems.
Abstract: We present an alternating direction dual augmented Lagrangian method for solving semidefinite programming (SDP) problems in standard form. At each iteration, our basic algorithm minimizes the augmented Lagrangian function for the dual SDP problem sequentially, first with respect to the dual variables corresponding to the linear constraints, and then with respect to the dual slack variables, while in each minimization keeping the other variables fixed, and then finally it updates the Lagrange multipliers (i.e., primal variables). Convergence is proved by using a fixed-point argument. For SDPs with inequality constraints and positivity constraints, our algorithm is extended to separately minimize the dual augmented Lagrangian function over four sets of variables. Numerical results for frequency assignment, maximum stable set and binary integer quadratic programming problems demonstrate that our algorithms are robust and very efficient due to their ability to exploit special structures, such as sparsity and constraint orthogonality in these problems.
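As a hedged sketch of the structure described above (not the paper's exact notation; sign and scaling conventions may differ): for the standard-form SDP $\min_X \{\langle C, X \rangle : \mathcal{A}(X) = b,\ X \succeq 0\}$, the dual augmented Lagrangian in the dual variables $(y, S)$ with multiplier $X$ can be written as
\[
\mathcal{L}_\mu(y, S, X) = -b^{\top} y + \langle X,\ \mathcal{A}^{*}(y) + S - C \rangle + \tfrac{1}{2\mu}\, \| \mathcal{A}^{*}(y) + S - C \|_F^2 ,
\]
and one iteration of the alternating scheme minimizes over $y$ with $S$ fixed, then over $S \succeq 0$ with $y$ fixed (a projection onto the positive semidefinite cone), and finally updates the multiplier, e.g. $X \leftarrow X + \big(\mathcal{A}^{*}(y) + S - C\big)/\mu$.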

412 citations


BookDOI
01 Jan 2010

394 citations


Journal ArticleDOI
01 Jun 2010
TL;DR: This paper studies optimal linear-consensus algorithms for multivehicle systems with single-integrator dynamics in both continuous-time and discrete-time settings and shows that any symmetric Laplacian matrix is inverse optimal with respect to a properly chosen cost function.
Abstract: Laplacian matrices play an important role in linear-consensus algorithms. This paper studies optimal linear-consensus algorithms for multivehicle systems with single-integrator dynamics in both continuous-time and discrete-time settings. We propose two global cost functions, namely, interaction-free and interaction-related cost functions. With the interaction-free cost function, we derive the optimal (nonsymmetric) Laplacian matrix by using a linear-quadratic-regulator-based method in both continuous-time and discrete-time settings. It is shown that the optimal (nonsymmetric) Laplacian matrix corresponds to a complete directed graph. In addition, we show that any symmetric Laplacian matrix is inverse optimal with respect to a properly chosen cost function. With the interaction-related cost function, we derive the optimal scaling factor for a prespecified symmetric Laplacian matrix associated with the interaction graph in both continuous-time and discrete-time settings. Illustrative examples are given as a proof of concept.
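For context, the single-integrator consensus dynamics referred to above take the standard form
\[
\dot{x}(t) = -L\, x(t)
\]
in continuous time, with a discrete-time counterpart of the form $x[k+1] = x[k] - \epsilon L\, x[k]$ for a step size $\epsilon > 0$, where $L$ is the (possibly nonsymmetric) Laplacian matrix being optimized; this is the textbook formulation, stated here for orientation rather than quoted from the paper.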

333 citations


01 Jan 2010
TL;DR: In this article, the authors prove that the expected value of a monotone function of an uncertain variable is a Lebesgue-Stieltjes integral of the function with respect to its uncertainty distribution.
Abstract: Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, countable subadditivity, and product measure axioms. Different from randomness and fuzziness, uncertainty theory provides a new mathematical model for uncertain phenomena. A key concept to describe an uncertain quantity is the uncertain variable, and the expected value operator provides an average value of an uncertain variable in the sense of uncertain measure. This paper proves that the expected value of a monotone function of an uncertain variable is just a Lebesgue-Stieltjes integral of the function with respect to its uncertainty distribution, and gives some useful expressions for the expected value of functions of uncertain variables.
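In symbols, the stated result reads: if $f$ is a monotone function and the uncertain variable $\xi$ has uncertainty distribution $\Phi$, then
\[
E[f(\xi)] = \int_{-\infty}^{+\infty} f(x)\, \mathrm{d}\Phi(x),
\]
a Lebesgue-Stieltjes integral with respect to $\Phi$ (standard uncertainty-theory notation, not copied verbatim from the paper).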

297 citations


Journal ArticleDOI
TL;DR: The proposed method has the same functional capabilities as a structural optimization method based on the level set method incorporating perimeter control functions and is applied to two-dimensional linear elastic and vibration optimization problems such as the minimum compliance problem, a compliant mechanism design problem and the eigenfrequency maximization problem.

291 citations


Posted ContentDOI
01 Jan 2010
TL;DR: The current version of the Ecofin Council approved production function (PF) methodology is used for assessing both the productive capacity and cyclical position (i.e. output gaps) of EU economies.
Abstract: This paper provides a detailed description of the current version of the Ecofin Council approved production function (PF) methodology which is used for assessing both the productive capacity (i.e. potential output) and cyclical position (i.e. output gaps) of EU economies. Compared with the previous 2010 paper on the same topic, there have been two significant changes to the PF methodology, namely an overhaul of the NAWRU methodology & the introduction of a new T+10 methodology.

281 citations


Journal ArticleDOI
TL;DR: A double image encryption using random binary encoding and the gyrator transform is proposed, in which an iterative structure composed of the random binary encoding method is designed and employed to enhance the security of the encryption algorithm.
Abstract: We propose a double image encryption by using random binary encoding and gyrator transform. Two secret images are first regarded as the real part and imaginary part of a complex function. A chaotic map is used for obtaining a random binary matrix. The real part and imaginary part of the complex function are exchanged under the control of the random binary data. An iterative structure composed of the random binary encoding method is designed and employed for enhancing the security of the encryption algorithm. The parameters in the chaotic map and gyrator transform serve as the keys of this encryption scheme. Some numerical simulations have been made to demonstrate the performance of this algorithm.
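A minimal sketch of the binary-controlled exchange step, under stated assumptions: the logistic map, its parameters, and the array sizes below are illustrative placeholders rather than the paper's keys, and the gyrator-transform stage of the scheme is omitted entirely.

import numpy as np

def logistic_binary_mask(shape, x0=0.37, r=3.99):
    # Iterate the logistic map and threshold at 0.5 to get a pseudo-random
    # binary matrix (placeholder for the paper's chaotic-map construction).
    n = int(np.prod(shape))
    values = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        values[i] = x
    return (values > 0.5).reshape(shape)

def binary_exchange(img1, img2, mask):
    # Treat the two secret images as the real and imaginary parts of a
    # complex function and swap them wherever the binary mask is True.
    real, imag = img1.astype(float).copy(), img2.astype(float).copy()
    real[mask], imag[mask] = img2[mask].astype(float), img1[mask].astype(float)
    return real + 1j * imag

# Toy usage on random 4x4 "images".
a, b = np.random.rand(4, 4), np.random.rand(4, 4)
mask = logistic_binary_mask(a.shape)
encoded = binary_exchange(a, b, mask)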

257 citations


Journal ArticleDOI
TL;DR: A path algorithm for solving the Fused Lasso Signal Approximator that computes the solutions for all values of λ1 and λ2 and presents an approximate algorithm that has considerable speed advantages for a moderate trade-off in accuracy.
Abstract: The Lasso is a very well-known penalized regression model, which adds an L1 penalty with parameter λ1 on the coefficients to the squared error loss function. The Fused Lasso extends this model by also putting an L1 penalty with parameter λ2 on the difference of neighboring coefficients, assuming there is a natural ordering. In this article, we develop a path algorithm for solving the Fused Lasso Signal Approximator that computes the solutions for all values of λ1 and λ2. We also present an approximate algorithm that has considerable speed advantages for a moderate trade-off in accuracy. In the Online Supplement for this article, we provide proofs and further details for the methods developed in the article.
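Concretely, the Fused Lasso Signal Approximator discussed above solves (in standard notation, which may differ cosmetically from the article's)
\[
\min_{\beta \in \mathbb{R}^n} \; \tfrac{1}{2} \sum_{i=1}^{n} (y_i - \beta_i)^2 \;+\; \lambda_1 \sum_{i=1}^{n} |\beta_i| \;+\; \lambda_2 \sum_{i=2}^{n} |\beta_i - \beta_{i-1}| ,
\]
and the path algorithm tracks the solution as $\lambda_1$ and $\lambda_2$ vary.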

256 citations


Journal ArticleDOI
TL;DR: In this paper, the first analytic computation of the two-loop six-edged Wilson loop was performed in the quasi-multi-Regge kinematics of a pair along the ladder.
Abstract: In the planar N=4 supersymmetric Yang-Mills theory, the conformal symmetry constrains multi-loop n-edged Wilson loops to be given in terms of the one-loop n-edged Wilson loop, augmented, for n greater than 6, by a function of conformally invariant cross ratios. That function is termed the remainder function. In a recent paper, we have displayed the first analytic computation of the two-loop six-edged Wilson loop, and thus of the corresponding remainder function. Although the calculation was performed in the quasi-multi-Regge kinematics of a pair along the ladder, the Regge exactness of the six-edged Wilson loop in those kinematics entails that the result is the same as in general kinematics. We show in detail how the most difficult of the integrals which contribute to the six-edged Wilson loop is computed. Finally, the remainder function is given as a function of uniform transcendental weight four in terms of Goncharov polylogarithms. We consider also some asymptotic values of the remainder function, and the value when all the cross ratios are equal.

254 citations


Posted Content
TL;DR: In this paper, a generalization of stochastic bandits where the set of arms is allowed to be a generic measurable space and the mean-payoff function is locally Lipschitz with respect to a dissimilarity function that is known to the decision maker is considered.
Abstract: We consider a generalization of stochastic bandits where the set of arms, $\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function is "locally Lipschitz" with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements with respect to previous approaches.

Journal ArticleDOI
TL;DR: In this paper, a unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling was provided, which can be used to re-derive some existing results.
Abstract: High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless $p/n\rightarrow0$, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both $\ell_2$-error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases.
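Schematically, the regularized M-estimators covered by this framework take the form
\[
\hat{\theta} \in \arg\min_{\theta} \; \big\{ \mathcal{L}_n(\theta) + \lambda_n \mathcal{R}(\theta) \big\},
\]
where $\mathcal{L}_n$ is the loss measuring fit to the data and $\mathcal{R}$ is the regularizer encouraging the assumed structure; restricted strong convexity of $\mathcal{L}_n$ and decomposability of $\mathcal{R}$ are the two key properties identified in the paper (the notation here is generic, not quoted from the article).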

Proceedings Article
21 Jun 2010
TL;DR: The Greedy-GQ algorithm is an extension of recent work on gradient temporal-difference learning to a control setting in which the target policy is greedy with respect to a linear approximation to the optimal action-value function.
Abstract: We present the first temporal-difference learning algorithm for off-policy control with unrestricted linear function approximation whose per-time-step complexity is linear in the number of features. Our algorithm, Greedy-GQ, is an extension of recent work on gradient temporal-difference learning, which has hitherto been restricted to a prediction (policy evaluation) setting, to a control setting in which the target policy is greedy with respect to a linear approximation to the optimal action-value function. A limitation of our control setting is that we require the behavior policy to be stationary. We call this setting latent learning because the optimal policy, though learned, is not manifest in behavior. Popular off-policy algorithms such as Q-learning are known to be unstable in this setting when used with linear function approximation.

Journal ArticleDOI
TL;DR: Using the new fractional Taylor’s series, two new families of fractional Black–Scholes equations are derived, and some proposals to introduce real data and virtual data in the basic equation of stock exchange dynamics are made.
Abstract: By using the new fractional Taylor's series of fractional order $f(x+h) = E_\alpha(h^\alpha D_x^\alpha)\, f(x)$, where $E_\alpha(\cdot)$ denotes the Mittag–Leffler function and $D_x^\alpha$ is the so-called modified Riemann–Liouville fractional derivative which we introduced recently to remove the effects of the non-zero initial value of the function under consideration, one can meaningfully consider a modeling of fractional stochastic differential equations as a fractional dynamics driven by a (usual) Gaussian white noise. One can then derive two new families of fractional Black–Scholes equations, and one shows how one can obtain their solutions. Merton's optimal portfolio is once more considered and some new results are contributed, with respect to the modeling on one hand, and to the solution on the other hand. Finally, one makes some proposals to introduce real data and virtual data in the basic equation of stock exchange dynamics.
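For reference, the Mittag–Leffler function appearing in the fractional Taylor series above is the standard
\[
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}, \qquad \alpha > 0,
\]
which reduces to $e^{z}$ when $\alpha = 1$.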

01 Jan 2010
TL;DR: In this article, bounds for the moments of sums of sequences of independent random variables are derived and shown to be tighter than previous bounds due to Johnson et al. [10] and Latala [12].
Abstract: We provide bounds for moments of sums of sequences of independent random variables. Concentrating on uniformly bounded nonnegative random variables, we are able to improve upon previous results due to Johnson et al. [10] and Latala [12]. Our basic results provide bounds involving Stirling numbers of the second kind and Bell numbers. By deriving novel effective bounds on Bell numbers and the related Bell function, we are able to translate our moment bounds to explicit ones, which are tighter than previous bounds. The study was motivated by a problem in operations research, in which it was required to estimate the Lp-moments of sums of uniformly bounded non-negative random variables representing the processing times of jobs that were assigned to some machine in terms of the expectation of their sum. 2000 AMS Mathematics Subject Classification: Primary: 60E15; Secondary: 05A18, 11B73, 26D15.

Journal ArticleDOI
TL;DR: In this article, a Bayes linear approach is presented to identify the subset of the input space that could give rise to acceptable matches between model output and measured data, an approach known as history matching.
Abstract: In many scientific disciplines complex computer models are used to understand the behaviour of large scale physical systems. An uncertainty analysis of such a computer model known as Galform is presented. Galform models the creation and evolution of approximately one million galaxies from the beginning of the Universe until the current day, and is regarded as a state-of-the-art model within the cosmology community. It requires the specification of many input parameters in order to run the simulation, takes significant time to run, and provides various outputs that can be compared with real world data. A Bayes Linear approach is presented in order to identify the subset of the input space that could give rise to acceptable matches between model output and measured data. This approach takes account of the major sources of uncertainty in a consistent and unified manner, including input parameter uncertainty, function uncertainty, observational error, forcing function uncertainty and structural uncertainty. The approach is known as History Matching, and involves the use of an iterative succession of emulators (stochastic belief specifications detailing beliefs about the Galform function), which are used to cut down the input parameter space. The analysis was successful in producing a large collection of model evaluations that exhibit good fits to the observed data.

Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition is proved that characterizes which functions are uncertainty distributions, and some special examples are given at the end of the paper.
Abstract: Uncertainty distribution is an important tool to specify an uncertain variable. In this paper, a necessary and sufficient condition is proved that characterizes which functions are uncertainty distributions. At the end of the paper, some special examples are given.

Book ChapterDOI
14 Sep 2010
TL;DR: The first contribution is an efficient algorithm to compute ranking functions: it can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and the method, although greedy, is provably complete.
Abstract: Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
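A toy illustration of the idea (not the paper's algorithm or notation): for the loop below, the affine function $\rho(x, y) = x + y$ maps reachable states to the natural numbers and strictly decreases on every transition, so it is a ranking function witnessing termination; its initial value also bounds the number of transitions, mirroring the complexity bounds mentioned above.

def loop(x: int, y: int) -> None:
    # Reachable states satisfy the invariant x >= 0 and y >= 0,
    # so rho(x, y) = x + y is a non-negative integer.
    assert x >= 0 and y >= 0
    while x > 0 or y > 0:
        if x > 0:
            x -= 1   # rho decreases by exactly 1
        else:
            y -= 1   # rho decreases by exactly 1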

Journal ArticleDOI
TL;DR: The efficient approximation of functions by sums of exponentials or Gaussians in Beylkin and Monzon (2005) is revisited to discuss several new results and applications, and the Poisson summation formula is used to discretize integral representations of, e.g., power functions $r^{-\beta}$, $\beta > 0$.

Journal ArticleDOI
TL;DR: In this paper, the authors measured and studied the evolution of the UV galaxy Luminosity Function (LF) at z = 3-5 from the largest high-redshift survey to date, the Deep part of the CFHT Legacy Survey.
Abstract: We measure and study the evolution of the UV galaxy Luminosity Function (LF) at z=3-5 from the largest high-redshift survey to date, the Deep part of the CFHT Legacy Survey. We also give accurate estimates of the SFR density at these redshifts. We consider ~100,000 Lyman-break galaxies at z~3.1, 3.8 & 4.8 selected from very deep ugriz images of this data set and estimate their rest-frame 1600A luminosity function. Due to the large survey volume, cosmic variance plays a negligible role. Furthermore, we measure the bright end of the LF with unprecedented statistical accuracy. Contamination fractions from stars and low-z galaxy interlopers are estimated from simulations. To correct for incompleteness, we study the detection rate of simulated galaxies injected into the images as a function of magnitude and redshift. We estimate the contribution of several systematic effects in the analysis to test the robustness of our results. We find the bright end of the LF of our u-dropout sample to deviate significantly from a Schechter function. If we modify the function by a recently proposed magnification model, the fit improves. For the first time in an LBG sample, we can measure down to the density regime where magnification affects the shape of the observed LF because of the very bright and rare galaxies we are able to probe with this data set. We find an increase in the normalisation, $\phi^{*}$, of the LF by a factor of 2.5 between z~5 and z~3. The faint-end slope of the LF does not evolve significantly between z~5 and z~3. We do not find a significant evolution of the characteristic magnitude in the studied redshift interval. The SFR density is found to increase by a factor of ~2 from z~5 to z~4. The evolution from z~4 to z~3 is less pronounced.
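For context, the Schechter function against which the bright end of the LF is compared has the standard form
\[
\phi(L)\,\mathrm{d}L = \phi^{*} \left( \frac{L}{L^{*}} \right)^{\alpha} e^{-L/L^{*}} \, \frac{\mathrm{d}L}{L^{*}} ,
\]
with normalisation $\phi^{*}$, characteristic luminosity $L^{*}$ and faint-end slope $\alpha$; this textbook form is given for orientation and is not quoted from the paper.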

Journal ArticleDOI
TL;DR: In this paper, the Dotsenko-Fateev multiple integral is treated as a perturbed double-Selberg matrix model and the planar free energy in the q-expansion is computed to the lowest non-trivial order.

Posted Content
03 Sep 2010
TL;DR: The complexity of stochastic convex optimization in an oracle model of computation is studied and tight minimax complexity estimates are obtained for various function classes.
Abstract: Relative to the large literature on upper bounds on complexity of convex optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

Journal ArticleDOI
TL;DR: This paper studies analytically the convergence behavior of the local RBF method as a function of the number of nodes employed in the scheme, the nodal distance, and the shape parameter, and finds that there is an optimal value of the shape parameter for which the error is minimum.

Journal ArticleDOI
TL;DR: In this method, new stochastic variables and orthogonal polynomials are constructed as time progresses, and the solution can be represented exactly by linear functions, allowing the method to use only low-order polynomial approximations with high accuracy.

Proceedings ArticleDOI
12 Apr 2010
TL;DR: A natural metric is introduced between sets of sensors that can be used to construct covariance functions over sets, and thereby perform Gaussian process inference over a function whose domain is a power set.
Abstract: We consider the problem of selecting an optimal set of sensors, as determined, for example, by the predictive accuracy of the resulting sensor network. Given an underlying metric between pairs of set elements, we introduce a natural metric between sets of sensors for this task. Using this metric, we can construct covariance functions over sets, and thereby perform Gaussian process inference over a function whose domain is a power set. If the function has additional inputs, our covariances can be readily extended to incorporate them---allowing us to consider, for example, functions over both sets and time. These functions can then be optimized using Gaussian process global optimization (GPGO). We use the root mean squared error (RMSE) of the predictions made using a set of sensors at a particular time as an example of such a function to be optimized; the optimal point specifies the best choice of sensor locations. We demonstrate the resulting method by dynamically selecting the best subset of a given set of weather sensors for the prediction of the air temperature across the United Kingdom.
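A minimal sketch under stated assumptions: the Hausdorff distance and squared-exponential form below are illustrative stand-ins for the paper's set metric and covariance construction (an arbitrary distance plugged in this way is not guaranteed to give a positive-definite covariance), and the data are synthetic.

import numpy as np

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two finite point sets in R^d.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def set_covariance(sets, lengthscale=1.0):
    # Squared-exponential covariance built from the set distance.
    n = len(sets)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-hausdorff(sets[i], sets[j]) ** 2 / (2 * lengthscale ** 2))
    return K

# Toy usage: three candidate sensor subsets in the plane with noisy scores
# (e.g. the negative RMSE of the predictions each subset would give).
sets = [np.random.rand(3, 2) for _ in range(3)]
scores = np.random.rand(3)
K = set_covariance(sets) + 1e-6 * np.eye(3)   # jitter for numerical stability
alpha = np.linalg.solve(K, scores)            # weights of the GP posterior mean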

Journal ArticleDOI
TL;DR: The non-manipulability and the ordinal efficiency of the probabilistic serial mechanism support its implementation instead of random serial dictatorship in large assignment problems.

Posted Content
TL;DR: In this paper, it was shown that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a fixed, input-independent reflection with a second reflection that coherently queries the input string.
Abstract: We show that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string. Originally introduced for solving the unstructured search problem, this two-reflections structure is therefore a universal feature of quantum algorithms. Our proof goes via the general adversary bound, a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. By a quantum algorithm for evaluating span programs, this lower bound is known to be tight up to a sub-logarithmic factor. The extra factor comes from converting a continuous-time query algorithm into a discrete-query algorithm. We give a direct and simplified quantum algorithm based on the dual SDP, with a bounded-error query complexity that matches the general adversary bound. Therefore, the general adversary lower bound is tight; it is in fact an SDP for quantum query complexity. This implies that the quantum query complexity of the composition f(g,...,g) of two boolean functions f and g matches the product of the query complexities of f and g, without a logarithmic factor for error reduction. It further shows that span programs are equivalent to quantum query algorithms.

Journal ArticleDOI
TL;DR: This paper proposes a generalized probability density function based on the nth power of a cosine-squared function that derives the average covariance matrix for various different elementary scatterers and shows that the result has a very simple analytical form suitable for use in model-based decomposition schemes.
Abstract: Current polarimetric model-based decomposition techniques are limited to specific types of vegetation because of their assumptions about the volume scattering component. In this paper, we propose a generalized probability density function based on the nth power of a cosine-squared function. This distribution is completely characterized by two parameters; a mean orientation angle and the power of the cosine-squared function. We show that the underlying randomness of the distribution is only a function of the power of the cosine-squared function. We then derive the average covariance matrix for various different elementary scatterers showing that the result has a very simple analytical form suitable for use in model-based decomposition schemes.
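Read schematically, the abstract suggests an orientation density of the form
\[
p(\theta) \propto \cos^{2n}\!\big(\theta - \bar{\theta}\big),
\]
with mean orientation angle $\bar{\theta}$ and power $n$ controlling the degree of randomness; the exact parameterisation and normalisation used in the paper may differ.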

Proceedings Article
08 Jul 2010
TL;DR: In this article, the authors consider two variables that are related to each other by an invertible function and show that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will in a certain sense depend on the function.
Abstract: We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.

Journal ArticleDOI
TL;DR: In this paper, the authors measured the clustering of dark matter halos in a large set of collisionless cosmological simulations of the flat LCDM cosmology, and used fitting functions for the large scale bias that are adaptable to any value of Delta.
Abstract: We measure the clustering of dark matter halos in a large set of collisionless cosmological simulations of the flat LCDM cosmology. Halos are identified using the spherical overdensity algorithm, which finds the mass around isolated peaks in the density field such that the mean density is Delta times the background. We calibrate fitting functions for the large scale bias that are adaptable to any value of Delta we examine. We find a ~6% scatter about our best fit bias relation. Our fitting functions couple to the halo mass functions of Tinker et al. (2008) such that bias of all dark matter is normalized to unity. We demonstrate that the bias of massive, rare halos is higher than that predicted in the modified ellipsoidal collapse model of Sheth, Mo, & Tormen (2001), and approaches the predictions of the spherical collapse model for the rarest halos. Halo bias results based on friends-of-friends halos identified with linking length 0.2 are systematically lower than for halos with the canonical Delta=200 overdensity by ~10%. In contrast to our previous results on the mass function, we find that the universal bias function evolves very weakly with redshift, if at all. We use our numerical results, both for the mass function and the bias relation, to test the peak-background split model for halo bias. We find that the peak-background split achieves a reasonable agreement with the numerical results, but ~20% residuals remain, both at high and low masses.