
Showing papers on "Constant (mathematics)" published in 2020


Journal ArticleDOI
25 Nov 2020
TL;DR: A simple formal analysis of constant product markets and their generalizations is given, showing that, under some common conditions, these markets must closely track the reference market price.
Abstract: Uniswap---and other constant product markets---appear to work well in practice despite their simplicity. In this paper, we give a simple formal analysis of constant product markets and their generalizations, showing that, under some common conditions, these markets must closely track the reference market price. We also show that Uniswap satisfies many other desirable properties and numerically demonstrate, via a large-scale agent-based simulation, that Uniswap is stable under a wide range of market conditions.
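
A minimal sketch of the constant product rule analyzed above, x·y = k (an illustration of the mechanism, not the paper's formal model; the fee value and reserves below are arbitrary):

```python
# Minimal sketch of a constant product market (x * y = k), as in Uniswap-style AMMs.
# Illustration of the mechanism analyzed in the paper, not its formal model.

def swap_amount_out(x_reserve: float, y_reserve: float, dx: float, fee: float = 0.003) -> float:
    """Amount of asset Y received when selling dx of asset X into the pool.

    The pool keeps x * y constant (up to fees): (x + (1 - fee) * dx) * (y - dy) = x * y.
    """
    dx_after_fee = (1.0 - fee) * dx
    k = x_reserve * y_reserve
    new_y = k / (x_reserve + dx_after_fee)
    return y_reserve - new_y

def marginal_price(x_reserve: float, y_reserve: float) -> float:
    """Instantaneous price of X in units of Y implied by the reserves (dy/dx as dx -> 0)."""
    return y_reserve / x_reserve

if __name__ == "__main__":
    x, y = 1_000.0, 2_000.0                           # hypothetical reserves
    print("marginal price:", marginal_price(x, y))    # 2.0 Y per X
    dy = swap_amount_out(x, y, dx=10.0)
    print("Y received for 10 X:", dy)                 # slightly below 20 due to slippage and fee
```

Because the marginal price y/x moves against the trader as the reserves change, any deviation from the reference market price creates an arbitrage opportunity, which is the informal reason such pools track that price.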

102 citations


Journal ArticleDOI
TL;DR: The aperiodic sampled-data control law is utilized and an updated Lyapunov functional is developed from the augmentation of Wirtinger's inequality, which results in efficient and simplified synchronization conditions for network systems with nonlinear dynamics.
Abstract: In this paper, the synchronization issue for network systems with nonlinear dynamics is considered. Together with a zero-order hold, an aperiodic sampled-data control law is utilized. Compared with the traditional periodic sampled-data control method, this approach offers greater flexibility. Adopting the input delay approach, the initial sampled-data system is remodeled as a continuous-time system with time-varying delays in the control signals. For the purpose of designing sampling controllers subject to constant delays, an updated Lyapunov functional is developed from the augmentation of Wirtinger's inequality. Such a Lyapunov functional results in efficient and simplified synchronization conditions. A sufficient condition for the synchronizability of network systems is set up. Then, for the case of unstable systems with some constant delays, a fresh discretized Lyapunov functional is introduced. Finally, numerical simulation outcomes are used to demonstrate the efficacy and advantage of our algorithm. Moreover, based on networked unmanned ground vehicle systems, experimental results in a real scenario are provided to illustrate the effectiveness of the designed synchronization scheme.

95 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size, and provided an explicit asymptotic expansion of the moments of the averaged SGD iterates that outlines the dependence on initial conditions, the effect of noise and the step size.
Abstract: We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size. While the detailed analysis was only performed for quadratic functions, we provide an explicit asymptotic expansion of the moments of the averaged SGD iterates that outlines the dependence on initial conditions, the effect of noise and the step-size, as well as the lack of convergence in the general (non-quadratic) case. For this analysis, we bring tools from Markov chain theory into the analysis of stochastic gradient. We then show that Richardson-Romberg extrapolation may be used to get closer to the global optimum and we show empirical improvements of the new extrapolation scheme.
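
A toy numerical sketch of the extrapolation idea (the objective, noise model and step sizes below are arbitrary illustrative choices, not the paper's setting): run Polyak-Ruppert averaged SGD with constant step sizes γ and 2γ, then combine the two averages as 2·x̄_γ − x̄_{2γ} so that the leading O(γ) bias term cancels.

```python
import numpy as np

# Illustrative Richardson-Romberg extrapolation for constant-step-size averaged SGD.
# Toy non-quadratic objective f(x) = sqrt(1 + (x - 1)^2), minimized at x* = 1;
# gradients are observed with additive Gaussian noise (unbiased estimates).

rng = np.random.default_rng(0)

def noisy_grad(x: float, noise_std: float = 1.0) -> float:
    g = (x - 1.0) / np.sqrt(1.0 + (x - 1.0) ** 2)   # exact gradient
    return g + noise_std * rng.standard_normal()

def averaged_sgd(step: float, n_iters: int = 200_000, x0: float = 0.0) -> float:
    x, running_sum = x0, 0.0
    for _ in range(n_iters):
        x -= step * noisy_grad(x)
        running_sum += x
    return running_sum / n_iters                    # Polyak-Ruppert average

gamma = 0.05
xbar_small = averaged_sgd(gamma)                    # biased by roughly  C * gamma
xbar_large = averaged_sgd(2 * gamma)                # biased by roughly  2 * C * gamma
x_rr = 2 * xbar_small - xbar_large                  # Richardson-Romberg: leading bias cancels

print(f"avg (gamma)   : {xbar_small:.4f}")
print(f"avg (2*gamma) : {xbar_large:.4f}")
print(f"extrapolated  : {x_rr:.4f}  (target x* = 1.0)")
```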

87 citations


Posted Content
TL;DR: This work introduces LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks, and shows how to use structural properties of the network, such as sparsity, to significantly reduce the complexity of computation.
Abstract: We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use the sparse connectivity of a network to significantly reduce the complexity of computation. This is especially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the $\ell_\infty$-Lipschitz constant, our approach yields superior estimates compared to baselines available in the literature.

83 citations


Journal ArticleDOI
TL;DR: In this article, the existence of two constant-sign solutions for parametric double phase problems is shown provided the parameter is larger than the first eigenvalue of the p-Laplacian, and a priori estimates are proved for solutions of a general class of double phase problems with convection term.
Abstract: We study parametric double phase problems involving superlinear nonlinearities with a growth that need not necessarily be polynomial. Based on truncation and comparison methods, the existence of two constant-sign solutions is shown provided the parameter is larger than the first eigenvalue of the p-Laplacian. As a result of independent interest, we prove a priori estimates for solutions of a general class of double phase problems with convection term.

74 citations


Journal ArticleDOI
TL;DR: Zhang et al. propose a unified single image dehazing network that jointly estimates the transmission map and performs dehazing within an end-to-end learning framework, where the inherent transmission map and the dehazed result are learned jointly from the loss function.
Abstract: Single image haze removal is an extremely challenging problem due to its inherent ill-posed nature. Several prior-based and learning-based methods have been proposed in the literature to solve this problem and they have achieved visually appealing results. However, most of the existing methods assume a constant atmospheric light model and tend to follow a two-step procedure: a prior-based method estimates the transmission map, and the dehazed image is then calculated from the closed-form solution. In this paper, we relax the constant atmospheric light assumption and propose a novel unified single image dehazing network that jointly estimates the transmission map and performs dehazing. In other words, our new approach provides an end-to-end learning framework, where the inherent transmission map and dehazed result are learned jointly from the loss function. Extensive experiments on synthetic and real datasets with challenging hazy images demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods.
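
For context, the constant atmospheric light assumption refers to the standard hazy image formation model I = J·t + A·(1 − t); the sketch below shows the classical closed-form recovery J = (I − A)/t + A that two-step methods use once t is estimated (illustrative only, not the network proposed in the paper).

```python
import numpy as np

# Standard atmospheric scattering model: I = J * t + A * (1 - t),
# where I is the hazy image, J the scene radiance, t the transmission map,
# and A the (here constant) atmospheric light. This is the classical two-step
# closed-form recovery the paper moves away from, shown only for context.

def dehaze_closed_form(hazy: np.ndarray, t: np.ndarray, A: np.ndarray,
                       t_min: float = 0.1) -> np.ndarray:
    """Recover J = (I - A) / max(t, t_min) + A, clipped to [0, 1]."""
    t = np.clip(t, t_min, 1.0)[..., None]          # avoid division by ~0
    J = (hazy - A) / t + A
    return np.clip(J, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J_true = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # toy scene radiance
    t_true = rng.uniform(0.3, 0.9, size=(4, 4))      # toy transmission map
    A = np.array([0.9, 0.9, 0.9])                    # constant atmospheric light
    hazy = J_true * t_true[..., None] + A * (1.0 - t_true[..., None])
    J_rec = dehaze_closed_form(hazy, t_true, A)
    print("max reconstruction error:", np.abs(J_rec - J_true).max())
```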

74 citations


Journal ArticleDOI
01 Jan 2020
TL;DR: In this paper, a method using contour integration to derive definite integrals and their associated infinite sums which can be expressed as a special function is presented, where the advantage of using special functions is their analytic continuation which widens the range of parameters of the definite integral over which the formula is valid.
Abstract: We present a method using contour integration to derive definite integrals and their associated infinite sums which can be expressed as a special function. We give a proof of the basic equation and some examples of the method. The advantage of using special functions is their analytic continuation which widens the range of the parameters of the definite integral over which the formula is valid. We give as examples definite integrals of logarithmic functions times a trigonometric function. In various cases these generalizations evaluate to known mathematical constants such as Catalan's constant and $\pi$.
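
A classical identity of exactly this flavor (a logarithm of a trigonometric function integrating to a known constant); it is offered only as orientation and is not necessarily one of the paper's own examples:

$$\int_{0}^{\pi/4} \ln(\tan\theta)\,d\theta = -G, \qquad G = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)^{2}} \approx 0.9159655942,$$

with G denoting Catalan's constant.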

72 citations


Posted Content
TL;DR: It is shown that the transition to linearity is not a general property of wide neural networks and does not hold when the last layer of the network is non-linear, and it is also not necessary for successful optimization by gradient descent.
Abstract: The goal of this work is to shed light on the remarkable phenomenon of transition to linearity of certain neural networks as their width approaches infinity. We show that the transition to linearity of the model and, equivalently, constancy of the (neural) tangent kernel (NTK) result from the scaling properties of the norm of the Hessian matrix of the network as a function of the network width. We present a general framework for understanding the constancy of the tangent kernel via Hessian scaling applicable to the standard classes of neural networks. Our analysis provides a new perspective on the phenomenon of constant tangent kernel, which is different from the widely accepted "lazy training". Furthermore, we show that the transition to linearity is not a general property of wide neural networks and does not hold when the last layer of the network is non-linear. It is also not necessary for successful optimization by gradient descent.

62 citations


Book ChapterDOI
01 Oct 2020
TL;DR: In this article, the authors present two universally composable, actively secure, constant round multi-party protocols for generating BMR garbled circuits with free-XOR and reduced costs.
Abstract: In this work, we present two new universally composable, actively secure, constant round multi-party protocols for generating BMR garbled circuits with free-XOR and reduced costs.

58 citations


Journal ArticleDOI
TL;DR: Unified sufficient stability criteria on NSpS-M and GAS a.s. are derived using the notions of average impulsive switched interval and Poisson process to solve the problem of almost sure global asymptotic stability for a class of random nonlinear time-varying impulsive switched systems.
Abstract: This paper investigates the problem of noise-to-state practical stability in mean (NSpS-M) (which is a natural generalization of noise-to-state stability in mean) and the problem of almost sure global asymptotic stability (GAS a.s.) for a class of random nonlinear time-varying impulsive switched systems. By using the notions of average impulsive switched interval and Poisson process, unified sufficient stability criteria on NSpS-M and GAS a.s. are derived. Two remarkable distinctions from the existing results lie in that: (1) stabilizing, inactive and destabilizing impulses are simultaneously considered; (2) the coefficient of the derivative of a Lyapunov function is allowed to be a time-varying function which can be both positive and negative and may even be unbounded. As an accompaniment, a less conservative unified criterion on NSpS-M for a special case is also presented by taking into account the stabilization role of the gain constant of the time-varying coefficient. Two examples are provided to illustrate the effectiveness of our derived criteria.

56 citations


Proceedings Article
01 Jan 2020
TL;DR: In this paper, the Lipschitz constant of a neural network is derived from the norm of the generalized Jacobian, and a sufficient condition for which backpropagation always returns an element of the Jacobian is presented.
Abstract: The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation. We provide novel analytic results relating the local Lipschitz constant of nonsmooth vector-valued functions to a maximization over the norm of the generalized Jacobian. We present a sufficient condition for which backpropagation always returns an element of the generalized Jacobian, and reframe the problem over this broad class of functions. We show strong inapproximability results for estimating Lipschitz constants of ReLU networks, and then formulate an algorithm to compute these quantities exactly. We leverage this algorithm to evaluate the tightness of competing Lipschitz estimators and the effects of regularized training on the Lipschitz constant.
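
As a point of reference for the estimators being compared above, two simple baselines can be computed for a tiny ReLU network (a hedged sketch, not the paper's exact algorithm): the largest sampled gradient norm lower-bounds the local Lipschitz constant, while the product of layer norms upper-bounds it.

```python
import numpy as np

# Crude lower/upper bounds on the (2-norm) Lipschitz constant of a tiny ReLU net
# f(x) = w2 @ relu(W1 @ x). Illustrative baselines only, not the paper's method.

rng = np.random.default_rng(0)
d, m = 10, 64
W1 = rng.standard_normal((m, d)) / np.sqrt(d)
w2 = rng.standard_normal(m) / np.sqrt(m)

def grad_f(x: np.ndarray) -> np.ndarray:
    """Gradient of f at x; for ReLU nets this is an element of the generalized Jacobian."""
    pre = W1 @ x
    active = (pre > 0).astype(float)      # ReLU derivative (0/1)
    return W1.T @ (w2 * active)

# Lower bound: largest observed gradient norm over random sample points.
samples = rng.standard_normal((2000, d))
lower = max(np.linalg.norm(grad_f(x)) for x in samples)

# Upper bound: product of layer spectral norms (a standard, usually loose, bound).
upper = np.linalg.norm(W1, 2) * np.linalg.norm(w2, 2)

print(f"sampled lower bound : {lower:.3f}")
print(f"norm-product upper  : {upper:.3f}")
```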

Journal ArticleDOI
TL;DR: A novel methodology for predicting the time evolution of the number of individuals in a given country reported to be infected with SARS-CoV-2 is introduced and the significance of these results for evaluating the impact of easing the lockdown measures is discussed.
Abstract: We introduce a novel methodology for predicting the time evolution of the number of individuals in a given country reported to be infected with SARS-CoV-2. This methodology, which is based on the synergy of explicit mathematical formulae and deep learning networks, yields algorithms whose input is only the existing data in the given country of the accumulative number of individuals who are reported to be infected. The analytical formulae involve several constant parameters that were determined from the available data using an error-minimizing algorithm. The same data were also used for the training of a bidirectional long short-term memory network. We applied the above methodology to the epidemics in Italy, Spain, France, Germany, USA and Sweden. The significance of these results for evaluating the impact of easing the lockdown measures is discussed.

Journal ArticleDOI
TL;DR: This work develops and analyzes a method to reduce the size of a very large set of data points in a high-dimensional Euclidean space to a small set of weighted points such that the result of the sum of the points is small.
Abstract: We develop and analyze a method to reduce the size of a very large set of data points in a high-dimensional Euclidean space $\mathbb{R}^d$ to a small set of weighted points such that the result of

Journal ArticleDOI
TL;DR: In this article, the authors found a novel exact dual relation between the minimum temperature of the ($d+1$)-dimensional Schwarzschild anti-de Sitter black hole and the Hawking-Page phase transition temperature in $d$ dimensions, and showed that the normalized Ruppeiner scalar curvature is a universal constant at the transition point.
Abstract: Universal relations and constants have important applications in understanding a physical theory. In this article, we explore this issue for Hawking-Page phase transitions in Schwarzschild anti-de Sitter black holes. We find a novel exact dual relation between the minimum temperature of the ($d+1$)-dimensional black hole and the Hawking-Page phase transition temperature in $d$ dimensions, reminiscent of the holographic principle. Furthermore, we find that the normalized Ruppeiner scalar curvature is a universal constant at the Hawking-Page transition point. Since the Ruppeiner curvature can be treated as an indicator of the intensity of the interactions amongst black hole microstructures, we conjecture that this universal constant denotes an interaction threshold, beyond which a virtual black hole becomes a real one. This new dual relation and universal constant are fundamental in understanding Hawking-Page phase transitions, and might have new important applications in black hole physics in the near future.

Journal ArticleDOI
TL;DR: This paper provides a characterization of which shapes can be formed deterministically starting from any simply connected initial configuration of n particles, and provides a universal shape formation algorithm that works without chirality, proving that chirality is computationally irrelevant for shape formation.
Abstract: Shape formation (or pattern formation) is a basic distributed problem for systems of computational mobile entities. Intensively studied for systems of autonomous mobile robots, it has recently been investigated in the realm of programmable matter, where entities are assumed to be small and with severely limited capabilities. Namely, it has been studied in the geometric Amoebot model, where the anonymous entities, called particles, operate on a hexagonal tessellation of the plane and have limited computational power (they have constant memory), strictly local interaction and communication capabilities (only with particles in neighboring nodes of the grid), and limited motorial capabilities (from a grid node to an empty neighboring node); their activation is controlled by an adversarial scheduler. Recent investigations have shown how, starting from a well-structured configuration in which the particles form a (not necessarily complete) triangle, the particles can form a large class of shapes. This result has been established under several assumptions: agreement on the clockwise direction (i.e., chirality), a sequential activation schedule, and randomization (i.e., particles can flip coins to elect a leader). In this paper we obtain several results that, among other things, provide a characterization of which shapes can be formed deterministically starting from any simply connected initial configuration of n particles. The characterization is constructive: we provide a universal shape formation algorithm that, for each feasible pair of shapes $$(S_0, S_F)$$, allows the particles to form the final shape $$S_F$$ (given in input) starting from the initial shape $$S_0$$, unknown to the particles. The final configuration will be an appropriate scaled-up copy of $$S_F$$ depending on n. If randomization is allowed, then any input shape can be formed from any initial (simply connected) shape by our algorithm, provided that there are enough particles. Our algorithm works without chirality, proving that chirality is computationally irrelevant for shape formation. Furthermore, it works under a strong adversarial scheduler, not necessarily sequential. We also consider the complexity of shape formation both in terms of the number of rounds and the total number of moves performed by the particles executing a universal shape formation algorithm. We prove that our solution has a complexity of $$O(n^2)$$ rounds and moves: this number of moves is also asymptotically worst-case optimal.

Journal ArticleDOI
TL;DR: In this article, a double phase problem driven by the sum of the p-Laplace operator and a weighted q-Laplacian with a weight function which is not bounded away from zero was considered.
Abstract: We consider a double phase problem driven by the sum of the p-Laplace operator and a weighted q-Laplacian ($$q

Posted Content
TL;DR: This work significantly expands the understanding of last-iterate convergence for OGDA and OMWU in the constrained setting and introduces a sufficient condition under which OGDA exhibits concrete last-iterate convergence rates with a constant learning rate; this condition also holds for strongly-convex-strongly-concave functions.
Abstract: Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU) for saddle-point optimization have received growing attention due to their favorable last-iterate convergence. However, their behaviors for simple bilinear games over the probability simplex are still not fully understood - previous analysis lacks explicit convergence rates, only applies to an exponentially small learning rate, or requires additional assumptions such as the uniqueness of the optimal solution. In this work, we significantly expand the understanding of last-iterate convergence for OGDA and OMWU in the constrained setting. Specifically, for OMWU in bilinear games over the simplex, we show that when the equilibrium is unique, linear last-iterate convergence is achieved with a learning rate whose value is set to a universal constant, improving the result of (Daskalakis & Panageas, 2019b) under the same assumption. We then significantly extend the results to more general objectives and feasible sets for the projected OGDA algorithm, by introducing a sufficient condition under which OGDA exhibits concrete last-iterate convergence rates with a constant learning rate whose value only depends on the smoothness of the objective function. We show that bilinear games over any polytope satisfy this condition and OGDA converges exponentially fast even without the unique equilibrium assumption. Our condition also holds for strongly-convex-strongly-concave functions, recovering the result of (Hsieh et al., 2019). Finally, we provide experimental results to further support our theory.
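
A minimal sketch of the OGDA update on an unconstrained bilinear game f(x, y) = xᵀAy (the matrix, learning rate and iteration count are illustrative assumptions; the constrained results above additionally use a projection step):

```python
import numpy as np

# Optimistic Gradient Descent Ascent (OGDA) on the bilinear saddle problem
#   min_x max_y  f(x, y) = x^T A y,
# whose unique equilibrium is (x, y) = (0, 0). Unconstrained illustration only;
# the constrained results above additionally project onto the feasible set.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2)                    # normalize so eta * ||A|| is small

eta = 0.1
x, y = rng.standard_normal(n), rng.standard_normal(n)
gx_prev, gy_prev = A @ y, A.T @ x            # "previous" gradients at the start

print("initial distance to equilibrium:", np.linalg.norm(np.concatenate([x, y])))
for _ in range(5000):
    gx, gy = A @ y, A.T @ x                  # grad_x f and grad_y f
    x = x - eta * (2 * gx - gx_prev)         # optimistic descent step
    y = y + eta * (2 * gy - gy_prev)         # optimistic ascent step
    gx_prev, gy_prev = gx, gy
print("final distance to equilibrium  :", np.linalg.norm(np.concatenate([x, y])))
```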

Journal ArticleDOI
TL;DR: The proposed method can estimate the inertia constant under normal operating conditions, and therefore, can provide the tracking trajectory of the power system inertia constant in real-time and is validated in the IEEE 39-bus system.
Abstract: An online estimation method for the power system inertia constant under normal operating conditions is proposed. First of all, a dynamic model relating the active power to the bus frequency at the generation node is identified in the frequency domain using ambient data measured with the phasor measurement units (PMUs). Then, the inertia constant at the generation node is extracted from the unit step response of the identified model in the time domain using the swing equation. Finally, with the sliding window method and the exponential smoothing method, the estimated inertia constant is updated in real-time. Compared to the conventional methods using large disturbance data or field test data, the proposed method can estimate the inertia constant under normal operating conditions, and therefore, can provide the tracking trajectory of the power system inertia constant in real-time. The effectiveness of the proposed method is validated in the IEEE 39-bus system. The results show that the relative error of the identified inertia constant is below 5% and the identified inertia constant can be updated within 1s.
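
For orientation, the swing-equation relation behind the time-domain extraction step can be illustrated with synthetic step-response data (a hedged toy example assuming the per-unit form 2H·d(Δf/f0)/dt = ΔP; it is not the authors' ambient-data, frequency-domain identification procedure):

```python
import numpy as np

# Toy illustration of extracting an inertia constant H from the initial rate of
# change of frequency (ROCOF) after a power step, via the swing equation
#   2H * d(delta_f / f0)/dt = delta_P   (per unit).
# This only illustrates the time-domain step, not the frequency-domain
# identification from ambient PMU data proposed in the paper.

f0 = 50.0          # nominal frequency [Hz]
H_true = 4.0       # true inertia constant [s]
dP = 0.1           # power step [p.u.]

# Synthetic frequency deviation right after the step (first-swing approximation:
# roughly constant ROCOF over the first fraction of a second), plus noise.
t = np.linspace(0.0, 0.5, 51)
rocof_true = dP * f0 / (2.0 * H_true)                # [Hz/s]
delta_f = rocof_true * t + 0.002 * np.random.default_rng(0).standard_normal(t.size)

# Estimate the ROCOF by a least-squares line fit, then invert the swing equation.
rocof_est = np.polyfit(t, delta_f, 1)[0]
H_est = dP * f0 / (2.0 * rocof_est)
print(f"estimated H = {H_est:.2f} s (true {H_true} s)")
```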

Journal ArticleDOI
TL;DR: In this article, the authors considered the chemotaxis growth system under homogeneous Neumann boundary conditions in smoothly bounded domains and showed that, under an appropriate smallness assumption on χ, any such solution at least asymptotically exhibits relaxation by approaching the nontrivial spatially homogeneous steady state.
Abstract: The chemotaxis-growth system
$$(\star)\qquad \begin{cases} u_t = D\Delta u - \chi\nabla\cdot(u\nabla v) + \rho u - \mu u^{\alpha},\\ v_t = d\Delta v - \kappa v + \lambda u \end{cases}$$
is considered under homogeneous Neumann boundary conditions in smoothly bounded domains $\Omega\subset\mathbb{R}^{n}$, $n\geq 1$. For any choice of $\alpha>1$, the literature provides a comprehensive result on global existence for widely arbitrary initial data within a suitably generalized solution concept, but the regularity properties of such solutions may be rather poor, as indicated by precedent results on the occurrence of finite-time blow-up in corresponding parabolic-elliptic simplifications. Based on the analysis of a certain eventual Lyapunov-type feature of $(\star)$, the present work shows that, whenever $\alpha\geq 2-\frac{2}{n}$, under an appropriate smallness assumption on $\chi$, any such solution at least asymptotically exhibits relaxation by approaching the nontrivial spatially homogeneous steady state $\bigl(\bigl(\frac{\rho}{\mu}\bigr)^{\frac{1}{\alpha-1}},\,\frac{\lambda}{\kappa}\bigl(\frac{\rho}{\mu}\bigr)^{\frac{1}{\alpha-1}}\bigr)$ in the large time limit.

Posted Content
TL;DR: This conjecture implies that overparametrization is necessary for robustness, since it means that one needs roughly one neuron per datapoint to ensure an $O(1)$-Lipschitz network, while mere data fitting of $d$-dimensional data requires only one neuron per $d$ datapoints.
Abstract: We initiate the study of the inherent tradeoffs between the size of a neural network and its robustness, as measured by its Lipschitz constant. We make a precise conjecture that, for any Lipschitz activation function and for most datasets, any two-layers neural network with $k$ neurons that perfectly fit the data must have its Lipschitz constant larger (up to a constant) than $\sqrt{n/k}$ where $n$ is the number of datapoints. In particular, this conjecture implies that overparametrization is necessary for robustness, since it means that one needs roughly one neuron per datapoint to ensure a $O(1)$-Lipschitz network, while mere data fitting of $d$-dimensional data requires only one neuron per $d$ datapoints. We prove a weaker version of this conjecture when the Lipschitz constant is replaced by an upper bound on it based on the spectral norm of the weight matrix. We also prove the conjecture in the high-dimensional regime $n \approx d$ (which we also refer to as the undercomplete case, since only $k \leq d$ is relevant here). Finally we prove the conjecture for polynomial activation functions of degree $p$ when $n \approx d^p$. We complement these findings with experimental evidence supporting the conjecture.

Journal ArticleDOI
TL;DR: In this article, the authors present results from an experimental evaluation on the pre- and post-buckling behavior of 12 steel wide-flange cantilever columns under axial load and lateral drift demands.
Abstract: This paper presents results from an experimental evaluation on the pre- and post-buckling behavior of 12 steel wide-flange cantilever columns under axial load and lateral drift demands. The...

Journal ArticleDOI
TL;DR: This paper deals with constant modulus waveform design in spectrally dense environments assuming a discrete phase code alphabet; an iterative procedure with polynomial computational complexity, leveraging the coordinate descent method, is introduced.
Abstract: This paper deals with constant modulus waveform design in spectrally dense environments assuming a discrete phase code alphabet. The goal is to optimize the radar detection performance while rigorously controlling the injected interference energy within each shared band and enforcing a similarity constraint to manage some relevant signal features. To tackle the resulting NP-hard optimization problem, an iterative procedure characterized by a polynomial computational complexity, is introduced leveraging the coordinate descent method. Numerical results are provided to show the effectiveness of the technique in terms of detection performance, spectral shape and autocorrelation features.
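
A heavily simplified sketch of coordinate descent over a discrete phase alphabet (the objective below is the autocorrelation integrated sidelobe level, chosen only to keep the example self-contained; the paper instead optimizes detection performance under spectral and similarity constraints):

```python
import numpy as np

# Coordinate descent over a discrete phase alphabet for a constant-modulus code.
# Toy objective: integrated sidelobe level (ISL) of the aperiodic autocorrelation.
# The paper's objective (detection performance with spectral/similarity constraints)
# is different; this only illustrates the one-coordinate-at-a-time update style.

rng = np.random.default_rng(0)
N, M = 32, 8                                   # code length, phase alphabet size
alphabet = np.exp(2j * np.pi * np.arange(M) / M)

def isl(code: np.ndarray) -> float:
    r = np.correlate(code, code, mode="full")  # aperiodic autocorrelation
    r[code.size - 1] = 0.0                     # ignore the zero-lag peak
    return float(np.sum(np.abs(r) ** 2))

code = alphabet[rng.integers(M, size=N)]       # random unimodular initialization
for sweep in range(20):
    for n in range(N):                         # update one code element at a time
        best_val, best_sym = np.inf, code[n]
        for sym in alphabet:                   # exhaustive search over the alphabet
            code[n] = sym
            val = isl(code)
            if val < best_val:
                best_val, best_sym = val, sym
        code[n] = best_sym
print("final ISL:", isl(code))
```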

Journal ArticleDOI
TL;DR: In this paper, the authors derived an evolution equation for the symmetric part of the gradient of the velocity (the strain tensor) in the incompressible Navier-Stokes equation and proved the existence of mild solutions to this equation.
Abstract: This manuscript derives an evolution equation for the symmetric part of the gradient of the velocity (the strain tensor) in the incompressible Navier–Stokes equation on $${\mathbb {R}}^3$$, and proves the existence of $$L^2$$ mild solutions to this equation. We use this equation to obtain a simplified identity for the growth of enstrophy for mild solutions that depends only on the strain tensor, not on the nonlocal interaction of the strain tensor with the vorticity. The resulting identity allows us to prove a new family of scale-critical, necessary and sufficient conditions for the blow-up of a solution at some finite time $$T_{max}<+\infty$$, which depend only on the history of the positive part of the second eigenvalue of the strain matrix. Since this matrix is trace-free, this severely restricts the geometry of any finite-time blow-up. This regularity criterion provides analytic evidence of the numerically observed tendency of the vorticity to align with the eigenvector corresponding to the middle eigenvalue of the strain matrix. This regularity criterion also allows us to prove as a corollary a new scale critical, one component type, regularity criterion for a range of exponents for which there were previously no known critical, one component type regularity criteria. Furthermore, our analysis permits us to extend the known time of existence of smooth solutions with fixed initial enstrophy $$E_0=\frac{1}{2}\left\|\nabla \otimes u^0\right\|_{L^2}^2$$ by a factor of 4920.75, although the previous constant in the literature was not expected to be close to optimal, so this improvement is less drastic than it sounds, especially compared with numerical results. Finally, we will prove the existence and stability of blow-up for a toy model ODE for the strain equation.
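
For reference, in standard notation the strain tensor and the enstrophy appearing above are (a notational reminder, not part of the paper's statement):

$$S_{ij} = \tfrac{1}{2}\left(\partial_i u_j + \partial_j u_i\right), \qquad E(t) = \tfrac{1}{2}\left\|\nabla \otimes u(t,\cdot)\right\|_{L^2}^2.$$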

Journal ArticleDOI
TL;DR: In a recent paper, Balthazar, Rodriguez and Yin found remarkable agreement between the results of the c = 1 matrix model and D-instanton corrections in two-dimensional string theory; this article fixes one of the two constants left undetermined in their string theory computation by reformulating the world-sheet analysis in the language of string field theory.
Abstract: In a recent paper, Balthazar, Rodriguez and Yin found some remarkable agreement between the results of c = 1 matrix model and D-instanton corrections in two dimensional string theory. Their analysis left undetermined two constants in the string theory computation which had to be fixed by comparing the results with the matrix model results. One of these constants is affected by possible renormalization of the D-instanton action that needs to be computed separately. In this paper we fix the other constant by reformulating the world-sheet analysis in the language of string field theory.

Journal ArticleDOI
TL;DR: This work uses Neural Networks to approximately solve first-order single-delay differential equations and systems, and the resulting continuous solutions prove to be very efficient.
Abstract: Following the ideas of Lagaris et al. (IEEE Trans Neural Netw 9(5):987–1000, 1998), we use Neural Networks to approximately solve first-order single-delay differential equations and systems. We apply the proposed novel methodology to various problems with constant delay terms, and the resulting continuous solutions prove to be very efficient. This is the case not only for non-stiff problems but also for equations with stiffness.

Journal ArticleDOI
TL;DR: Sliding-mode-based differentiation of the input f(t) yields exact estimations of the derivatives f′, …, f^(n), provided an upper bound L(t) of |f^(n+1)(t)| is available in real time.
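
A minimal sketch of a first-order sliding-mode (Levant-type) differentiator under an Euler discretization, with the commonly quoted gains λ0 = 1.1 and λ1 = 1.5 (these gains, the test signal and the discretization are assumptions for illustration; the scheme analyzed in the paper may differ):

```python
import numpy as np

# First-order sliding-mode (Levant-type) differentiator, Euler-discretized.
#   z0' = v0 = -lam1 * L**0.5 * |z0 - f|**0.5 * sign(z0 - f) + z1
#   z1' = -lam0 * L * sign(z1 - v0)
# z1 tracks f'(t) (exactly in continuous time) when |f''(t)| <= L.
# Gains lam0 = 1.1, lam1 = 1.5 are commonly quoted values; assumptions, not the paper's.

dt = 1e-4
t = np.arange(0.0, 5.0, dt)
f = np.sin(t)                 # test signal; |f''| <= 1
L = 1.5                       # assumed upper bound on |f''|
lam0, lam1 = 1.1, 1.5

z0, z1 = 0.0, 0.0
z1_hist = np.empty_like(t)
for k, fk in enumerate(f):
    e = z0 - fk
    v0 = -lam1 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z1
    z1_dot = -lam0 * L * np.sign(z1 - v0)
    z0 += dt * v0
    z1 += dt * z1_dot
    z1_hist[k] = z1

start = int(2.0 / dt)         # discard the initial transient
err = np.abs(z1_hist[start:] - np.cos(t[start:])).max()
print(f"max derivative error after transient: {err:.3e}")
```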

Journal ArticleDOI
TL;DR: In this paper, a new version of the restricted three-body problem is considered: the equations of motion of the infinitesimal body are constructed, and the permissible and forbidden regions of motion are analyzed both in the plane of the primaries' motion and out of it, with respect to relatively small, middle and large values of the Jacobian constant.
Abstract: In this paper, a new version of the restricted three-body problem is considered, namely the spatial quantized restricted three-body problem. The equations of motion of the infinitesimal body are constructed. The equilibrium points in the plane of the primaries' motion and out of that plane are explored under the quantized gravitational potential effect. The permissible and forbidden regions of motion are also analyzed in both planes, with respect to relatively small, middle and large values of the Jacobian constant. We observe that the sizes of the permissible (forbidden) regions of motion decrease (increase) with increasing values of the Jacobian constant and vice versa. We demonstrate that the permissible regions of motion shrink with an increasing Jacobian constant value, and the infinitesimal body is compelled to move in a region surrounding one of the primaries; it is not free to switch or exchange its motion from one to the other. Furthermore, we emphasize that, although quantum corrections are a tiny effect, there is no guarantee that the properties of the dynamical system are unaffected, because there are changes, however small, in the locations of the equilibrium points and in the regions of permissible motion. We think that these corrections may have considerable impact and can be tested within the frame of small distances. Substantially, we determine the parametric evolution of the equilibrium points of the system, along with the energetically allowed and forbidden regions of motion.
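
For orientation, in the classical (unquantized) circular restricted three-body problem, written in the usual rotating nondimensional frame with mass parameter μ, the Jacobian constant mentioned above takes the form (up to a conventional additive constant; the quantized potential studied in the paper modifies the potential terms):

$$C = x^2 + y^2 + \frac{2(1-\mu)}{r_1} + \frac{2\mu}{r_2} - \left(\dot{x}^2 + \dot{y}^2 + \dot{z}^2\right),$$

where $r_1$ and $r_2$ are the distances from the infinitesimal body to the two primaries.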

Journal ArticleDOI
TL;DR: In this article, a fixed focus on the receiver maintains a constant receiver temperature, which is achieved by a Scheffler reflector, one of the best examples of solar energy applications used for medium-temperature applicat...
Abstract: A fixed focus on the receiver maintains a constant receiver temperature, and this is achieved by a Scheffler reflector. It is one of the best examples of solar energy applications used for medium-temperature applicat...

Journal ArticleDOI
TL;DR: This article exploits the geometrical structure of CMC to solve the nonconvex design problem via a tractable method called iterative beampattern with spectral design (IBS), and develops and solves a sequence of convex problems such that constant modulus is achieved at convergence.
Abstract: In this article, we propose a new algorithm that designs a transmit beampattern for multiple-input multiple-output (MIMO) radar considering coexistence with other wireless systems. This design process is conducted by minimizing the deviation of the generated beampattern (which in turn is a function of the transmit waveform) against an idealized one while enforcing the waveform elements to be constant modulus and in the presence of spectral restrictions. This leads to a hard nonconvex optimization problem primarily due to the presence of the constant modulus constraint (CMC). In this article, we exploit the geometrical structure of the CMC, i.e., we redefine this constraint as an intersection of two sets (one convex and the other nonconvex). This new perspective allows us to solve the nonconvex design problem via a tractable method called iterative beampattern with spectral design (IBS). In particular, the proposed IBS algorithm develops and solves a sequence of convex problems such that constant modulus is achieved at convergence. Crucially, we show that at convergence the obtained solution satisfies the Karush–Kuhn–Tucker conditions of the aforementioned nonconvex problem. Finally, we evaluate the proposed algorithm over challenging simulated scenarios, and show that it outperforms the state-of-the-art competing methods.
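
The intersection-of-two-sets view of the CMC can be illustrated with the nearest-point projection onto the constant modulus set, alternated with a projection onto a convex set standing in for a similarity constraint (a generic sketch of the geometric picture, not the IBS algorithm; the unit modulus level, ball radius and reference waveform are illustrative assumptions):

```python
import numpy as np

# The constant modulus constraint |z_n| = c is nonconvex, but projecting onto it is
# trivial: keep each entry's phase and reset its magnitude. A generic alternating
# projection between this set and a convex set (here an l2 ball around a reference
# waveform, standing in for a similarity constraint) illustrates the geometric view.
# This is NOT the paper's IBS algorithm, only the intersection-of-sets picture.

rng = np.random.default_rng(0)
N = 64
z_ref = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # reference unimodular waveform

def proj_constant_modulus(z: np.ndarray, c: float = 1.0) -> np.ndarray:
    phase = np.where(np.abs(z) > 0, z / np.maximum(np.abs(z), 1e-12), 1.0)
    return c * phase

def proj_similarity_ball(z: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    d = z - center
    nrm = np.linalg.norm(d)
    return z if nrm <= radius else center + radius * d / nrm

z = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # arbitrary start
for _ in range(100):
    z = proj_similarity_ball(z, z_ref, radius=3.0)
    z = proj_constant_modulus(z)

print("max | |z_n| - 1 |        :", np.abs(np.abs(z) - 1.0).max())
print("similarity ||z - z_ref|| :", np.linalg.norm(z - z_ref))
```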

Journal ArticleDOI
TL;DR: The history of the G measurement is briefly reviewed, and the eleven values of G adopted in CODATA 2014 after 2000 are introduced, together with the latest two values published in 2018 using two independent methods.
Abstract: The Newtonian gravitational constant G, which is one of the most important fundamental physical constants in nature, plays a significant role in the fields of theoretical physics, geophysics, astrophysics and astronomy. Although G was the first physical constant to be introduced in the history of science, it is considered to be one of the most difficult to measure accurately so far. Over the past two decades, eleven precision measurements of the gravitational constant have been performed, and the latest recommended value for G published by the Committee on Data for Science and Technology (CODATA) is (6.674 08 ± 0.000 31) × 10^−11 m^3 kg^−1 s^−2 with a relative uncertainty of 47 parts per million. This uncertainty is the smallest compared with previous CODATA recommended values of G; however, it remains a relatively large uncertainty among other fundamental physical constants. In this paper we briefly review the history of the G measurement, and introduce eleven values of G adopted in CODATA 2014 after 2000 and our latest two values published in 2018 using two independent methods.