
Showing papers on "Piecewise published in 2019"


Posted Content
TL;DR: Finite element methods for approximating the time-harmonic Maxwell equations, including Discontinuous Galerkin methods, are surveyed, with a focus on comparing error estimates for problems with spatially varying coefficients.
Abstract: We survey finite element methods for approximating the time harmonic Maxwell equations. We concentrate on comparing error estimates for problems with spatially varying coefficients. For the conforming edge finite element methods, such estimates allow, at least, piecewise smooth coefficients. But for Discontinuous Galerkin (DG) methods, the state of the art of error analysis is less advanced (we consider three DG families of methods: Interior Penalty type, Hybridizable DG, and Trefftz type methods). Nevertheless, DG methods offer significant potential advantages compared to conforming methods.

473 citations


Journal ArticleDOI
05 Aug 2019-Chaos
TL;DR: The proposed new variable-order fractional chaotic systems improve the security of image encryption and greatly reduce the encryption time.
Abstract: New variable-order fractional chaotic systems are proposed in this paper. A concept of short memory is introduced where the initial point in the Caputo derivative is varied. The fractional order is defined by the use of a piecewise constant function which leads to rich chaotic dynamics. The predictor-corrector method is adopted, and numerical solutions of fractional delay equations are obtained. Then, this concept is extended to fractional difference equations, and generalized chaotic behaviors are discussed numerically. Finally, the new fractional chaotic models are applied to block image encryption and each block has a different fractional order. The new chaotic system improves security of the image encryption and saves the encryption time greatly.

180 citations
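
The key ingredient here is a fractional order defined by a piecewise constant function of time. As a minimal illustration only (the segment boundaries and order values below are invented, not taken from the paper), such a step-function order can be written as:

```python
import numpy as np

def piecewise_constant_order(t, breakpoints=(20.0, 40.0), orders=(0.9, 0.7, 0.5)):
    """Return a fractional order that is constant on each time segment.

    `breakpoints` split the time axis into len(orders) segments; the values
    here are illustrative only, not taken from the paper.
    """
    idx = np.searchsorted(breakpoints, t, side="right")
    return np.asarray(orders)[idx]

# Example: evaluate the order on a time grid
t = np.linspace(0.0, 60.0, 7)
print(piecewise_constant_order(t))
```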


Posted Content
TL;DR: In this paper, the authors consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data and show that a poor choice of STE leads to instability of the training algorithm near certain local minima.
Abstract: Training activation quantized neural networks involves minimizing a piecewise constant function whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual "gradient" given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for training), and its negation is a descent direction for minimizing the population loss. We further show that the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.

167 citations
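
The coarse-gradient idea can be sketched in a few lines of NumPy: a toy two-layer network with a binarized activation, where the zero-almost-everywhere derivative is replaced by a surrogate in the backward pass. The shapes, loss, and the particular surrogate (a clipped-ReLU derivative) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 8, 64                       # input dim, hidden width, batch size
X = rng.normal(size=(n, d))              # Gaussian inputs, as in the paper's setting
W1 = rng.normal(size=(d, m))
w2 = rng.normal(size=m)
y = rng.normal(size=n)                   # toy regression targets

def forward(X, W1, w2):
    z = X @ W1                           # pre-activations
    h = (z > 0).astype(float)            # binarized ReLU: 0/1 activation
    return z, h, h @ w2

def coarse_gradients(X, y, W1, w2):
    """Backward pass with a straight-through estimator (STE).

    The true derivative of the 0/1 activation is zero almost everywhere, so the
    chain rule is modified: dh/dz is replaced by a surrogate, here the derivative
    of the clipped ReLU, i.e. 1 on {0 < z < 1} (one common STE choice).
    """
    z, h, pred = forward(X, W1, w2)
    resid = pred - y                     # gradient of 0.5 * squared error w.r.t. pred
    grad_w2 = h.T @ resid / len(y)
    surrogate = ((z > 0) & (z < 1)).astype(float)    # STE in place of the zero derivative
    grad_hidden = np.outer(resid, w2) * surrogate
    grad_W1 = X.T @ grad_hidden / len(y)
    return grad_W1, grad_w2

gW1, gw2 = coarse_gradients(X, y, W1, w2)
print(gW1.shape, gw2.shape)              # (5, 8) (8,)
```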


Posted Content
TL;DR: The main results of this article prove that neural networks possess even greater approximation power than these traditional methods of nonlinear approximation, by exhibiting large classes of functions which can be efficiently captured by neural networks where classical nonlinear methods fall short of the task.
Abstract: This article is concerned with the approximation and expressive powers of deep neural networks. This is an active research area currently producing many interesting papers. The results most commonly found in the literature prove that neural networks approximate functions with classical smoothness to the same accuracy as classical linear methods of approximation, e.g. approximation by polynomials or by piecewise polynomials on prescribed partitions. However, approximation by neural networks depending on n parameters is a form of nonlinear approximation and as such should be compared with other nonlinear methods such as variable knot splines or n-term approximation from dictionaries. The performance of neural networks in targeted applications such as machine learning indicates that they actually possess even greater approximation power than these traditional methods of nonlinear approximation. The main results of this article prove that this is indeed the case. This is done by exhibiting large classes of functions which can be efficiently captured by neural networks where classical nonlinear methods fall short of the task. The present article purposefully limits itself to studying the approximation of univariate functions by ReLU networks. Many generalizations to functions of several variables and other activation functions can be envisioned. However, even in the simplest setting considered here, a theory that completely quantifies the approximation power of neural networks is still lacking.

140 citations
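
Since the article restricts attention to univariate ReLU networks, the following small sketch may help fix ideas: any continuous piecewise linear function can be written exactly as a one-hidden-layer ReLU network, one hidden unit per breakpoint. The breakpoints and slopes below are arbitrary illustrations.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_as_relu_net(x, x0, y0, slopes, breakpoints):
    """Evaluate a continuous piecewise linear function as a ReLU network.

    The function starts at (x0, y0) with slope slopes[0]; at each breakpoint b_i
    the slope changes to slopes[i+1].  Each slope change contributes one hidden
    ReLU unit with outer weight (slopes[i+1] - slopes[i]).
    """
    out = y0 + slopes[0] * (x - x0)
    for b, s_prev, s_next in zip(breakpoints, slopes[:-1], slopes[1:]):
        out = out + (s_next - s_prev) * relu(x - b)
    return out

# Illustrative piecewise linear profile on [0, 3] with breakpoints at 1 and 2
x = np.linspace(0.0, 3.0, 7)
print(pwl_as_relu_net(x, x0=0.0, y0=0.0, slopes=[1.0, -2.0, 0.5], breakpoints=[1.0, 2.0]))
```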


Journal ArticleDOI
Yi Fang1, Jie Hu1, Wenhai Liu1, Quanquan Shao1, Jin Qi1, Yinghong Peng1 
TL;DR: A smooth and time-optimal S-curve trajectory planning method is proposed to meet the requirements of high-speed and ultra-precision operation for robotic manipulators in modern industrial applications by utilizing a piecewise sigmoid function to establish a jerk profile with suitably chosen phase durations.

105 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose an end-to-end trainable deep network, which is inspired by the state-of-the-art fine-grained recognition model and is tailored for the FSFG task.
Abstract: Humans are capable of learning a new fine-grained concept with very little supervision, e.g., a few exemplary images for a species of bird, yet our best deep learning systems need hundreds or thousands of labeled examples. In this paper, we try to reduce this gap by studying the fine-grained image recognition problem in a challenging few-shot learning setting, termed few-shot fine-grained recognition (FSFG). The task of FSFG requires the learning systems to build classifiers for the novel fine-grained categories from few examples (only one, or fewer than five). To solve this problem, we propose an end-to-end trainable deep network, which is inspired by the state-of-the-art fine-grained recognition model and is tailored for the FSFG task. Specifically, our network consists of a bilinear feature learning module and a classifier mapping module: while the former encodes the discriminative information of an exemplar image into a feature vector, the latter maps the intermediate feature into the decision boundary of the novel category. The key novelty of our model is a “piecewise mappings” function in the classifier mapping module, which generates the decision boundary via learning a set of more attainable sub-classifiers in a more parameter-economic way. We learn the exemplar-to-classifier mapping based on an auxiliary dataset in a meta-learning fashion, which is expected to be able to generalize to novel categories. By conducting comprehensive experiments on three fine-grained datasets, we demonstrate that the proposed method achieves superior performance over the competing baselines.

87 citations


Journal ArticleDOI
TL;DR: In this paper, the least squares regression function estimator over the class of real-valued functions on $[0, 1]^{d}$ that are increasing in each coordinate was studied and it was shown that the estimator achieves the minimax rate of order $n^{-\min\{2/(d+2),1/d\}}$ in the empirical loss, up to polylogarithmic factors.
Abstract: We study the least squares regression function estimator over the class of real-valued functions on $[0,1]^{d}$ that are increasing in each coordinate. For uniformly bounded signals and with a fixed, cubic lattice design, we establish that the estimator achieves the minimax rate of order $n^{-\min\{2/(d+2),1/d\}}$ in the empirical $L_{2}$ loss, up to polylogarithmic factors. Further, we prove a sharp oracle inequality, which reveals in particular that when the true regression function is piecewise constant on $k$ hyperrectangles, the least squares estimator enjoys a faster, adaptive rate of convergence of $(k/n)^{\min(1,2/d)}$, again up to polylogarithmic factors. Previous results are confined to the case $d\leq2$. Finally, we establish corresponding bounds (which are new even in the case $d=2$) in the more challenging random design setting. There are two surprising features of these results: first, they demonstrate that it is possible for a global empirical risk minimisation procedure to be rate optimal up to polylogarithmic factors even when the corresponding entropy integral for the function class diverges rapidly; second, they indicate that the adaptation rate for shape-constrained estimators can be strictly worse than the parametric rate.

87 citations
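
For the d = 1 special case of this monotone least-squares problem, scikit-learn's isotonic regression computes the estimator directly; the multivariate lattice-design setting studied in the paper needs more specialised code. A small sketch with simulated data:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, size=n))
# Piecewise constant, increasing truth plus noise
f_true = np.where(x < 0.4, 0.0, np.where(x < 0.7, 1.0, 2.0))
y = f_true + rng.normal(scale=0.3, size=n)

iso = IsotonicRegression(increasing=True)
f_hat = iso.fit_transform(x, y)        # least-squares fit over increasing functions
print(np.mean((f_hat - f_true) ** 2))  # empirical squared error against the truth
```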


Proceedings ArticleDOI
14 Apr 2019
TL;DR: This work extends the previously proposed formulation for the dynamics of a soft robot from two to three dimensions, and introduces a control architecture with the aim of achieving accurate curvature and bending control.
Abstract: Despite the emergence of many soft-bodied robotic systems, model-based feedback control for soft robots has remained an open challenge. This is largely due to the intrinsic difficulties in designing controllers for systems with infinite dimensions. This work extends our previously proposed formulation for the dynamics of a soft robot from two to three dimensions. The formulation connects the soft robot's dynamic behavior to a rigid-bodied robot with parallel elastic actuation. The matching between the two systems is exact under the hypothesis of Piecewise Constant Curvature. Based on this connection, we introduce a control architecture with the aim of achieving accurate curvature and bending control. This controller accounts for the natural softness of the system moving in three dimensions, and for the dynamic forces acting on the system. The controller is validated in a realistic simulation, together with a kinematic inversion algorithm. The paper also introduces a soft robot capable of three-dimensional motion, that we use to experimentally validate our control strategy.

86 citations
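
Under the Piecewise Constant Curvature hypothesis, each segment's tip pose follows from its curvature and arc length in closed form. A planar, single-segment sketch (the paper works in three dimensions, so this is only the simplest instance):

```python
import numpy as np

def pcc_tip_pose(kappa, length):
    """Tip position (x, y) and orientation of one planar constant-curvature arc.

    The arc starts at the origin pointing along the y-axis; kappa is the signed
    curvature and length the arc length.  For kappa -> 0 the arc degenerates to
    a straight segment.
    """
    theta = kappa * length                     # total bending angle
    if abs(kappa) < 1e-9:
        return 0.0, length, 0.0
    x = (1.0 - np.cos(theta)) / kappa
    y = np.sin(theta) / kappa
    return x, y, theta

print(pcc_tip_pose(kappa=0.0, length=1.0))     # straight segment: (0, 1, 0)
print(pcc_tip_pose(kappa=np.pi, length=1.0))   # half circle of radius 1/pi
```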


Journal ArticleDOI
TL;DR: An optimal procedure for the economic scheduling of a network of interconnected microgrids with hybrid energy storage systems is carried out through a control algorithm based on distributed model predictive control (DMPC), specifically designed to improve the cost function of each microgrid, relative to acting as a single system, through network-mode operation.
Abstract: In this paper, an optimal procedure for the economic scheduling of a network of interconnected microgrids with hybrid energy storage systems is carried out through a control algorithm based on distributed model predictive control (DMPC). The algorithm is specifically designed to improve the cost function of each microgrid, relative to acting as a single system, through network-mode operation. The algorithm maximizes the economic benefit of the microgrids while minimizing the degradation of each storage system and fulfilling the different system constraints. In order to capture both continuous/discrete dynamics and switching between different operating conditions, the plant is modeled within the mixed logical dynamical framework. The DMPC problem is solved by mixed-integer linear programming, using a piecewise formulation to linearize a mixed-integer quadratic programming problem.

84 citations
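
The piecewise formulation mentioned at the end replaces a quadratic cost by a piecewise linear one so that the scheduling problem remains a mixed-integer linear program. A small sketch of building such linear pieces for a convex quadratic (the breakpoint count, range, and cost coefficient are illustrative assumptions, not values from the paper):

```python
import numpy as np

def pwl_segments(f, lo, hi, n_segments):
    """Chord (piecewise linear) approximation of a convex cost f on [lo, hi].

    Returns one (slope, intercept) pair per chord.  For convex f, an epigraph
    variable c with constraints c >= slope * p + intercept for every pair
    reproduces the piecewise linear interpolant of f, so a quadratic cost can
    be handled inside a MILP with linear constraints only.
    """
    xs = np.linspace(lo, hi, n_segments + 1)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    intercepts = ys[:-1] - slopes * xs[:-1]
    return list(zip(slopes, intercepts))

# Degradation-like quadratic cost of a storage power setpoint, 4 segments
segments = pwl_segments(lambda p: 0.05 * p**2, lo=0.0, hi=100.0, n_segments=4)
for a, b in segments:
    print(f"cost >= {a:.2f} * p {b:+.2f}")
```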


Journal ArticleDOI
TL;DR: In this paper, the authors propose a generic and flexible methodology for non-parametric function estimation, in which they first estimate the number and locations of any features that may be present in the function and then estimate the function parametrically between each pair of neighbouring detected features.
Abstract: We propose a new, generic and flexible methodology for non-parametric function estimation, in which we first estimate the number and locations of any features that may be present in the function and then estimate the function parametrically between each pair of neighbouring detected features. Examples of features handled by our methodology include change points in the piecewise constant signal model, kinks in the piecewise linear signal model and other similar irregularities, which we also refer to as generalized change points. Our methodology works with only minor modifications across a range of generalized change point scenarios, and we achieve such a high degree of generality by proposing and using a new multiple generalized change point detection device, termed narrowest-over-threshold (NOT) detection. The key ingredient of the NOT method is its focus on the smallest local sections of the data on which the existence of a feature is suspected. For selected scenarios, we show the consistency and near optimality of the NOT algorithm in detecting the number and locations of generalized change points. The NOT estimators are easy to implement and rapid to compute. Importantly, the NOT approach is easy to extend by the user to tailor to their own needs. Our methodology is implemented in the R package not.

83 citations
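
A heavily simplified sketch of the narrowest-over-threshold idea for the piecewise constant mean model: draw random intervals, compute a CUSUM contrast on each, and among the intervals exceeding the threshold act on the narrowest one. The real method, with its generalized change point scenarios and theoretical guarantees, is in the R package not; this is only an illustration, and the threshold and interval count below are arbitrary.

```python
import numpy as np

def cusum_contrast(x):
    """Max absolute CUSUM contrast over split points of a data stretch."""
    n = len(x)
    cum = np.cumsum(x)
    best_stat, best_split = -1.0, 1
    for b in range(1, n):
        mean_left = cum[b - 1] / b
        mean_right = (cum[-1] - cum[b - 1]) / (n - b)
        stat = np.sqrt(b * (n - b) / n) * abs(mean_left - mean_right)
        if stat > best_stat:
            best_stat, best_split = stat, b
    return best_stat, best_split

def not_changepoints(x, threshold, n_intervals=100, seed=0):
    """Simplified narrowest-over-threshold detection of changes in a piecewise constant mean."""
    rng = np.random.default_rng(seed)
    found = []

    def recurse(s, e):                              # operate on x[s:e]
        if e - s < 2:
            return
        candidates = []
        for _ in range(n_intervals):                # random sub-intervals of [s, e)
            a, b = sorted(rng.integers(s, e + 1, size=2))
            if b - a >= 2:
                stat, split = cusum_contrast(x[a:b])
                if stat > threshold:
                    candidates.append((b - a, a + split))
        if not candidates:
            return
        _, cp = min(candidates)                     # narrowest interval above threshold
        found.append(cp)
        recurse(s, cp)                              # recurse on both sides of the change
        recurse(cp, e)

    recurse(0, len(x))
    return sorted(found)

rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(50), 2.0 * np.ones(50), -1.0 * np.ones(50)])
print(not_changepoints(signal + rng.normal(size=150), threshold=4.0))
```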


Journal ArticleDOI
TL;DR: In this article, the authors apply machine learning techniques to the problem of computing line bundle cohomologies of (hypersurfaces in) toric varieties, which depend in a piecewise polynomial way on the line bundle charges.

Journal ArticleDOI
TL;DR: In this article, the authors studied scalar advanced and delayed differential equations with piecewise constant generalized arguments, in short DEPCAG of mixed type, that is, the arguments are general step functions.
Abstract: We study scalar advanced and delayed differential equations with piecewise constant generalized arguments, in short DEPCAG of mixed type, that is, the arguments are general step functions. It is shown that the argument deviation generates, under certain conditions, oscillations of the solutions, which is an impossible phenomenon for the corresponding equation without the argument deviations. Criteria for existence of periodic solutions of such equations are discussed. New criteria extend and improve related results reported in the literature. The efficiency of our criteria is illustrated via several numerical examples and simulations.

Journal ArticleDOI
TL;DR: A Triangularly Preconditioned Primal-Dual algorithm is proposed for minimizing the sum of a Lipschitz-differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a linear mapping.
Abstract: This paper proposes the Triangularly Preconditioned Primal-Dual algorithm, a new primal-dual algorithm for minimizing the sum of a Lipschitz-differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a linear mapping. We devise a randomized block-coordinate (BC) version of the algorithm which converges under the same stepsize conditions as the full algorithm. It is shown that both the original and the BC scheme feature a linear convergence rate when the functions involved are either piecewise linear-quadratic or satisfy a certain quadratic growth condition (which is weaker than strong convexity). Moreover, we apply the developed algorithms to the problem of multiagent optimization on a graph, thus obtaining novel synchronous and asynchronous distributed methods. The proposed algorithms are fully distributed in the sense that the updates and the stepsizes of each agent only depend on local information. In fact, no prior global coordination is required. Finally, we showcase an application of our algorithm in distributed formation control.

Journal ArticleDOI
TL;DR: This paper introduces a transportation resource sharing strategy to address the multi-depot green vehicle routing problem, and incorporates the time-dependency of speed as well as piecewise penalty costs for earliness and tardiness of deliveries.
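
The piecewise penalty for earliness and tardiness typically has the following shape: zero inside the delivery window and linear outside it. The window bounds and rates below are illustrative assumptions, not values from the paper.

```python
def time_window_penalty(arrival, earliest=8.0, latest=10.0,
                        early_rate=5.0, late_rate=12.0):
    """Piecewise linear penalty: zero inside the time window, linear outside it."""
    if arrival < earliest:
        return early_rate * (earliest - arrival)   # earliness cost
    if arrival > latest:
        return late_rate * (arrival - latest)      # tardiness cost
    return 0.0

print(time_window_penalty(7.5), time_window_penalty(9.0), time_window_penalty(11.0))
```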

Posted Content
TL;DR: A scheme for discretization of continuous-time data is proposed by considering the quantiles of the estimated event-time distribution, and, for smaller data sets, it is found to be preferable over the commonly used equidistant scheme.
Abstract: Application of discrete-time survival methods for continuous-time survival prediction is considered. For this purpose, a scheme for discretization of continuous-time data is proposed by considering the quantiles of the estimated event-time distribution, and, for smaller data sets, it is found to be preferable over the commonly used equidistant scheme. Furthermore, two interpolation schemes for continuous-time survival estimates are explored, both of which are shown to yield improved performance compared to the discrete-time estimates. The survival methods considered are based on the likelihood for right-censored survival data, and parameterize either the probability mass function (PMF) or the discrete-time hazard rate, both with neural networks. Through simulations and study of real-world data, the hazard rate parametrization is found to perform slightly better than the parametrization of the PMF. Inspired by these investigations, a continuous-time method is proposed by assuming that the continuous-time hazard rate is piecewise constant. The method, named PC-Hazard, is found to be highly competitive with the aforementioned methods in addition to other methods for survival prediction found in the literature.
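
Under the piecewise constant hazard assumption behind PC-Hazard, the survival curve has a simple closed form. A small sketch (the interval boundaries and hazard values are illustrative; in the method itself the rates would be produced by a neural network):

```python
import numpy as np

def survival_from_pc_hazard(t, cuts, hazards):
    """Survival S(t) for a hazard rate that is constant on each interval.

    `cuts` are the interval boundaries [t_0 = 0, t_1, ..., t_K]; `hazards`
    holds one rate per interval.  S(t) = exp(-integral of the hazard up to t).
    """
    cuts = np.asarray(cuts)
    hazards = np.asarray(hazards)
    widths = np.diff(cuts)
    cum = np.concatenate([[0.0], np.cumsum(hazards * widths)])
    k = np.searchsorted(cuts, t, side="right") - 1
    k = np.clip(k, 0, len(hazards) - 1)
    cum_hazard = cum[k] + hazards[k] * (t - cuts[k])
    return np.exp(-cum_hazard)

cuts = [0.0, 1.0, 2.0, 4.0]           # e.g. quantiles of the event-time distribution
hazards = [0.10, 0.25, 0.05]          # one constant rate per interval (made up)
print(survival_from_pc_hazard(np.array([0.5, 1.5, 3.0]), cuts, hazards))
```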

Journal ArticleDOI
TL;DR: In this article, the authors proposed three different distributed event-triggered control algorithms to achieve leader-follower consensus for a network of Euler-Lagrange agents.
Abstract: This paper proposes three different distributed event-triggered control algorithms to achieve leader–follower consensus for a network of Euler–Lagrange agents. We first propose two model-independent algorithms for a subclass of Euler–Lagrange agents without the vector of gravitational potential forces. By model-independent, we mean that each agent can execute its algorithm with no knowledge of the agent self-dynamics. A variable-gain algorithm is employed when the sensing graph is undirected; algorithm parameters are selected in a fully distributed manner with much greater flexibility compared to all previous work studying event-triggered consensus problems. When the sensing graph is directed, a constant-gain algorithm is employed. The control gains must be centrally designed to exceed several lower bounding inequalities, which require limited knowledge of bounds on the matrices describing the agent dynamics, bounds on network topology information, and bounds on the initial conditions. When the Euler–Lagrange agents have dynamics that include the vector of gravitational potential forces, an adaptive algorithm is proposed. This requires more information about the agent dynamics but allows for the estimation of uncertain parameters associated with the agent self-dynamics. For each algorithm, a trigger function is proposed to govern the event update times. The controller is only updated at each event, which ensures that the control input is piecewise constant and thus saves energy resources. We analyze each controller and trigger function to exclude Zeno behavior.
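
The energy saving comes from holding the control input piecewise constant between events. A scalar toy illustration with a single-integrator follower and a simple error-threshold trigger (this is not the paper's Euler-Lagrange setting or its trigger functions; all gains and thresholds are made up):

```python
def event_triggered_tracking(x0=0.0, leader=1.0, gain=2.0, dt=0.01,
                             steps=500, trigger_threshold=0.05):
    """Simulate event-triggered tracking: the input is recomputed only at events."""
    x, u = x0, 0.0
    x_at_event = x0
    events = 0
    for _ in range(steps):
        # Trigger: the state has drifted too far from its value at the last event
        if abs(x - x_at_event) > trigger_threshold or events == 0:
            u = -gain * (x - leader)      # recompute the control law
            x_at_event = x
            events += 1
        x += dt * u                       # control held constant between events
    return x, events

state, n_events = event_triggered_tracking()
print(f"final state {state:.3f} after only {n_events} control updates (of 500 steps)")
```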

Journal ArticleDOI
TL;DR: The key feature of the proposed PLS-based difference imaging approach is that the conductivity change to be reconstructed is assumed to be piecewise constant, while the geometry of the anomaly is represented by a shape-based PLS function employing Gaussian radial basis functions (GRBFs).
Abstract: This paper presents a novel difference imaging approach based on the recently developed parametric level set (PLS) method for estimating the change in a target conductivity from electrical impedance tomography measurements. As in conventional difference imaging, the reconstruction of conductivity change is based on data sets measured from the surface of a body before and after the change. The key feature of the proposed approach is that the conductivity change to be reconstructed is assumed to be piecewise constant, while the geometry of the anomaly is represented by a shape-based PLS function employing Gaussian radial basis functions (GRBFs). The representation of the PLS function by using GRBF provides flexibility in describing a large class of shapes with fewer unknowns. This feature is advantageous, as it may significantly reduce the overall number of unknowns, improve the condition number of the inverse problem, and enhance the computational efficiency of the technique. To evaluate the proposed PLS-based difference imaging approach, results obtained via simulation, phantom study, and in vivo pig data are studied. We find that the proposed approach tolerates more modeling errors and leads to a significant improvement in image quality compared with the conventional linear approach.
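
The shape parametrisation can be illustrated with a handful of Gaussian radial basis functions: the level set function is a weighted sum of GRBFs, and the conductivity change is piecewise constant, taking one value where the level set exceeds a level and another elsewhere. The centres, widths, level, and values below are arbitrary illustrations, not the paper's choices.

```python
import numpy as np

def grbf_level_set(points, centres, weights, width):
    """Parametric level set phi(x) = sum_i w_i * exp(-|x - c_i|^2 / (2 * width^2))."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return (weights * np.exp(-d2 / (2.0 * width ** 2))).sum(axis=-1)

def conductivity_change(points, centres, weights, width, level=0.5,
                        delta_inside=1.0, delta_outside=0.0):
    """Piecewise constant change: delta_inside where phi > level, delta_outside elsewhere."""
    phi = grbf_level_set(points, centres, weights, width)
    return np.where(phi > level, delta_inside, delta_outside)

# Two GRBF "blobs" evaluated on a coarse grid of candidate pixel centres
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), axis=-1).reshape(-1, 2)
centres = np.array([[0.3, 0.3], [0.7, 0.6]])
weights = np.array([1.0, 0.8])
print(conductivity_change(grid, centres, weights, width=0.15).reshape(5, 5))
```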

Journal ArticleDOI
TL;DR: The exponential stability of piecewise pseudo almost periodic solutions for neutral-type inertial neural networks with time-varying, infinite-time distributed delays (mixed delays) and impulses is investigated, and some sufficient conditions are presented.

Posted Content
TL;DR: It is proved that using both sine and ReLU activations theoretically leads to very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations.
Abstract: We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results. In particular, we generalize the existing result on the existence of deep discontinuous phase in ReLU networks to functional classes of arbitrary positive smoothness, and identify the boundary between the feasible and infeasible rates. Moreover, we show that all networks with a piecewise polynomial activation function have the same phase diagram. Next, we demonstrate that standard fully-connected architectures with a fixed width independent of smoothness can adapt to smoothness and achieve almost optimal rates. Finally, we consider deep networks with periodic activations ("deep Fourier expansion") and prove that they have very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations.

Journal ArticleDOI
TL;DR: It is shown that even linear rates are expected for Bregman projections with respect to smooth or piecewise linear-quadratic functions, and also the regularized nuclear norm, which is used in the area of low rank matrix problems.
Abstract: The randomized version of the Kaczmarz method for the solution of consistent linear systems is known to converge linearly in expectation. And even in the possibly inconsistent case, when only noisy data is given, the iterates are expected to reach an error threshold in the order of the noise-level with the same rate as in the noiseless case. In this work we show that the same also holds for the iterates of the recently proposed randomized sparse Kaczmarz method for recovery of sparse solutions. Furthermore we consider the more general setting of convex feasibility problems and their solution by the method of randomized Bregman projections. This is motivated by the observation that, similarly to the Kaczmarz method, the Sparse Kaczmarz method can also be interpreted as an iterative Bregman projection method to solve a convex feasibility problem. We obtain expected sublinear rates for Bregman projections with respect to a general strongly convex function. Moreover, even linear rates are expected for Bregman projections with respect to smooth or piecewise linear-quadratic functions, and also the regularized nuclear norm, which is used in the area of low rank matrix problems.
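
A compact sketch of the randomized sparse Kaczmarz iteration discussed here, i.e. the Bregman projection method for f(x) = lambda*||x||_1 + 0.5*||x||_2^2: an ordinary Kaczmarz step on an auxiliary variable followed by soft thresholding. The problem size, lambda, and sampling rule are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=0.5, n_iters=5000, seed=0):
    """Randomized sparse Kaczmarz: a Bregman projection method for A x = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = (A ** 2).sum(axis=1)
    probs = row_norms2 / row_norms2.sum()        # sample rows proportionally to their norm
    z = np.zeros(n)                              # auxiliary variable
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        z -= (A[i] @ x - b[i]) / row_norms2[i] * A[i]   # ordinary Kaczmarz step on z
        x = soft_threshold(z, lam)                      # Bregman/proximal step -> sparse iterate
    return x

# Recover a sparse vector from a consistent system of equations
rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = randomized_sparse_kaczmarz(A, b)
print("entries on the true support:", np.round(x_hat[[3, 17, 29]], 2))
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```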

Journal ArticleDOI
TL;DR: Sufficient conditions for the heterogeneous multi-agent systems with multiple leaders and switching directed topologies to achieve the desired time-varying output formation tracking under the designed protocol are proposed and simulation examples are given to validate the theoretical results.
Abstract: This paper studies the time-varying output formation tracking problems for heterogeneous linear multi-agent systems with multiple leaders in the presence of switching directed topologies, where the agents can have different system dynamics and state dimensions. The outputs of followers are required to accomplish a given time-varying formation configuration and track the convex combination of leaders’ outputs simultaneously. Firstly, using the neighboring relative information, a distributed observer is constructed for each follower to estimate the convex combination of multiple leaders’ states under the influences of switching directed topologies. The convergence of the observer is proved based on the piecewise Lyapunov theory and the threshold for the average dwell time of the switching topologies is derived. Then, an output formation tracking protocol based on the distributed observer and an algorithm to determine the control parameters of the protocol are presented. Considering the features of heterogeneous dynamics, the time-varying formation tracking feasible constraints are provided, and a compensation input is applied to expand the feasible formation set. Sufficient conditions for the heterogeneous multi-agent systems with multiple leaders and switching directed topologies to achieve the desired time-varying output formation tracking under the designed protocol are proposed. Finally, simulation examples are given to validate the theoretical results.

Journal ArticleDOI
TL;DR: In this article, a linear quadratic mean field game problem with integral control in the cost coefficients is formulated, and a suitable Banach space is introduced to establish the existence of Nash equilibria for the corresponding infinite population game.

Journal ArticleDOI
TL;DR: This work uses a piecewise time function with different parameters at different time periods to more accurately model the real system, and adopts L2 regularization to solve the overfitting problem and enhance the generalization performance.

Journal ArticleDOI
TL;DR: This paper proposes to combine the internal smoothness prior and the external gradient consistency constraint in the graph domain for depth super-resolution, and reinterprets the gradient thresholding model as variational optimization with a sparsity constraint.
Abstract: Depth information is being widely used in many real-world applications. However, due to the limitation of depth sensing technology, the captured depth map in practice usually has much lower resolution than that of its color image counterpart. In this paper, we propose to combine the internal smoothness prior and the external gradient consistency constraint in the graph domain for depth super-resolution. On one hand, a new graph Laplacian regularizer is proposed to preserve the inherent piecewise smooth characteristic of depth, which has desirable filtering properties. A specific weight matrix of the respective graph is defined to make full use of information of both depth and the corresponding guidance image. On the other hand, inspired by the observation that the gradient of depth is small except at edges separating regions, we introduce a graph gradient consistency constraint to enforce that the graph gradient of depth is close to the thresholded gradient of guidance. We reinterpret the gradient thresholding model as variational optimization with a sparsity constraint. In this way, we remedy the problem of structure discrepancy between depth and guidance. Finally, the internal and external regularizations are cast into a unified optimization framework, which can be efficiently addressed by ADMM. Experimental results demonstrate that our method outperforms the state-of-the-art with respect to both objective and subjective quality evaluations.
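
The internal smoothness prior can be illustrated on its own: build a graph over depth pixels with edge weights from the guidance image, form the graph Laplacian, and solve a small quadratic problem. This closed-form solve stands in for the paper's full model, which also includes the gradient consistency term and is solved by ADMM; all sizes and weights below are toy values.

```python
import numpy as np

def grid_graph_laplacian(guidance, sigma=0.1):
    """4-neighbour graph Laplacian with edge weights from guidance-image similarity."""
    h, w = guidance.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):            # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-(guidance[r, c] - guidance[rr, cc]) ** 2 / (2 * sigma ** 2))
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = wgt
    return np.diag(W.sum(axis=1)) - W

def graph_smoothed_depth(noisy_depth, guidance, mu=2.0):
    """Solve min_d ||d - d0||^2 + mu * d^T L d, i.e. (I + mu L) d = d0."""
    L = grid_graph_laplacian(guidance)
    d0 = noisy_depth.ravel()
    d = np.linalg.solve(np.eye(len(d0)) + mu * L, d0)
    return d.reshape(noisy_depth.shape)

# Tiny example: a two-region depth map, noisy, with a clean guidance image
guidance = np.zeros((8, 8))
guidance[:, 4:] = 1.0
depth = guidance * 3.0 + np.random.default_rng(4).normal(scale=0.3, size=(8, 8))
print(np.round(graph_smoothed_depth(depth, guidance), 2))
```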

Journal ArticleDOI
TL;DR: An improved piecewise Lyapunov–Krasovskii functional (LKF) is constructed to take full advantage of the characteristics of the real sampling pattern, and some relaxed matrices introduced in the LKF are not required to be positive definite.

Journal ArticleDOI
TL;DR: This study shows how to obtain least-squares solutions to initial and boundary value problems of nonlinear ordinary differential equations by using an approximate solution obtained from any existing integrator together with a constrained expression derived from the Theory of Connections.

Journal ArticleDOI
TL;DR: In this paper, the authors obtained asymptotics of large Hankel determinants whose weight depends on a one-cut regular potential and any number of Fisher-Hartwig singularities.
Abstract: We obtain asymptotics of large Hankel determinants whose weight depends on a one-cut regular potential and any number of Fisher-Hartwig singularities. This generalises two results: 1) a result of Berestycki, Webb and Wong [5] for root-type singularities, and 2) a result of Its and Krasovsky [37] for a Gaussian weight with a single jump-type singularity. We show that when we apply a piecewise constant thinning on the eigenvalues of a random Hermitian matrix drawn from a one-cut regular ensemble, the gap probability in the thinned spectrum, as well as correlations of the characteristic polynomial of the associated conditional point process, can be expressed in terms of these determinants.

Journal ArticleDOI
TL;DR: Cluster synchronization conditions are proposed under which the synchronization error dynamics with the dynamic event-triggered rule are stable, and a numerical example is presented to show the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: An iterative algorithm is proposed that exploits piecewise convex relaxation approaches via disjunctive formulations to solve MINLPs, differs from conventional spatial branch-and-bound approaches, and reduces the best known optimality gap for a hard, generalized pooling problem instance.
Abstract: In this work, we develop an adaptive, multivariate partitioning algorithm for solving nonconvex, Mixed-Integer Nonlinear Programs (MINLPs) with polynomial functions to global optimality. In particular, we present an iterative algorithm that exploits piecewise convex relaxation approaches via disjunctive formulations to solve MINLPs, which differs from conventional spatial branch-and-bound approaches. The algorithm partitions the domains of variables in an adaptive and non-uniform manner at every iteration to focus on productive areas of the search space. Furthermore, domain reduction techniques based on sequential, optimization-based bound-tightening and piecewise relaxation techniques, as a part of a presolve step, are integrated into the main algorithm. Finally, we demonstrate the effectiveness of the algorithm on well-known benchmark problems (including Pooling and Blending instances) from MINLPLib and compare our algorithm with state-of-the-art global optimization solvers. With our novel approach, we solve several large-scale instances, some of which are not solvable by state-of-the-art solvers. We also succeed in reducing the best known optimality gap for a hard, generalized pooling problem instance.
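
A tiny illustration of why piecewise relaxations help: the classical McCormick envelope of a bilinear term tightens when it is applied only on the sub-interval of the partition that contains the variable. The paper's relaxations are built via disjunctive formulations for general polynomial terms, so this bilinear example with made-up bounds is only indicative.

```python
import numpy as np

def piecewise_mccormick_bounds(x, y, x_breaks, y_lo, y_hi):
    """McCormick bounds on the bilinear term w = x*y, tightened piecewise in x.

    `x_breaks` partitions x's domain; the bounds use only the sub-interval that
    contains x, which is how piecewise relaxations tighten the classical
    single-interval McCormick envelope.
    """
    k = np.searchsorted(x_breaks, x, side="right") - 1
    k = min(max(k, 0), len(x_breaks) - 2)
    x_lo, x_hi = x_breaks[k], x_breaks[k + 1]
    lower = max(x_lo * y + x * y_lo - x_lo * y_lo,
                x_hi * y + x * y_hi - x_hi * y_hi)
    upper = min(x_hi * y + x * y_lo - x_hi * y_lo,
                x_lo * y + x * y_hi - x_lo * y_hi)
    return lower, upper

# The envelope around x*y tightens as the partition of x is refined
for breaks in ([0.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]):
    lo, hi = piecewise_mccormick_bounds(x=2.5, y=1.5, x_breaks=breaks, y_lo=0.0, y_hi=3.0)
    print(f"{len(breaks) - 1} piece(s): {lo:.2f} <= w <= {hi:.2f}  (true w = {2.5 * 1.5:.2f})")
```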

Journal ArticleDOI
TL;DR: The null-boundaries of WdW patches in 3D Poincare-AdS were studied in this article, where the selected boundary timeslice is an arbitrary (non-constant) function.
Abstract: We study the null-boundaries of Wheeler-de Witt (WdW) patches in three dimensional Poincare-AdS, when the selected boundary timeslice is an arbitrary (non-constant) function, presenting some useful analytic statements about them. Special attention will be given to the piecewise smooth nature of the null-boundaries, due to the emergence of caustics and null-null joint curves. This is then applied, in the spirit of one of our previous papers, to the problem of how the complexity of the CFT2 groundstate changes under a small local conformal transformation according to the action (CA) proposal. In stark contrast to the volume (CV) proposal, where this change is only proportional to the second order in the infinitesimal expansion parameter σ, we show that in the CA case we obtain terms of order σ and even σ log(σ). This has strong implications for the possible field-theory duals of the CA proposal, ruling out an entire class of them.