
Showing papers on "Affine transformation" published in 2018


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a spatial feature transform (SFT) layer was proposed to generate affine transformation parameters for spatial-wise feature modulation in a single-image super-resolution network.
Abstract: Although convolutional neural networks (CNNs) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate the features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, the network accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures than the state-of-the-art SRGAN [27] and EnhanceNet [38].
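To make the mechanism concrete, here is a minimal PyTorch-style sketch of a spatial feature transform layer as described in the abstract; the module name, channel sizes, and the small two-convolution condition branches are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' code) of a Spatial Feature Transform (SFT) layer:
# a small conv branch maps segmentation-derived conditions to per-location affine
# parameters (gamma, beta), which modulate intermediate SR features as gamma*F + beta.
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    def __init__(self, feat_channels=64, cond_channels=32):  # channel sizes assumed
        super().__init__()
        self.gamma = nn.Sequential(
            nn.Conv2d(cond_channels, feat_channels, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_channels, feat_channels, 1))
        self.beta = nn.Sequential(
            nn.Conv2d(cond_channels, feat_channels, 1), nn.LeakyReLU(0.1),
            nn.Conv2d(feat_channels, feat_channels, 1))

    def forward(self, features, conditions):
        # spatial-wise affine modulation of the SR feature maps
        return features * self.gamma(conditions) + self.beta(conditions)
```

The key point is that the conditioning branch produces one (gamma, beta) pair per channel and spatial location, so the modulation is an affine transformation applied feature-map-wise rather than a global scaling.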

597 citations


Proceedings Article
15 Feb 2018
TL;DR: In this paper, the authors demonstrate the existence of robust 3D adversarial objects and present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations, synthesizing two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
Abstract: Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.
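For reference, the synthesis objective described here is usually written as an expectation over a transformation distribution $T$; the notation below is a hedged paraphrase of the abstract, not a quotation from the paper.

$$\hat{x} = \arg\max_{x'} \; \mathbb{E}_{t \sim T}\big[\log P\big(y_{\text{target}} \mid t(x')\big)\big] \quad \text{subject to} \quad \mathbb{E}_{t \sim T}\big[d\big(t(x'),\, t(x)\big)\big] < \epsilon,$$

so the perturbation is optimized to remain adversarial on average under viewpoint shifts, noise, and affine transformations, rather than for a single rendering of the object.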

572 citations


Posted Content
TL;DR: It is shown that it is possible to recover textures faithful to semantic classes in a single network conditioned on semantic segmentation probability maps through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation.
Abstract: Although convolutional neural networks (CNNs) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate the features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, the network accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures than the state-of-the-art SRGAN and EnhanceNet.

269 citations


Journal ArticleDOI
TL;DR: This paper proposes to affine-transform the covariance matrices of every session/subject so as to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable and providing a significant improvement in the BCI transfer learning problem.
Abstract: Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine-transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum distance to mean classifier, and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in terms of classification performance achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Through the proposed affine transformation, we make data from different sessions and subjects comparable, providing a significant improvement in the BCI transfer learning problem.
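A minimal NumPy sketch of the recentering idea, assuming a simple Euclidean reference mean for brevity (the paper works with Riemannian references on the SPD manifold); function and variable names are mine, not the authors'.

```python
# Minimal sketch (assumption, not the authors' code) of affine recentering:
# each session's covariance matrices C_i are mapped to M^{-1/2} C_i M^{-1/2},
# where M is a reference covariance for that session, so all sessions become
# centered at the identity and hence comparable.
import numpy as np
from scipy.linalg import sqrtm, inv

def recenter_covariances(covs):
    """covs: array of shape (n_trials, n_channels, n_channels) of SPD matrices."""
    M = covs.mean(axis=0)              # reference covariance (Euclidean mean assumed)
    M_inv_sqrt = inv(sqrtm(M)).real    # M^{-1/2}
    return np.stack([M_inv_sqrt @ C @ M_inv_sqrt for C in covs])
```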

241 citations


Journal ArticleDOI
TL;DR: A novel affine formation maneuver control approach achieves the two subtasks simultaneously by relying on stress matrices, which can be viewed as generalized graph Laplacian matrices with both positive and negative edge weights.
Abstract: A multiagent formation control task usually consists of two subtasks. The first is to steer the agents to form a desired geometric pattern, and the second is to achieve desired collective maneuvers so that the centroid, orientation, scale, and other geometric parameters of the formation can be changed continuously. This paper proposes a novel affine formation maneuver control approach to achieve the two subtasks simultaneously. The proposed approach relies on stress matrices, which can be viewed as generalized graph Laplacian matrices with both positive and negative edge weights. The proposed control laws can track any target formation that is a time-varying affine transformation of a nominal configuration. The centroid, orientation, scales in different directions, and even geometric pattern of the formation can all be changed continuously. The desired formation maneuvers are only known by a small number of agents called leaders, and the rest of the agents called followers only need to follow the leaders. The proposed control laws are globally stable and do not require global reference frames if the required measurements can be measured in each agent's local reference frame.
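In the notation commonly used for this line of work (an assumption, since the abstract does not fix symbols), the target formations tracked by the proposed laws are time-varying affine images of a nominal configuration $r = [r_1^\top,\dots,r_n^\top]^\top$ with $r_i \in \mathbb{R}^d$:

$$p^*(t) = \big(I_n \otimes A(t)\big)\,r + \mathbf{1}_n \otimes b(t), \qquad A(t)\in\mathbb{R}^{d\times d},\ b(t)\in\mathbb{R}^d,$$

and a stress matrix $\Omega$ of the nominal configuration satisfies $(\Omega \otimes I_d)\,p^*(t) = 0$ for every such target, which is what lets stress-matrix-based laws track the whole affine family with only the leaders knowing $A(t)$ and $b(t)$.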

181 citations


Journal ArticleDOI
TL;DR: In this article, the Moving Least Squares Material Point Method (MLS-MPM) is used to simulate material cutting, dynamic open boundaries, and two-way coupling with rigid bodies.
Abstract: In this paper, we introduce the Moving Least Squares Material Point Method (MLS-MPM). MLS-MPM naturally leads to the formulation of Affine Particle-In-Cell (APIC) [Jiang et al. 2015] and Polynomial Particle-In-Cell [Fu et al. 2017] in a way that is consistent with a Galerkin-style weak form discretization of the governing equations. Additionally, it enables a new stress divergence discretization that effortlessly allows all MPM simulations to run two times faster than before. We also develop a Compatible Particle-In-Cell (CPIC) algorithm on top of MLS-MPM. Utilizing a colored distance field representation and a novel compatibility condition for particles and grid nodes, our framework enables the simulation of various new phenomena that were not previously supported by MPM, including material cutting, dynamic open boundaries, and two-way coupling with rigid bodies. MLS-MPM with CPIC is easy to implement and friendly to performance optimization.

160 citations


Proceedings ArticleDOI
01 Jun 2018
TL;DR: In this article, a graph-cut RANSAC (GC-RANSAC) algorithm is proposed to separate inliers and outliers in the local optimization step, which is applied when a so-far-the-best model is found.
Abstract: A novel method for robust estimation, called Graph-Cut RANSAC, GC-RANSAC in short, is introduced. To separate inliers and outliers, it runs the graph-cut algorithm in the local optimization (LO) step which is applied when a so-far-the-best model is found. The proposed LO step is conceptually simple, easy to implement, globally optimal and efficient. GC-RANSAC is shown experimentally, both on synthesized tests and real image pairs, to be more geometrically accurate than state-of-the-art methods on a range of problems, e.g. line fitting, homography, affine transformation, fundamental and essential matrix estimation. It runs in real-time for many problems at a speed approximately equal to that of the less accurate alternatives (in milliseconds on standard CPU).

159 citations


Journal ArticleDOI
TL;DR: Wang et al. propose an effective online background subtraction method that can be robustly applied to practical videos with variations in both foreground and background.
Abstract: We propose an effective online background subtraction method, which can be robustly applied to practical videos that have variations in both foreground and background. Different from previous methods which often model the foreground as Gaussian or Laplacian distributions, we model the foreground for each frame with a specific mixture of Gaussians (MoG) distribution, which is updated online frame by frame. Particularly, our MoG model in each frame is regularized by the learned foreground/background knowledge in previous frames. This makes our online MoG model highly robust, stable and adaptive to practical foreground and background variations. The proposed model can be formulated as a concise probabilistic MAP model, which can be readily solved by the EM algorithm. We further embed an affine transformation operator into the proposed model, which can be automatically adjusted to fit a wide range of video background transformations and make the method more robust to camera movements. By using the sub-sampling technique, the proposed method can be accelerated to execute more than 250 frames per second on average, meeting the requirement of real-time background subtraction for practical video processing tasks. The superiority of the proposed method is substantiated by extensive experiments implemented on synthetic and real videos, as compared with state-of-the-art online and offline background subtraction methods.

152 citations


Journal ArticleDOI
TL;DR: It is proved that if $(R,B,L)$ forms a Hadamard triple on $\mathbb{R}^d$, then the associated fractal self-affine measure $\mu(R,B)$ is a spectral measure, settling a long-standing conjecture of Jorgensen and Pedersen.
Abstract: Let $R$ be an expanding matrix with integer entries, and let $B, L$ be finite integer digit sets so that $(R,B,L)$ forms a Hadamard triple on $\mathbb{R}^d$ in the sense that the matrix
$$\frac{1}{\sqrt{|\det R|}}\left[e^{2\pi i \langle R^{-1}b,\ell\rangle}\right]_{\ell\in L,\, b\in B}$$
is unitary. We prove that the associated fractal self-affine measure $\mu = \mu(R,B)$, obtained by the infinite convolution of atomic measures
$$\mu(R,B) = \delta_{R^{-1}B} \ast \delta_{R^{-2}B} \ast \delta_{R^{-3}B} \ast \cdots,$$
is a spectral measure, i.e., it admits an orthonormal basis of exponential functions in $L^2(\mu)$. This settles a long-standing conjecture proposed by Jorgensen and Pedersen and studied by many other authors. Moreover, we also show that if we relax the Hadamard triple condition to an almost-Parseval-frame condition, then we obtain a sufficient condition for a self-affine measure to admit Fourier frames.
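For orientation, the classical one-dimensional example covered by this theorem (stated here from standard sources, not quoted from the paper) is the quarter Cantor measure of Jorgensen and Pedersen:

$$R = 4,\quad B = \{0,2\},\quad L = \{0,1\}, \qquad \frac{1}{\sqrt{2}}\left[e^{2\pi i\, b\ell/4}\right]_{\ell\in L,\, b\in B} = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix},$$

which is unitary, so $(4,\{0,2\},\{0,1\})$ is a Hadamard triple; the associated self-affine measure is spectral with spectrum $\Lambda = \big\{\sum_{k=0}^{n} 4^{k}\ell_k : \ell_k \in \{0,1\},\ n \ge 0\big\}$.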

138 citations


Proceedings Article
27 Sep 2018
TL;DR: In this paper, a gradient-norm-preserving activation function, GroupSort, is proposed and combined with norm-constrained weight matrices, yielding architectures that can achieve provable adversarial robustness guarantees with little cost to accuracy.
Abstract: Training neural networks under a strict Lipschitz constraint is useful for provable adversarial robustness, generalization bounds, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation is 1-Lipschitz. The challenge is to do this while maintaining the expressive power. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.
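A minimal NumPy sketch of the GroupSort activation described here (shapes and the function name are my assumptions); with a group size of two it reduces to a MaxMin pairing.

```python
# Minimal sketch (assumption, not the authors' code) of the GroupSort activation:
# activations are split into groups and sorted within each group, which permutes
# rather than attenuates values and so preserves the gradient norm.
import numpy as np

def group_sort(x, group_size=2):
    """x: array of shape (batch, features); features must be divisible by group_size."""
    b, n = x.shape
    assert n % group_size == 0
    grouped = x.reshape(b, n // group_size, group_size)
    return np.sort(grouped, axis=-1).reshape(b, n)

x = np.array([[3.0, -1.0, 0.5, 2.0]])
y = group_sort(x)   # pairs (3, -1) and (0.5, 2) are sorted to (-1, 3) and (0.5, 2)
```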

114 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that any ergodic measure invariant under the action of the upper triangular subgroup of $\mathrm{SL}(2,\mathbb{R})$ is supported on an invariant affine submanifold.
Abstract: We prove some ergodic-theoretic rigidity properties of the action of $\mathrm{SL}(2,\mathbb{R})$ on moduli space. In particular, we show that any ergodic measure invariant under the action of the upper triangular subgroup of $\mathrm{SL}(2,\mathbb{R})$ is supported on an invariant affine submanifold. The main theorems are inspired by the results of several authors on unipotent flows on homogeneous spaces, and in particular by Ratner's seminal work.

Posted Content
TL;DR: This work identifies a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. It proposes to combine a gradient-norm-preserving activation function, GroupSort, with norm-constrained weight matrices, and shows that the resulting architectures are universal Lipschitz function approximators.
Abstract: Training neural networks under a strict Lipschitz constraint is useful for provable adversarial robustness, generalization bounds, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation is 1-Lipschitz. The challenge is to do this while maintaining the expressive power. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.

Book ChapterDOI
08 Sep 2018
TL;DR: It is shown that maximizing geometric repeatability does not lead to local regions (a.k.a. features) that are reliably matched, which necessitates descriptor-based learning; a novel hard negative-constant loss function is proposed for learning affine regions.
Abstract: A method for learning local affine-covariant regions is presented. We show that maximizing geometric repeatability does not lead to local regions, a.k.a features, that are reliably matched and this necessitates descriptor-based learning. We explore factors that influence such learning and registration: the loss function, descriptor type, geometric parametrization and the trade-off between matchability and geometric accuracy and propose a novel hard negative-constant loss function for learning of affine regions. The affine shape estimator – AffNet – trained with the hard negative-constant loss outperforms the state-of-the-art in bag-of-words image retrieval and wide baseline stereo. The proposed training process does not require precisely geometrically aligned patches. The source codes and trained weights are available at https://github.com/ducha-aiki/affnet.

Journal ArticleDOI
TL;DR: This paper addresses the problem of delay-dependent robust and reliable static output feedback (SOF) control for a class of uncertain discrete-time Takagi–Sugeno fuzzy-affine systems with time-varying delay and actuator faults in a singular system framework.
Abstract: This paper addresses the problem of delay-dependent robust and reliable $\mathscr {H}_{\infty }$ static output feedback (SOF) control for a class of uncertain discrete-time Takagi–Sugeno fuzzy-affine (FA) systems with time-varying delay and actuator faults in a singular system framework. The Markov chain is employed to describe the actuator faults behaviors. In particular, by utilizing a system augmentation approach, the conventional closed-loop system is converted into a singular FA system. By constructing a piecewise-Markovian Lyapunov–Krasovskii functional, a new $\mathscr {H}_{\infty }$ performance analysis criterion is then presented, where a novel summation inequality and S-procedure are succeedingly employed. Subsequently, thanks to the special structure of the singular system formulation, the piecewise-affine SOF controller design is proposed via a convex program. Lastly, illustrative examples are given to show the efficacy and less conservatism of the presented approach.

Journal ArticleDOI
TL;DR: A simplified affine motion model-based coding framework is studied to overcome the limitations of the translational motion model while maintaining low computational complexity.
Abstract: In this paper, we study a simplified affine motion model-based coding framework to overcome the limitations of the translational motion model while maintaining low computational complexity. The proposed framework mainly has three key contributions. First, we propose to reduce the number of affine motion parameters from 6 to 4. The proposed four-parameter affine motion model can not only handle most of the complex motions in natural videos, but also save the bits for two parameters. Second, to efficiently encode the affine motion parameters, we propose two motion prediction modes, i.e., an advanced affine motion vector prediction scheme combined with a gradient-based fast affine motion estimation algorithm, and an affine model merge scheme, where the latter attempts to reuse the affine motion parameters (instead of the motion vectors) of neighboring blocks. Third, we propose two fast affine motion compensation algorithms. One is the one-step sub-pixel interpolation that reduces the computations of each interpolation. The other is the interpolation-precision-based adaptive block size motion compensation that performs motion compensation at the block level rather than the pixel level to reduce the number of interpolations. Our proposed techniques have been implemented based on the state-of-the-art High Efficiency Video Coding standard, and the experimental results show that the proposed techniques altogether achieve, on average, 11.1% and 19.3% bit savings for random access and low-delay configurations, respectively, on typical video sequences that have rich rotation or zooming motions. Meanwhile, the increases in computational complexity of both the encoder and the decoder are within an acceptable range.
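The four-parameter model referred to here is commonly written as follows (a standard form for this model family; the paper's exact parametrization and control-point convention may differ):

$$mv^{x}(x,y) = a\,x - b\,y + c, \qquad mv^{y}(x,y) = b\,x + a\,y + d,$$

which covers rotation, zooming, and translation; equivalently, with control-point motion vectors $(v_{0x}, v_{0y})$ and $(v_{1x}, v_{1y})$ at the top-left and top-right corners of a block of width $w$,

$$mv^{x} = \frac{v_{1x}-v_{0x}}{w}\,x - \frac{v_{1y}-v_{0y}}{w}\,y + v_{0x}, \qquad mv^{y} = \frac{v_{1y}-v_{0y}}{w}\,x + \frac{v_{1x}-v_{0x}}{w}\,y + v_{0y}.$$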

Journal ArticleDOI
TL;DR: In this article, the grey polynomial model is used to handle original sequences that follow a more general trend, rather than the special homogeneous or non-homogeneous trends.

Journal ArticleDOI
TL;DR: An enhanced structure for the Differential Evolution algorithm with fewer control parameters to be tuned is proposed, and an enhanced mutation strategy with a time stamp mechanism is also introduced in this paper.
Abstract: Optimization demands are ubiquitous in science and engineering. The key point is that the approach used to tackle a complex optimization problem should not itself be difficult. Differential Evolution (DE) is such a simple method, and it is arguably a very powerful stochastic real-parameter algorithm for single-objective optimization. However, the performance of DE is highly dependent on its control parameters and mutation strategies. Both tuning the control parameters and selecting the proper mutation strategy are still tedious but important tasks for users. In this paper, we propose an enhanced structure for the DE algorithm with fewer control parameters to be tuned. The crossover rate control parameter Cr is replaced by an automatically generated evolution matrix, and the control parameter F can be renewed in an adaptive manner during the whole evolution. Moreover, an enhanced mutation strategy with a time stamp mechanism is introduced as well. The CEC2013 test suite for real-parameter single-objective optimization is employed to verify the proposed algorithm. Experimental results show that our proposed algorithm is competitive with several well-known DE variants.
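For context, here is a minimal NumPy sketch of the classic DE/rand/1/bin baseline that such variants build on; the fixed Cr and F below are exactly the tuning burden the paper removes (its evolution-matrix and adaptive-F machinery is not reproduced), and all names are mine.

```python
# Classic DE/rand/1/bin baseline (not the proposed variant): mutation a + F*(b - c),
# binomial crossover with rate Cr, and greedy one-to-one selection.
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, Cr=0.9, generations=200, seed=0):
    """Minimize f over the box `bounds` (d x 2 array of [low, high])."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # DE/rand/1 mutation
            cross = rng.random(d) < Cr
            cross[rng.integers(d)] = True                  # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# Example: minimize the 5-dimensional sphere function.
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x**2)), np.array([[-5.0, 5.0]] * 5))
```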

Journal ArticleDOI
01 Jul 2018
TL;DR: An uncertainty-based control Lyapunov function utilizes the model fidelity estimate of a Gaussian process model to drive the system into areas near the training data with low uncertainty; this maximizes the probability that the system is stabilized in the presence of power constraints, shown via an equivalence to dynamic programming.
Abstract: Data-driven approaches in control allow for identification of highly complex dynamical systems with minimal prior knowledge. However, properly incorporating model uncertainty in the design of a stabilizing control law remains challenging. Therefore, this letter proposes a control Lyapunov function framework which semiglobally asymptotically stabilizes a partially unknown fully actuated control affine system with high probability. We propose an uncertainty-based control Lyapunov function which utilizes the model fidelity estimate of a Gaussian process model to drive the system in areas near training data with low uncertainty. We show that this behavior maximizes the probability that the system is stabilized in the presence of power constraints using equivalence to dynamic programming. A simulation on a nonlinear system is provided.

Journal ArticleDOI
Guannan Qu, Na Li
TL;DR: In this paper, the global exponential stability of primal-dual gradient dynamics for convex optimization with strongly-convex and smooth objectives and affine equality or inequality constraints is studied.
Abstract: Continuous time primal-dual gradient dynamics that find a saddle point of a Lagrangian of an optimization problem have been widely used in systems and control. While the global asymptotic stability of such dynamics has been well-studied, it is less studied whether they are globally exponentially stable. In this paper, we study the primal-dual gradient dynamics for convex optimization with strongly-convex and smooth objectives and affine equality or inequality constraints, and prove global exponential stability for such dynamics. Bounds on decaying rates are provided.
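For the equality-constrained case, the dynamics studied here take the standard saddle-point form (notation assumed, not quoted): with $L(x,\lambda) = f(x) + \lambda^{\top}(Ax-b)$,

$$\dot{x} = -\nabla_x L(x,\lambda) = -\nabla f(x) - A^{\top}\lambda, \qquad \dot{\lambda} = \nabla_{\lambda} L(x,\lambda) = Ax - b,$$

with a projection keeping $\lambda \ge 0$ in the inequality-constrained case; the contribution is proving that these dynamics are globally exponentially stable, not merely asymptotically stable, when $f$ is strongly convex and smooth.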

Proceedings ArticleDOI
12 Mar 2018
TL;DR: A new end-to-end deep neural network that predicts forgery masks is introduced for the image copy-move forgery detection problem; it uses a convolutional neural network to extract block-like features from an image, computes self-correlations between different blocks, uses a pointwise feature extractor to locate matching points, and reconstructs a forgery mask through a deconvolutional network.
Abstract: In this paper, for the first time, we introduce a new end-to-end deep neural network predicting forgery masks to the image copy-move forgery detection problem. Specifically, we use a convolutional neural network to extract block-like features from an image, compute self-correlations between different blocks, use a pointwise feature extractor to locate matching points, and reconstruct a forgery mask through a deconvolutional network. Unlike classic solutions requiring multiple stages of training and parameter tuning, ranging from feature extraction to postprocessing, the proposed solution is fully trainable and can be jointly optimized for the forgery mask reconstruction loss. Our experimental results demonstrate that the proposed method achieves better forgery detection performance than classic approaches relying on different features and matching schemes, and it is more robust against various known attacks like affine transformation, JPEG compression, blurring, etc.

Proceedings Article
15 Feb 2018
TL;DR: In this article, the authors argue that the final classification layer of a neural network can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits.
Abstract: Neural networks are commonly used as models for classification for a wide variety of tasks. Typically, a learned affine transformation is placed at the end of such models, yielding a per-class value used for classification. This classifier can have a vast number of parameters, which grows linearly with the number of possible classes, thus requiring increasingly more resources. In this work we argue that this classifier can be fixed, up to a global scale constant, with little or no loss of accuracy for most tasks, allowing memory and computational benefits. Moreover, we show that by initializing the classifier with a Hadamard matrix we can speed up inference as well. We discuss the implications for current understanding of neural network models.
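A minimal NumPy/SciPy sketch of the idea (an illustration under my own naming, not the authors' code): the last affine layer is held fixed at rows of a Hadamard matrix and only a global scale is learned, so the classifier contributes essentially no trainable parameters.

```python
# Minimal sketch (assumption, not the authors' code) of a fixed Hadamard classifier:
# class scores are computed against fixed, mutually orthogonal rows of a Hadamard
# matrix scaled by a single learned constant.
import numpy as np
from scipy.linalg import hadamard

def fixed_hadamard_classifier(features, num_classes, scale=1.0):
    """features: (batch, dim); dim must be a power of two and >= num_classes."""
    dim = features.shape[1]
    H = hadamard(dim)[:num_classes] / np.sqrt(dim)   # fixed, orthonormal rows
    return scale * features @ H.T                     # per-class scores (logits)

logits = fixed_hadamard_classifier(np.random.randn(4, 128), num_classes=10)
```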

Journal ArticleDOI
TL;DR: The objective is to design a memory piecewise affine (PWA) controller by using past output measurements to guarantee the asymptotic stability of the resulting closed-loop system with a prescribed finite frequency $\mathscr H_{\infty }$ performance.
Abstract: In this paper, we will investigate the problem of finite frequency memory fixed-order output feedback controller design for Takagi–Sugeno (T–S) fuzzy affine systems. It is assumed that the disturbances reside in a finite frequency range, i.e., the low, middle, or high frequency range. The objective is to design a memory piecewise affine (PWA) controller by using past output measurements to guarantee the asymptotic stability of the resulting closed-loop system with a prescribed finite frequency $\mathscr H_{\infty }$ performance. Via the system state-input augmentation, a novel descriptor system approach is proposed to facilitate the controller design. All the design conditions are formulated in the form of linear matrix inequalities. It is also proven that the $\mathscr H_{\infty }$ performance can be improved with the memory control strategy. Finally, simulation studies are presented to show the effectiveness of the proposed design method.

Posted Content
TL;DR: A rigorous bridge between deep networks (DNs) and approximation theory via spline functions and operators is built, and a simple penalty term is proposed that can be added to the cost function of any DN learning algorithm to force the templates to be orthogonal to each other.
Abstract: We build a rigorous bridge between deep networks (DNs) and approximation theory via spline functions and operators. Our key result is that a large class of DNs can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal through which to view and analyze their inner workings. For instance, conditioned on the input signal, the output of a MASO DN can be written as a simple affine transformation of the input. This implies that a DN constructs a set of signal-dependent, class-specific templates against which the signal is compared via a simple inner product; we explore the links to the classical theory of optimal classification via matched filters and the effects of data memorization. Going further, we propose a simple penalty term that can be added to the cost function of any DN learning algorithm to force the templates to be orthogonal to each other; this leads to significantly improved classification performance and reduced overfitting with no change to the DN architecture. The spline partition of the input signal space that is implicitly induced by a MASO directly links DNs to the theory of vector quantization (VQ) and $K$-means clustering, which opens up a new geometric avenue for studying how DNs organize signals in a hierarchical fashion. To validate the utility of the VQ interpretation, we develop and validate a new distance metric for signals and images that quantifies the difference between their VQ encodings. (This paper is a significantly expanded version of A Spline Theory of Deep Learning from ICML 2018.)
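In the notation assumed here (the paper's symbols may differ), a max-affine spline operator computes each output coordinate as a maximum over affine functions,

$$z_k(x) = \max_{r=1,\dots,R}\big(\langle a_{k,r},\, x\rangle + b_{k,r}\big),$$

so ReLU is recovered with $R=2$, $a_{k,1}=e_k$, $a_{k,2}=0$, $b_{k,1}=b_{k,2}=0$, and, conditioned on the spline region containing the input, the whole network collapses to a single signal-dependent affine map $x \mapsto A_x x + b_x$, which is the "simple affine transformation of the input" mentioned above.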

Journal ArticleDOI
TL;DR: In this article, a relative version of the abstract affine representability theorem in $\mathbb{A}^1$-homotopy theory was established, yielding representability theorems for generically trivial torsors under isotropic reductive groups.
Abstract: We establish a relative version of the abstract “affine representability” theorem in $\mathbb{A}^1$-homotopy theory from Part I of this paper. We then prove some $\mathbb{A}^1$-invariance statements for generically trivial torsors under isotropic reductive groups over infinite fields, analogous to the Bass-Quillen conjecture for vector bundles. Putting these ingredients together, we deduce representability theorems for generically trivial torsors under isotropic reductive groups and for associated homogeneous spaces in $\mathbb{A}^1$-homotopy theory.

Journal ArticleDOI
TL;DR: In this article, a back-stepping controller with prescribed performance for air-breathing hypersonic vehicles (AHVs) utilizing non-affine models is developed.
Abstract: This study develops a novel back-stepping controller with prescribed performance for air-breathing hypersonic vehicles (AHVs) utilizing non-affine models. For the velocity dynamics, a non-affine control law is addressed to achieve prescribed tracking performance. The altitude subsystem is rewritten as a strict feedback formulation to facilitate the back-stepping control system design via a model transformation approach. At each step of the back-stepping design, performance functions are constructed to force tracking errors to fall within prescribed boundaries, based on which desired transient performance and steady-state performance are guaranteed for both the velocity and altitude control subsystems. Furthermore, the developed controllers do not rely on an accurate model, which guarantees control laws with satisfactory robustness against unknown uncertainties. Meanwhile, the proposed control scheme can cope with unknown control gains. By the Lyapunov stability theory, the stability of the closed-loop control system is confirmed. Finally, numerical simulations are given for an AHV to validate the effectiveness of the proposed control approach.

Journal ArticleDOI
TL;DR: This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme to reduce the convergence time for the learning algorithm.
Abstract: This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in the event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

Journal ArticleDOI
TL;DR: In this article, a braided tensor category structure with a twist on a semisimple category of modules for an affine Lie algebra at an admissible level was constructed, and it was shown that this category is rigid and thus is a ribbon category.
Abstract: Using the tensor category theory developed by Lepowsky, Zhang and the second author, we construct a braided tensor category structure with a twist on a semisimple category of modules for an affine Lie algebra at an admissible level. We conjecture that this braided tensor category is rigid and thus is a ribbon category. We also give conjectures on the modularity of this category and on the equivalence with a suitable quantum group tensor category. In the special case that the affine Lie algebra is $${\widehat{\mathfrak{sl}}_2}$$ , we prove the rigidity and modularity conjectures.

Posted Content
TL;DR: This work proposes a self-supervised learning method for affine image registration on 3D medical images that achieves better overall performance on registration of images from different patients and modalities with 100x speed-up in execution time.
Abstract: In this work, we propose a self-supervised learning method for affine image registration on 3D medical images. Unlike optimisation-based methods, our affine image registration network (AIRNet) is designed to directly estimate the transformation parameters between two input images without using any metric, which represents the quality of the registration, as the optimising function. But since it is costly to manually identify the transformation parameters between any two images, we leverage the abundance of cheap unlabelled data to generate a synthetic dataset for the training of the model. Additionally, the structure of AIRNet enables us to learn the discriminative features of the images which are useful for registration purpose. Our proposed method was evaluated on magnetic resonance images of the axial view of human brain and compared with the performance of a conventional image registration method. Experiments demonstrate that our approach achieves better overall performance on registration of images from different patients and modalities with 100x speed-up in execution time.
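A minimal SciPy sketch of the self-supervised data generation described in the abstract (the sampling ranges, parameter layout, and function names are my assumptions, not the authors' pipeline): an unlabelled volume is warped by a random affine transform whose known parameters become the regression target.

```python
# Minimal sketch (assumption, not the authors' code) of synthetic training-pair
# generation for an affine registration network: sample a random 3D affine,
# warp an unlabelled volume with it, and regress the 12 known parameters.
import numpy as np
from scipy.ndimage import affine_transform

def make_training_pair(volume, rng, max_rot=0.1, max_scale=0.1, max_shift=5.0):
    """Return (fixed, moving, params) where params is the flattened 3x4 affine."""
    # small random perturbation of the identity (rotation/shear/scale assumed small)
    A = np.eye(3) + rng.uniform(-max_rot, max_rot, size=(3, 3))
    A *= 1.0 + rng.uniform(-max_scale, max_scale)
    t = rng.uniform(-max_shift, max_shift, size=3)
    moving = affine_transform(volume, A, offset=t, order=1)   # warped copy
    params = np.hstack([A.ravel(), t])                         # 12 regression targets
    return volume, moving, params

rng = np.random.default_rng(0)
fixed, moving, params = make_training_pair(np.random.rand(64, 64, 64), rng)
```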

Journal ArticleDOI
TL;DR: A case study of a 12 kV radial distribution system demonstrates that decentralized affine controllers can perform close to optimal; an efficient method is also provided to bound their suboptimality via the optimal solution of another finite-dimensional conic program.
Abstract: We consider the decentralized control of radial distribution systems with controllable photovoltaic inverters and energy storage resources. For such systems, we investigate the problem of designing fully decentralized controllers that minimize the expected cost of balancing demand, while guaranteeing the satisfaction of individual resource and distribution system voltage constraints. Employing a linear approximation of the branch flow model, we formulate this problem as the design of a decentralized disturbance-feedback controller that minimizes the expected value of a convex quadratic cost function, subject to robust convex quadratic constraints on the system state and input. As such problems are, in general, computationally intractable, we derive a tractable inner approximation to this decentralized control problem, which enables the efficient computation of an affine control policy via the solution of a finite-dimensional conic program. As affine policies are, in general, suboptimal for the family of systems considered, we provide an efficient method to bound their suboptimality via the optimal solution of another finite-dimensional conic program. A case study of a 12 kV radial distribution system demonstrates that decentralized affine controllers can perform close to optimal.
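The affine disturbance-feedback policies referred to here have the generic form below (a textbook form of such policies; the paper's exact decentralized information structure is not reproduced):

$$u_i(t) = \bar{u}_i(t) + \sum_{\tau = 1}^{t} K_i(t,\tau)\,\delta_i(\tau),$$

i.e., each resource applies a nominal input plus a linear correction in its locally observed disturbance history, and the design problem is to choose $\bar{u}_i$ and the gains $K_i$ by solving a finite-dimensional conic program.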

Proceedings ArticleDOI
11 Apr 2018
TL;DR: In this paper, the authors propose to decompose reach set computations such that set operations are performed in low dimensions, while matrix operations like exponentiation are carried out in the full dimension.
Abstract: Approximating the set of reachable states of a dynamical system is an algorithmic yet mathematically rigorous way to reason about its safety. Although progress has been made in the development of efficient algorithms for affine dynamical systems, available algorithms still lack scalability to ensure their wide adoption in the industrial setting. While modern linear algebra packages are efficient for matrices with tens of thousands of dimensions, set-based image computations are limited to a few hundred. We propose to decompose reach set computations such that set operations are performed in low dimensions, while matrix operations like exponentiation are carried out in the full dimension. Our method is applicable both in dense- and discrete-time settings. For a set of standard benchmarks, it shows a speed-up of up to two orders of magnitude compared to the respective state-of-the-art tools, with only modest losses in accuracy. For the dense-time case, we show an experiment with more than 10,000 variables, roughly two orders of magnitude higher than possible with previous approaches.
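One way to write the decomposition suggested by the abstract (my reading, with notation assumed): the full-dimensional discrete-time recurrence $\mathcal{X}_{k+1} = \Phi\,\mathcal{X}_k \oplus \mathcal{V}$ with $\Phi = e^{A\delta}$ is replaced by blockwise approximations

$$\hat{\mathcal{X}}^{(i)}_{k+1} = \bigoplus_{j} \Phi_{ij}\,\hat{\mathcal{X}}^{(j)}_{k} \oplus \mathcal{V}^{(i)},$$

so Minkowski sums and other set operations stay in the low block dimensions, while the matrix exponential $e^{A\delta}$ is still computed once in the full dimension.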