
Showing papers on "Linear map published in 2019"


Proceedings Article
01 Jan 2019
TL;DR: In this article, the authors present a general theory of group equivariant convolutional neural networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere.
Abstract: We present a general theory of Group equivariant Convolutional Neural Networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also answer a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? We show that such maps correspond one-to-one with generalized convolutions with an equivariant kernel, and characterize the space of such kernels.

206 citations


Proceedings Article
13 May 2019
TL;DR: In this paper, the authors consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear output layer.
Abstract: Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or \emph{equivariant} (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear output layer. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the \emph{equivariant} case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Additionally, unlike many previous works that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.

169 citations


Journal ArticleDOI
TL;DR: A third approach to deriving the Clarke and Park transformation matrices: a geometric interpretation that exploits properties of the linear transformation in the Cartesian representation. The paper introduces the locus diagram of a three-phase quantity and shows how these diagrams have applications in power quality.
Abstract: Transformations between abc, stationary dq0 ( $\alpha \beta 0$ ) and rotating dq0 reference-frames are used extensively in the analysis and control of three-phase technologies such as machines and inverters. Previous work on deriving the matrices describing these transformations follows one of two approaches. The first approach derives Clarke's matrix by modifying symmetrical components. Park's matrix can be subsequently found from a rotation matrix. The second approach derives Park's matrix using trigonometric projection by interpreting the transformation as a rotation in the plane of the cross-section of a machine. Then, Clarke's matrix can be found trivially using a reference angle of zero in Park's matrix. This paper presents a third approach to deriving the Clarke and Park transformation matrices: a geometric interpretation. The approach exploits properties of the linear transformation using the Cartesian representation. We introduce the locus diagram of a three-phase quantity and show how these diagrams have applications in power quality. We show that, unlike a phasor diagram, a single locus diagram can fully represent a three-phase system with harmonics.

76 citations
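The rotation relationship between the two matrices described above is easy to verify numerically. The following is an illustrative NumPy sketch (using the power-invariant scaling, which is one of several conventions; the paper itself derives the matrices geometrically): Park's matrix with a reference angle of zero reduces to Clarke's matrix, and a balanced three-phase set maps to a constant dq0 vector in the rotating frame.

```python
import numpy as np

# Power-invariant Clarke matrix: abc -> (alpha, beta, 0).
CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0],
    [1.0 / np.sqrt(2.0)] * 3,
])

def park(theta):
    """Park matrix: abc -> rotating dq0, built as a rotation applied after Clarke."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return rot @ CLARKE

def balanced_abc(theta):
    """Balanced three-phase quantities at electrical angle theta."""
    return np.array([np.cos(theta),
                     np.cos(theta - 2.0 * np.pi / 3.0),
                     np.cos(theta + 2.0 * np.pi / 3.0)])

# A balanced set maps to the constant vector (sqrt(3/2), 0, 0) in the
# rotating frame when the reference angle tracks the electrical angle.
dq0 = np.array([park(th) @ balanced_abc(th)
                for th in np.linspace(0.0, 2.0 * np.pi, 50)])
```

With `theta = 0` the rotation is the identity, which is the "trivial" derivation of Clarke's matrix from Park's matrix mentioned in the abstract.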


Book ChapterDOI
18 Aug 2019
TL;DR: This work puts forward the notion of subvector commitments (SVC), which allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and of the number of positions to be opened.
Abstract: We put forward the notion of subvector commitments (SVC): An SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and of the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups.

73 citations


Posted Content
TL;DR: The results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
Abstract: Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.

67 citations


Journal ArticleDOI
20 Apr 2019
TL;DR: This work proposes performing linear operations using complex optical media, such as multimode fibers or scattering media, as a computational platform driven by wavefront shaping, offering the prospect of reconfigurable, robust, and easy-to-fabricate linear optical analog computation units.
Abstract: Performing linear operations using optical devices is a crucial building block in many fields ranging from telecommunications to optical analog computation and machine learning. For many of these applications, key requirements are robustness to fabrication inaccuracies, reconfigurability, and scalability. We propose a way to perform linear operations using complex optical media such as multimode fibers or scattering media as a computational platform driven by wavefront shaping. Given a large random transmission matrix representing light propagation in such a medium, we can extract any desired smaller linear operator by finding suitable input and output projectors. We demonstrate this concept by finding input wavefronts using a spatial light modulator that cause the complex medium to act as a desired complex-valued linear operator on the optical field. We experimentally build several 16×16 operators and discuss the fundamental limits of the scalability of our approach. It offers the prospect of reconfigurable, robust, and easy-to-fabricate linear optical analog computation units.

62 citations
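Extracting a smaller operator from a large random transmission matrix, as described above, amounts to simple linear algebra: project onto the output modes of interest and solve a least-squares problem for the input wavefronts. A minimal numerical sketch, with a random Gaussian matrix standing in for a measured transmission matrix (the dimensions here are arbitrary choices, not those of the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix standing in for the medium's measured transmission
# matrix: 16 detected output modes, 256 controllable input modes.
n_out, n_in = 16, 256
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

# Desired complex-valued 4x4 operator to embed in the medium.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Output projector: keep the first 4 output modes. The "input projector" is a
# set of shaped input wavefronts X (one column per operator column) solving
# the least-squares problem (P_out T) X ~= M.
T_sub = T[:4, :]
X = np.linalg.pinv(T_sub) @ M

realized = T_sub @ X   # the linear operator the medium now implements
```

Because the number of controllable input modes far exceeds the operator size, the least-squares solution realizes the target operator essentially exactly, which is the regime the paper's scalability discussion concerns.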


Journal ArticleDOI
TL;DR: This paper demonstrates the experimental analysis of programming a 4×4 reconfigurable optical processor using a unitary transformation matrix implemented by a single layer neural network and achieves 72% classification accuracy compared to the 98.9% of the simulated optical neural network on a digital computer.
Abstract: Implementing any linear transformation matrix through the optical channels of an on-chip reconfigurable multiport interferometer has been emerging as a promising technique for various fields of study, such as information processing and optical communication systems. Recently, the use of multiport optical interferometric-based linear structures in neural networks has attracted a great deal of attention. Optical neural networks have proven to be promising in terms of computational speed and power efficiency, allowing for the increasingly large neural networks that are being created today. This paper demonstrates the experimental analysis of programming a 4×4 reconfigurable optical processor using a unitary transformation matrix implemented by a single layer neural network. To this end, the Mach-Zehnder interferometers (MZIs) in the structure are first experimentally calibrated to circumvent the random phase errors originating from fabrication process variations. The linear transformation matrix of the given application can be implemented by the successive multiplications of the unitary transformation matrices of the constituent MZIs in the optical structure. The required phase shifts to construct the linear transformation matrix by means of the optical processor are determined theoretically. Using this method, a single layer neural network is trained to classify a synthetic linearly separable multivariate Gaussian dataset on a conventional computer using a stochastic optimization algorithm. Additionally, the effect of the phase errors and uncertainties caused by the experimental equipment inaccuracies and the device components imperfections is also analyzed and simulated. Finally, the optical processor is experimentally programmed by applying the obtained phase shifts from the matrix decomposition process to the corresponding phase shifters in the device. The experimental results show that the optical processor achieves 72% classification accuracy compared to the 98.9% of the simulated optical neural network on a digital computer.

47 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze algorithms for approximating a function f(x) = Φx mapping ℝ^d to ℝ^d using deep linear neural networks, that learn a function h parameterized by matrices Θ1,…,ΘL and defined by h(x).
Abstract: We analyze algorithms for approximating a function f(x) = Φx mapping ℝ^d to ℝ^d using deep linear neural networks, that is, that learn a function h parameterized by matrices Θ1,…,ΘL and defined by h(x)...

42 citations


Journal ArticleDOI
TL;DR: Based on the hybrid modulation coupling (HMC) pattern, a class of higher-dimensional (HD) hyperchaotic maps is proposed using three one-dimensional (1D) seed maps.
Abstract: Based on the hybrid modulation coupling (HMC) pattern, a class of higher-dimensional (HD) hyperchaotic maps is proposed using three one-dimensional (1D) seed maps. The seed maps are chaotic maps or the combination of chaotic maps and non-chaotic maps. Taking the HMC of iterative chaotic map with infinite collapse (ICMIC), Sine map and a linear map (ISL-HMC) as an example, the equilibrium points are mathematically analyzed. The dynamical performance of the 3D ISL-HMC map is evaluated by phase diagram, Lyapunov exponents (LEs), bifurcation diagram and chaos diagram. Furthermore, compared with existing chaotic maps, complexity and distribution characteristic are analyzed. As application of the ISL-HMC map, a pseudorandom number generator (PRNG) is designed and tested by NIST SP 800-22 and TestU01. Experimental results show that the ISL-HMC map has rich dynamical behaviors and good randomness. So this class of HD hyperchaotic maps is a potential model for cryptography and other applications.

31 citations
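The exact HMC coupling rule is specific to the paper. Purely as a loose illustration of the ingredients, the sketch below iterates a 3D map built from the three named 1D seed maps (ICMIC, Sine, and a linear map) with a made-up cyclic coupling — this is not the ISL-HMC map itself — and checks that orbits remain bounded.

```python
import numpy as np

A1, K = 2.0, 0.5  # illustrative parameters, not taken from the paper

def icmic(u):
    # ICMIC seed map sin(a/u); guarded against division by zero.
    return np.sin(A1 / u) if abs(u) > 1e-12 else 0.0

def sine(u):
    # Sine seed map.
    return np.sin(np.pi * u)

def linear(u):
    # Linear seed map.
    return K * u

def step(state):
    # Hypothetical cyclic modulation coupling (NOT the paper's ISL-HMC rule):
    # each coordinate is produced by one seed map driven by another coordinate.
    x, y, z = state
    x_new = icmic(y)
    y_new = sine(z)
    z_new = (linear(x) + y_new) % 1.0
    return (x_new, y_new, z_new)

state = (0.3, 0.7, 0.9)
traj = []
for _ in range(1000):
    state = step(state)
    traj.append(state)
traj = np.array(traj)
```

Boundedness of the orbit is by construction here (each coordinate is a sine value or a modulo-1 quantity); the paper's dynamical analysis (Lyapunov exponents, bifurcation and chaos diagrams) is what establishes hyperchaos for the actual ISL-HMC map.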


Journal ArticleDOI
TL;DR: The authors construct a natural linear map from continuous valuations on convex functions to translation-invariant continuous valuations on convex bodies, and prove that it has dense image and infinite-dimensional kernel.
Abstract: The notion of a valuation on convex bodies is very classical. The notion of a valuation on a class of functions was recently introduced and studied by M. Ludwig and others. We study an explicit relation between continuous valuations on convex functions which are invariant under adding arbitrary linear functionals, and translation-invariant continuous valuations on convex bodies. More precisely, we construct a natural linear map from the former space to the latter and prove that it has dense image and infinite-dimensional kernel. The proof uses the author's irreducibility theorem and a few properties of the real Monge-Ampere operators due to A. D. Alexandrov and Z. Blocki. Furthermore, we show how to use complex, quaternionic, and octonionic Monge-Ampere operators to construct more examples of continuous valuations on convex functions in an analogous way.

30 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the solutions of infinite-dimensional linear inverse problems over Banach spaces, where the regularizer is defined as the total variation of a linear mapping of the function to recover, while the data-fitting term is a nearly arbitrary convex function.
Abstract: We study the solutions of infinite-dimensional linear inverse problems over Banach spaces. The regularizer is defined as the total variation of a linear mapping of the function to recover, while the data-fitting term is a nearly arbitrary convex function. The first contribution is about the solution's structure: we show that under suitable assumptions, there always exists an m-sparse solution, where m is the number of linear measurements of the signal. Our second contribution is about the computation of the solution. While most existing works first discretize the problem, we show that exact solutions of the infinite-dimensional problem can be obtained by solving two consecutive finite-dimensional convex programs. These results extend recent advances in the understanding of total-variation-regularized problems.

Journal ArticleDOI
TL;DR: In this paper, a proximal algorithm for minimizing objective functions consisting of three summands is proposed, i.e., the composition of a nonsmooth function with a linear operator, another nonsmooth function (with eac...
Abstract: We propose a proximal algorithm for minimizing objective functions consisting of three summands: the composition of a nonsmooth function with a linear operator, another nonsmooth function (with eac...

Journal ArticleDOI
TL;DR: In this article, the authors study the problem of perturbation of a quaternionic normal operator in a Hilbert space by making use of the concepts of $S$-spectrum and of slice hyperholomorphicity of the $S $-resolvent operators.
Abstract: The theory of quaternionic operators has applications in several different fields such as quantum mechanics, fractional evolution problems, and quaternionic Schur analysis, just to name a few. The main difference between complex and quaternionic operator theory lies in the definition of the spectrum. In fact, in quaternionic operator theory the classical notions of resolvent operator and of spectrum need to be replaced by the two $S$-resolvent operators and the $S$-spectrum. This is a consequence of the non-commutativity of the quaternionic setting. Indeed, the $S$-spectrum of a quaternionic linear operator $T$ is given by the non-invertibility of a second-order operator. This presents new challenges which make our approach to perturbation theory of quaternionic operators different from the classical case. In this paper we study the problem of perturbation of a quaternionic normal operator in a Hilbert space by making use of the concepts of $S$-spectrum and of slice hyperholomorphicity of the $S$-resolvent operators. For this new setting we prove results on the perturbation of quaternionic normal operators by operators belonging to a Schatten class and give conditions which guarantee the existence of a nontrivial hyperinvariant subspace of a quaternionic linear operator.

Journal ArticleDOI
TL;DR: The comparison of simulated results with existing algorithms shows that the proposed algorithm is more robust in encryption and better at rejecting noise during transmission.
Abstract: An innovative and highly efficient color image encryption technique based on the concept of linear transformation is presented in this paper. A 24-bit color image is split into its Red, Green, and Blue channels, and each channel is then permuted via cyclic shifts on rows and columns using chaotic sequences. For substitution, pseudo-random numbers are generated using chaotic maps, which are then built into pseudo-random matrices through linear transformations. These random matrices are combined with the permuted color channels under the Exclusive-OR (XOR) operation. The control parameters and initial conditions for the chaotic maps are obtained from the 256-bit hash value of the original image to resist chosen-plaintext attacks. The comparison of simulated results with existing algorithms shows that the proposed algorithm is more robust in encryption and better at rejecting noise during transmission. The proposed technique is well suited for real-time applications due to its efficiency.
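The permute-then-XOR structure described above can be illustrated with a toy sketch. Here the logistic map stands in for the paper's chaotic sequences, and only cyclic row shifts are used for the permutation — far simpler than the actual scheme, and with the key material (shifts and mask) returned directly rather than derived from an image hash:

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Byte keystream from the logistic map (illustrative stand-in for the
    paper's chaotic maps)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def encrypt_channel(channel, x0):
    h, w = channel.shape
    # Permutation: cyclic row shifts driven by the chaotic sequence.
    row_shifts = logistic_stream(x0, h) % w
    permuted = np.stack([np.roll(row, int(s))
                         for row, s in zip(channel, row_shifts)])
    # Substitution: XOR with a chaotic pseudo-random matrix.
    mask = logistic_stream(x0 / 2.0, h * w).reshape(h, w)
    return permuted ^ mask, row_shifts, mask

def decrypt_channel(cipher, row_shifts, mask):
    # Invert the XOR, then undo the row shifts.
    permuted = cipher ^ mask
    return np.stack([np.roll(row, -int(s))
                     for row, s in zip(permuted, row_shifts)])

# Round-trip on a small synthetic 8x8 channel.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, shifts, mask = encrypt_channel(img, 0.37)
restored = decrypt_channel(cipher, shifts, mask)
```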

Journal Article
TL;DR: This paper adopts the approach of using a sufficiently dense set of virtual observation locations where the constraint is required to hold, and derives the exact posterior of the constrained Gaussian process for a conjugate likelihood.
Abstract: This paper presents an approach for constrained Gaussian Process (GP) regression where we assume that a set of linear transformations of the process are bounded. It is motivated by machine learning applications for high-consequence engineering systems, where this kind of information is often made available from phenomenological knowledge. We consider a GP $f$ over functions on $\mathcal{X} \subset \mathbb{R}^{n}$ taking values in $\mathbb{R}$, where the process $\mathcal{L}f$ is still Gaussian when $\mathcal{L}$ is a linear operator. Our goal is to model $f$ under the constraint that realizations of $\mathcal{L}f$ are confined to a convex set of functions. In particular, we require that $a \leq \mathcal{L}f \leq b$, given two functions $a$ and $b$ where $a < b$ pointwise. This formulation provides a consistent way of encoding multiple linear constraints, such as shape-constraints based on e.g. boundedness, monotonicity or convexity. We adopt the approach of using a sufficiently dense set of virtual observation locations where the constraint is required to hold, and derive the exact posterior for a conjugate likelihood. The results needed for stable numerical implementation are derived, together with an efficient sampling scheme for estimating the posterior process.

Journal ArticleDOI
01 Sep 2019-Chaos
TL;DR: In this article, the structure and spectral properties of the Koopman operator in the presence of symmetries are studied, and it is shown that symmetry arguments simplify the analysis of nonlinear dynamical systems.
Abstract: Nonlinear dynamical systems with symmetries exhibit a rich variety of behaviors, often described by complex attractor-basin portraits and enhanced and suppressed bifurcations. Symmetry arguments provide a way to study these collective behaviors and to simplify their analysis. The Koopman operator is an infinite dimensional linear operator that fully captures a system's nonlinear dynamics through the linear evolution of functions of the state space. Importantly, in contrast with local linearization, it preserves a system's global nonlinear features. We demonstrate how the presence of symmetries affects the Koopman operator structure and its spectral properties. In fact, we show that symmetry considerations can also simplify finding the Koopman operator approximations using the extended and kernel dynamic mode decomposition methods (EDMD and kernel DMD). Specifically, representation theory allows us to demonstrate that an isotypic component basis induces a block diagonal structure in operator approximations, revealing hidden organization. Practically, if the symmetries are known, the EDMD and kernel DMD methods can be modified to give more efficient computation of the Koopman operator approximation and its eigenvalues, eigenfunctions, and eigenmodes. Rounding out the development, we discuss the effect of measurement noise.
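The EDMD approximation mentioned above is, at its core, a linear least-squares fit in a dictionary of observables. A minimal sketch (using the identity dictionary, so that on a linear system the Koopman matrix is recovered exactly; the paper's contribution — block-diagonal structure in an isotypic component basis — builds on this basic computation):

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Plain EDMD: least-squares Koopman matrix K with Psi(X) K ~= Psi(Y),
    where rows of X and Y are snapshot pairs x_k, x_{k+1}."""
    PX, PY = dictionary(X), dictionary(Y)
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K

# Sanity check on a linear system x_{k+1} = A x_k: with the identity (linear)
# dictionary, EDMD returns A^T, so the Koopman eigenvalues match those of A.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
Y = X @ A.T
K = edmd(X, Y, dictionary=lambda Z: Z)
```

For genuinely nonlinear dynamics one would use a richer dictionary (monomials, radial basis functions, or — per the paper — a basis organized by isotypic components of the symmetry group), and the same least-squares problem yields the finite-dimensional Koopman approximation.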

Journal ArticleDOI
TL;DR: In this article, a method for the fast computation of the eigenpairs of a bijective positive symmetric linear operator is presented. The method is based on a combination of operator adapted wavelets (ga...
Abstract: We present a method for the fast computation of the eigenpairs of a bijective positive symmetric linear operator $\mathcal{L}$. The method is based on a combination of operator adapted wavelets (ga...

Journal ArticleDOI
TL;DR: In this paper, the high-dimensional Hausdorff operators, defined via a general linear mapping, and their commutators on the weighted Morrey spaces in the setting of the Heisenberg group were studied.
Abstract: In this paper, we study the high-dimensional Hausdorff operators, defined via a general linear mapping $A$, and their commutators on the weighted Morrey spaces in the setting of the Heisenberg group. Particularly, under some assumption on the mapping $A$, we establish their sharp boundedness on the power weighted Morrey spaces.

Journal ArticleDOI
TL;DR: In this paper, the boundedness of the Hausdorff operator, defined by means of a linear transformation A, on the weighted p-adic Morrey and weighted p-adic Herz-type spaces is established.
Abstract: In the present article, we establish the boundedness of the Hausdorff operator, defined by means of a linear transformation A, on the weighted p-adic Morrey and weighted p-adic Herz-type spaces. Also, by imposing some special conditions on A, we discuss the sharpness of the results presented in this article.

Journal ArticleDOI
TL;DR: This paper proves that the linear transformation 𝒜(x) = Ax on ℂn has the (asymptotic) average shadowing property if and only if A is hyperbolic, where A is a nonsingular matrix, giving a positive answer to a question in [Lee, 2012a].
Abstract: This paper proves that the linear transformation 𝒜(x) = Ax on ℂn has the (asymptotic) average shadowing property if and only if A is hyperbolic, where A is a nonsingular matrix, giving a positive a...
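The hyperbolicity condition in the result above — no eigenvalue of the nonsingular matrix A on the unit circle — is straightforward to test numerically. A small sketch contrasting a hyperbolic matrix with a rotation (whose eigenvalues ±i lie on the unit circle):

```python
import numpy as np

def is_hyperbolic(A, tol=1e-9):
    """A nonsingular matrix is hyperbolic iff no eigenvalue has modulus 1."""
    moduli = np.abs(np.linalg.eigvals(A))
    return bool(np.all(np.abs(moduli - 1.0) > tol))

hyperbolic = np.diag([0.5, 2.0])       # eigenvalues 0.5 and 2, off the unit circle
rotation = np.array([[0.0, -1.0],      # eigenvalues +i and -i, on the unit circle
                     [1.0,  0.0]])
```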

Journal ArticleDOI
TL;DR: This paper considers the weighted horizontal linear complementarity problem in the setting of Euclidean Jordan algebras and establishes some existence and uniqueness results and discusses the solvability of wHLCPs under nonzero (topological) degree conditions.
Abstract: A weighted complementarity problem is to find a pair of vectors belonging to the intersection of a manifold and a cone such that the product of the vectors in a certain algebra equals a given weight vector. If the weight vector is zero, we get a complementarity problem. Examples of such problems include the Fisher market equilibrium problem and the linear programming and weighted centering problem. In this paper we consider the weighted horizontal linear complementarity problem in the setting of Euclidean Jordan algebras and establish some existence and uniqueness results. For a pair of linear transformations on a Euclidean Jordan algebra, we introduce the concepts of $$\mathbf{R}_0$$ , $$\mathbf{R}$$ , and $$\mathbf{P}$$ properties and discuss the solvability of wHLCPs under nonzero (topological) degree conditions. A uniqueness result is stated in the setting of $${\mathbb {R}}^{n}$$ . We show how our results naturally lead to interior point systems.

Patent
14 Feb 2019
TL;DR: In this paper, a coding device for coding a moving image derives a prediction error in an image constituting the moving image by subtracting, from the image, a prediction image of the image on the basis of a linear transformation base.
Abstract: This disclosure provides a coding device and the like which are capable of appropriately performing processing relating to transformation. This coding device for coding a moving image derives a prediction error in an image constituting the moving image by subtracting, from the image, a prediction image of the image, on the basis of a linear transformation base that is a transformation base of a linear transformation to be performed on the prediction error, determines a quadratic transformation base that is a transformation base of a quadratic transformation to be performed on the result of the linear transformation, performs the linear transformation on the prediction error using the linear transformation base, performs the quadratic transformation on the result of the linear transformation using the quadratic transformation base, quantizes the result of the quadratic transformation, and codes, as image data, the result of the quantization.

Journal ArticleDOI
TL;DR: It is shown that the linear operator $S_\delta$ acting by averaging a function over a Hamming sphere of radius $\delta n$ has a dimension-independent bound on the norm $L_p \to L_2$ with $p = 1+(1-2 \delta)^2$.
Abstract: Consider the linear space of functions on the binary hypercube and the linear operator $S_\delta$ acting by averaging a function over a Hamming sphere of radius $\delta n$ around every point. It is...

Journal ArticleDOI
TL;DR: In this paper, it has been shown that the linearity assumption may be replaced by additivity, and that the Koldobsky theorem can be proved for real and complex spaces.

Journal ArticleDOI
TL;DR: A linear map between matrix spaces is positive if it maps positive semidefinite matrices to positive semidefinite ones, and is called completely positive if all its ampliations are positive.
Abstract: A linear map between matrix spaces is positive if it maps positive semidefinite matrices to positive semidefinite ones, and is called completely positive if all its ampliations are positive. In this article quantitative bounds on the fraction of positive maps that are completely positive are proved. The main tools are real algebraic geometry techniques developed by Blekherman to study the gap between positive polynomials and sums of squares. Finally, an algorithm to produce positive maps which are not completely positive is given.
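The canonical witness for the gap studied above is the transpose map: it is positive but not completely positive. A short numerical check makes this concrete by applying its ampliation (the partial transpose) to the projector onto a maximally entangled vector:

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness of a (Hermitian) matrix."""
    return bool(np.all(np.linalg.eigvalsh((M + M.conj().T) / 2) > -tol))

# Positivity: transposition sends every PSD matrix to a PSD matrix.
rho = np.array([[2.0, 1.0],
                [1.0, 1.0]])
transposed_ok = is_psd(rho) and is_psd(rho.T)

# Failure of complete positivity: the ampliation id (x) transpose, i.e. the
# partial transpose, applied to the projector onto (|00> + |11>)/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
P = np.outer(bell, bell)   # rank-one PSD projector on C^2 (x) C^2
# Partial transpose on the second tensor factor: swap indices j and l
# in P[(i,j),(k,l)].
pt = P.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
```

The partially transposed projector has eigenvalue -1/2, so the ampliation of the transpose map is not positive, even though the map itself is.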

Journal ArticleDOI
Xiang Fan1
TL;DR: Up to linear transformations, this work gives a classification of all permutation polynomials of degree $7$ over $\mathbb{F}_{q}$ for any odd prime power $q$ with the help of the SageMath software.

Journal ArticleDOI
TL;DR: This paper aims to investigate the numerical approximation of a general second order parabolic stochastic partial differential equation driven by multiplicative and additive noise under more relaxed conditions and achieves optimal convergence orders which depend on the regularity of the noise and the initial data.

Journal ArticleDOI
TL;DR: It is shown that Chen et al.'s schemes for outsourcing linear regression computation to the cloud are questionable, and that the linear transformation used for masking is very vulnerable to statistical attack.
Abstract: We show that Chen et al.'s schemes [IEEE TCC, 2(4), 2014, 499-508] for outsourcing linear regression computation to the cloud are questionable. In scheme 1, the client has to generate an orthogonal matrix. Its computational complexity is almost equal to that of solving a linear regression problem locally. In such a case, the client has no need to outsource the computation to the cloud. In scheme 2, it masks a matrix by multiplying two diagonal matrices. The linear transformation is very vulnerable to statistical attack.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a linear map φ : g → g is a local automorphism if and only if it is an automorphism or an anti-automorphism.

Journal ArticleDOI
TL;DR: This work greatly simplifies the representations in deep learning, and opens a possible route toward establishing a framework of modern neural networks which might be simpler and cheaper, but more efficient.
Abstract: A deep neural network is a parametrization of a multilayer mapping of signals in terms of many alternatively arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully connected as well as convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters but not its prediction power is an important but challenging problem toward the establishment of an optimized scheme in training efficiently these parameters and in lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing linear transformations with matrix product operators (MPOs), which is a tensor network originally proposed in physics to characterize the short-range entanglement in one-dimensional quantum states. We have tested this approach in five typical neural networks, including FC2, LeNet-5, VGG, ResNet, and DenseNet on two widely used data sets, namely, MNIST and CIFAR-10, and found that this MPO representation indeed sets up a faithful and efficient mapping between input and output signals, which can keep or even improve the prediction accuracy with a dramatically reduced number of parameters. Our method greatly simplifies the representations in deep learning, and opens a possible route toward establishing a framework of modern neural networks which might be simpler and cheaper, but more efficient.
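In the simplest (two-site) case, the MPO representation of a weight matrix reduces to a truncated SVD over regrouped indices, i.e. a sum of Kronecker factors. An illustrative sketch of that special case (not the paper's multi-site construction or its training scheme):

```python
import numpy as np

def kron_factors(W, m1, n1, m2, n2, rank):
    """Approximate an (m1*m2) x (n1*n2) matrix W as a sum of `rank`
    Kronecker products A_k (x) B_k -- a two-site MPO via one SVD."""
    # Regroup W[(i1 i2),(j1 j2)] into G[(i1 j1),(i2 j2)] and truncate its SVD.
    G = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    As = [(s[k] * U[:, k]).reshape(m1, n1) for k in range(rank)]
    Bs = [Vt[k].reshape(m2, n2) for k in range(rank)]
    return As, Bs

def reconstruct(As, Bs):
    return sum(np.kron(A, B) for A, B in zip(As, Bs))

# A matrix that *is* a single Kronecker product is recovered exactly at
# rank 1, with 3*4 + 5*6 = 42 parameters instead of 15*24 = 360.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((3, 4))
B0 = rng.standard_normal((5, 6))
W = np.kron(A0, B0)
As, Bs = kron_factors(W, 3, 4, 5, 6, rank=1)
W_hat = reconstruct(As, Bs)
```

For a generic weight matrix the truncated rank trades accuracy against parameter count, which is the compression knob the paper tunes per layer; longer MPO chains generalize this single SVD to a sequence of SVDs.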