
Showing papers on "Basis (linear algebra) published in 2012"


Book
02 Dec 2012
TL;DR: Huzinaga et al. as mentioned in this paper provided information pertinent to the Gaussian basis sets, with emphasis on lithium, radon, and important ions, and discussed the polarization functions prepared for lithium through radon for further improvement of the basis sets.
Abstract: Physical Sciences Data, Volume 16: Gaussian Basis Sets for Molecular Calculations (S. Huzinaga, 2012) provides information pertinent to the Gaussian basis sets, with emphasis on lithium, radon, and important ions. This book discusses the polarization functions prepared for lithium through radon for further improvement of the basis sets. Organized into three chapters, this volume begins with an overview of the basis set for the most stable negative and positive ions. This text then explores the total atomic energies given by the basis sets. Other chapters consider the distinction between diffuse functions and polarization functions. This book presents as well the exponents of polarization functions. The final chapter deals with the Gaussian basis sets. This book is a valuable resource for chemists, scientists, and research workers.

1,798 citations


Journal ArticleDOI
TL;DR: In this paper, the authors employ the resolution of identity (RI) technique to facilitate the treatment of both the two-electron Coulomb repulsion integrals (required in all these approaches) and the linear density-response function (required for RPA and $GW$); these quantities can in turn be expanded in a set of auxiliary basis functions (ABFs).
Abstract: Efficient implementations of electronic structure methods are essential for first-principles modeling of molecules and solids. We here present a particularly efficient common framework for methods beyond semilocal density-functional theory, including Hartree-Fock (HF), hybrid density functionals, random-phase approximation (RPA), second-order Moller-Plesset perturbation theory (MP2), and the $GW$ method. This computational framework allows us to use compact and accurate numeric atom-centered orbitals (popular in many implementations of semilocal density-functional theory) as basis functions. The essence of our framework is to employ the "resolution of identity (RI)" technique to facilitate the treatment of both the two-electron Coulomb repulsion integrals (required in all these approaches) and the linear density-response function (required for RPA and $GW$). This is possible because these quantities can be expressed in terms of products of single-particle basis functions, which can in turn be expanded in a set of auxiliary basis functions (ABFs). The construction of ABFs lies at the heart of the RI technique, and here we propose a simple prescription for constructing the ABFs which can be applied regardless of whether the underlying radial functions have a specific analytical shape (e.g., Gaussian) or are numerically tabulated. We demonstrate the accuracy of our RI implementation for Gaussian and NAO basis functions, as well as the convergence behavior of our NAO basis sets for the above-mentioned methods. Benchmark results are presented for the ionization energies of 50 selected atoms and molecules from the G2 ion test set as obtained with $GW$ and MP2 self-energy methods, and the G2-I atomization energies as well as the S22 molecular interaction energies as obtained with the RPA method.

462 citations
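
As a concrete illustration of the RI identity at the heart of this framework, here is a minimal NumPy sketch that assembles approximate four-center Coulomb integrals from precomputed three-center integrals (ij|P) and the auxiliary Coulomb metric (P|Q). All array names are illustrative assumptions; this is a sketch of the standard RI formula, not the paper's NAO implementation.

```python
import numpy as np

def ri_four_center(t3c, metric):
    """RI approximation of the two-electron Coulomb integrals:
        (ij|kl) ~= sum_{P,Q} (ij|P) [V^{-1}]_{PQ} (Q|kl),  V_{PQ} = (P|Q).
    t3c:    (n, n, naux) three-center integrals (ij|P)   [assumed given]
    metric: (naux, naux) Coulomb metric (P|Q)            [assumed given]
    """
    n, _, naux = t3c.shape
    t2 = t3c.reshape(n * n, naux)
    # Solve V x = T^T rather than forming V^{-1} explicitly; production
    # codes would typically use a Cholesky factor of the metric.
    x = np.linalg.solve(metric, t2.T)          # shape (naux, n*n)
    return (t2 @ x).reshape(n, n, n, n)
```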


Journal ArticleDOI
TL;DR: It is shown that the construction of classical hierarchical B-splines can be suitably modified to define locally supported basis functions that form a partition of unity, by reducing the support of basis functions defined on coarse grids according to the finer levels in the hierarchy of splines.

454 citations


Journal ArticleDOI
TL;DR: In this article, a local reduced-order basis is proposed for nonlinear computational fluid and fluid-structure-electric interaction problems, which is particularly suited for problems characterized by different physical regimes, parameter variations, or moving features such as discontinuities and fronts.
Abstract: A new approach for the dimensional reduction via projection of nonlinear computational models based on the concept of local reduced-order bases is presented. It is particularly suited for problems characterized by different physical regimes, parameter variations, or moving features such as discontinuities and fronts. Instead of approximating the solution of interest in a fixed lower-dimensional subspace of global basis vectors, the proposed model order reduction method approximates this solution in a lower-dimensional subspace generated by the most appropriate local basis vectors. To this effect, the solution space is partitioned into subregions, and a local reduced-order basis is constructed and assigned to each subregion offline. During the online incremental solution of the reduced problem, a local basis is chosen according to the subregion of the solution space where the current high-dimensional solution lies. This is achievable in real time because the computational complexity of the selection algorithm scales with the dimension of the lower-dimensional solution space. Because it is also applicable to the process of hyper reduction, the proposed method for nonlinear model order reduction is computationally efficient. Its potential for achieving large speedups while maintaining good accuracy is demonstrated for two nonlinear computational fluid and fluid-structure-electric interaction problems.

402 citations
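
The offline/online split described above can be sketched in a few lines: cluster solution snapshots into subregions, compress each cluster with a truncated SVD (POD), and pick the nearest local basis online. This is a schematic reconstruction under assumed inputs, not the authors' code; for simplicity the online selection here uses the full state, whereas the paper performs it at a cost scaling with the reduced dimension.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_local_bases(snapshots, n_clusters=3, rank=10, seed=0):
    """Offline: partition snapshots (columns) into subregions of the
    solution space and compress each with a truncated SVD (POD)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    km.fit(snapshots.T)
    bases = []
    for c in range(n_clusters):
        S_c = snapshots[:, km.labels_ == c]
        U, _, _ = np.linalg.svd(S_c, full_matrices=False)
        bases.append(U[:, :min(rank, U.shape[1])])
    return km, bases

def select_basis(km, bases, state):
    """Online: use the local basis of the subregion containing the
    current state (nearest cluster centroid)."""
    return bases[km.predict(state.reshape(1, -1))[0]]
```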


Journal ArticleDOI
TL;DR: Under natural hypotheses on the set of all solutions to the problem obtained when the parameter varies, it is proved that three greedy algorithms converge; the last algorithm, based on the use of an a posteriori estimator, is the approach actually employed in the calculations.
Abstract: The convergence and efficiency of the reduced basis method used for the approximation of the solutions to a class of problems written as a parametrized PDE depend heavily on the choice of the elements that constitute the "reduced basis". The purpose of this paper is to analyze the a priori convergence for one of the approaches used for the selection of these elements, the greedy algorithm. Under natural hypotheses on the set of all solutions to the problem obtained when the parameter varies, we prove that three greedy algorithms converge; the last algorithm, based on the use of an a posteriori estimator, is the approach actually employed in the calculations.

308 citations
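
A minimal sketch of the (strong) greedy selection analyzed above, applied to a finite matrix of precomputed solution snapshots. The paper's practical variant replaces the exact projection error with an a posteriori estimator, which is not shown here; all names are illustrative.

```python
import numpy as np

def strong_greedy(snapshots, n_basis, tol=1e-8):
    """Greedily add the snapshot that is worst approximated by the
    current reduced space, orthonormalizing as we go."""
    U = np.zeros((snapshots.shape[0], 0))
    picked = []
    for _ in range(n_basis):
        resid = snapshots - U @ (U.T @ snapshots)   # projection error
        errs = np.linalg.norm(resid, axis=0)
        j = int(np.argmax(errs))
        if errs[j] < tol:
            break
        picked.append(j)
        U = np.column_stack([U, resid[:, j] / errs[j]])  # Gram-Schmidt step
    return U, picked
```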


Posted Content
TL;DR: This work proposes a framework for multi-task learning that enables one to selectively share the information across the tasks, based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases.
Abstract: In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.

288 citations
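
The model in this abstract, each task weight vector w_t = L s_t with a shared basis L and sparse coefficients s_t, can be sketched with plain alternating proximal-gradient steps. This is an illustrative reconstruction assuming a squared loss and a fixed step size, not the authors' algorithm; all names are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_shared_basis(X, Y, k=5, lam=0.1, step=1e-3, n_iter=200, seed=0):
    """Alternating proximal-gradient sketch for w_t = L @ s_t with sparse
    s_t. X: list of task design matrices, Y: list of task targets."""
    d, T = X[0].shape[1], len(X)
    rng = np.random.default_rng(seed)
    L, S = rng.standard_normal((d, k)), np.zeros((k, T))
    for _ in range(n_iter):
        for t in range(T):                      # sparse coefficients
            r = X[t] @ (L @ S[:, t]) - Y[t]
            g = L.T @ (X[t].T @ r)
            S[:, t] = soft_threshold(S[:, t] - step * g, step * lam)
        for t in range(T):                      # shared basis tasks
            r = X[t] @ (L @ S[:, t]) - Y[t]
            L -= step * np.outer(X[t].T @ r, S[:, t])
    return L, S
```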


Journal ArticleDOI
TL;DR: A new algorithm is introduced, the PODEI-greedy algorithm, which constructs the reduced basis spaces for the empirical interpolation and for the numerical scheme in a synchronized way, and it is shown that the resulting reduced scheme is able to capture the evolution of both smooth and discontinuous solutions.
Abstract: We present a new approach to treating nonlinear operators in reduced basis approximations of parametrized evolution equations. Our approach is based on empirical interpolation of nonlinear differential operators and their Fréchet derivatives. Efficient offline/online decomposition is obtained for discrete operators that allow an efficient evaluation for a certain set of interpolation functionals. An a posteriori error estimate for the resulting reduced basis method is derived and analyzed numerically. We introduce a new algorithm, the PODEI-greedy algorithm, which constructs the reduced basis spaces for the empirical interpolation and for the numerical scheme in a synchronized way. The approach is applied to nonlinear parabolic and hyperbolic equations based on explicit or implicit finite volume discretizations. We show that the resulting reduced scheme is able to capture the evolution of both smooth and discontinuous solutions. In the case of symmetries of the problem, the approach realizes an automatic and intuitive space compression or even space-dimensionality reduction. We perform empirical investigations of the error convergence and run-times. In all cases we obtain a good run-time acceleration.

235 citations
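
The empirical interpolation ingredient of the scheme can be sketched as the classical greedy selection of basis functions and "magic" interpolation points from snapshots of the nonlinear term. This is a generic EIM sketch under assumed snapshot data, not the PODEI-greedy synchronization itself.

```python
import numpy as np

def eim(snapshots, m):
    """Greedy empirical interpolation: select basis functions and
    interpolation ("magic") points from snapshot columns."""
    Q, pts = [], []
    for _ in range(m):
        if Q:
            B = np.array([[q[p] for q in Q] for p in pts])  # B[i,k] = q_k(p_i)
            coef = np.linalg.solve(B, snapshots[pts, :])
            resid = snapshots - np.column_stack(Q) @ coef
        else:
            resid = snapshots.copy()
        j = int(np.argmax(np.abs(resid).max(axis=0)))  # worst snapshot
        r = resid[:, j]
        p = int(np.argmax(np.abs(r)))                  # new magic point
        Q.append(r / r[p])
        pts.append(p)
    return np.column_stack(Q), pts
```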



Journal ArticleDOI
TL;DR: In this paper, an inverse problem of recovering a spatially varying potential term in a one-dimensional time-fractional diffusion equation from the flux measurements taken at a single fixed time corresponding to a given set of input sources is studied.
Abstract: We study an inverse problem of recovering a spatially varying potential term in a one-dimensional time-fractional diffusion equation from the flux measurements taken at a single fixed time corresponding to a given set of input sources. The unique identifiability of the potential is shown for two cases, i.e. the flux at one end and the net flux, provided that the set of input sources forms a complete basis in L2(0, 1). An algorithm of the quasi-Newton type is proposed for the efficient and accurate reconstruction of the coefficient from finite data, and the injectivity of the Jacobian is discussed. Numerical results for both exact and noisy data are presented.

156 citations


Journal ArticleDOI
TL;DR: An approach in which a guarantee is given on the convergence rate thanks to an aggregation algorithm that allows an explicit control of the location of the eigenvalues of the preconditioned matrix is developed.
Abstract: We consider the iterative solution of large sparse linear systems arising from the upwind finite difference discretization of convection-diffusion equations. The system matrix is then an M-matrix with nonnegative row sum, and, further, when the convective flow has zero divergence, the column sum is also nonnegative, possibly up to a small correction term. We investigate aggregation-based algebraic multigrid methods for this class of matrices. A theoretical analysis is developed for a simplified two-grid scheme with one damped Jacobi postsmoothing step. An uncommon feature of this analysis is that it applies directly to problems with variable coefficients; e.g., to problems with recirculating convective flow. On the basis of this theory, we develop an approach in which a guarantee is given on the convergence rate thanks to an aggregation algorithm that allows an explicit control of the location of the eigenvalues of the preconditioned matrix. Some issues that remain beyond the analysis are discussed in the...

152 citations
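
A minimal dense sketch of the simplified two-grid scheme analyzed in the paper: an aggregation-based coarse-grid correction followed by one damped Jacobi post-smoothing step. The prolongation P is assumed given (for plain aggregation, P[i, c] = 1 exactly when fine node i belongs to aggregate c); the eigenvalue-controlled aggregation algorithm itself is not reproduced.

```python
import numpy as np

def two_grid(A, b, P, omega=2.0 / 3.0, n_iter=50):
    """Coarse-grid correction through prolongation P, followed by one
    damped-Jacobi post-smoothing step per iteration."""
    x = np.zeros_like(b)
    Ac = P.T @ A @ P                 # Galerkin coarse-grid operator
    d_inv = 1.0 / np.diag(A)
    for _ in range(n_iter):
        r = b - A @ x
        x = x + P @ np.linalg.solve(Ac, P.T @ r)   # coarse correction
        x = x + omega * d_inv * (b - A @ x)        # damped Jacobi smoothing
    return x
```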


Journal ArticleDOI
TL;DR: A Bayesian formulation that is ideally suited to combining information of physical and probabilistic natures is presented, which results in a robust regularization criterion with no more than one minimum.
Abstract: The reconstruction of acoustical sources from discrete field measurements is a difficult inverse problem that has been approached in different ways. Classical methods (beamforming, near-field acoustical holography, inverse boundary elements, wave superposition, equivalent sources, etc.) all consist—implicitly or explicitly—in interpolating the measurements onto some spatial functions whose propagation is known and in reconstructing the source field by retropropagation. This raises the fundamental question as to whether, for a given source topology and array geometry, there exists an optimal interpolation basis which minimizes the reconstruction error. This paper provides a general answer to this question, by proceeding from a Bayesian formulation that is ideally suited to combining information of physical and probabilistic natures. The main findings are the following: (1) the optimal basis functions are the M eigen-functions of a specific continuous-discrete propagation operator, with M being the number of microphones in the array; (2) the a priori inclusion of spatial information on the source field causes super-resolution according to a phenomenon coined "Bayesian focusing"; (3) the approach is naturally endowed with an internal regularization mechanism and results in a robust regularization criterion with no more than one minimum; (4) it admits classical methods as particular cases.

Journal ArticleDOI
TL;DR: In this paper, the Wong sequences of subspaces are investigated and invoked to decompose K^n into V* ⊕ W*, where any bases of the linear spaces V* and W* transform the matrix pencil into the quasi-Weierstraß form.

01 Jan 2012
TL;DR: This letter generalizes the theory of sparse representations of vectors to multiway arrays (tensors)—signals with a multidimensional structure—by using the Tucker model to derive a very fast and memory-efficient algorithm called N-BOMP (N-way block OMP), and theoretically demonstrates that under the block-sparsity assumption, this algorithm not only has a considerably lower complexity but is also more precise than the classic OMP algorithm.
Abstract: Recently, there has been great interest in sparse representations of signals under the assumption that signals (datasets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for the case of one-dimensional signals (vectors), which involves finding the sparsest solution of an underdetermined linear system of algebraic equations. In this paper, we generalize the theory of sparse representations of vectors to multiway arrays (tensors), i.e. signals with a multidimensional structure, by using the Tucker model. Thus, the problem is reduced to solving a large-scale underdetermined linear system of equations possessing a Kronecker structure, for which we have developed a greedy algorithm called Kronecker-OMP as a generalization of the classical Orthogonal Matching Pursuit (OMP) algorithm for vectors. We also introduce the concept of multiway block-sparsity.
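
For reference, the classical OMP algorithm that Kronecker-OMP generalizes can be written in a few lines. This vanilla sketch works on an explicit dictionary, whereas Kronecker-OMP exploits the Kronecker structure D = D_N ⊗ ... ⊗ D_1 to avoid ever forming D; names here are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then refit on the support by least squares.
    D: (m, n) dictionary with unit-norm columns, y: (m,) signal."""
    support, r = [], y.copy()
    x_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x_s
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x
```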

Journal ArticleDOI
TL;DR: It is shown that the full spatial entanglement is indeed accessible experimentally, and practicable radial detection modes with negligible cross correlations are found, which allows to demonstrate hybrid azimuthal-radial quantum correlations in a Hilbert space with more than 100 dimensions per photon.
Abstract: Spatially entangled twin photons allow the study of high-dimensional entanglement, and the Laguerre-Gauss modes are the most commonly used basis to discretize the single-photon mode spaces. In this basis, to date only the azimuthal degree of freedom has been investigated experimentally due to its fundamental and experimental simplicity. We show that the full spatial entanglement is indeed accessible experimentally; i.e., we have found practicable radial detection modes with negligible cross correlations. This allows us to demonstrate hybrid azimuthal-radial quantum correlations in a Hilbert space with more than 100 dimensions per photon.

Journal ArticleDOI
TL;DR: The analysis suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions.
Abstract: The idea of ℓ1-minimization is the basis of the widely adopted compressive sensing method for function approximation. In this paper, we extend its application to high-dimensional stochastic collocation methods. To facilitate practical implementation, we employ orthogonal polynomials, particularly Legendre polynomials, as basis functions, and focus on the cases where the dimensionality is high such that one cannot afford to construct high-degree polynomial approximations. We provide theoretical analysis on the validity of the approach. The analysis also suggests that using the Chebyshev measure to precondition the ℓ1-minimization, which has been shown to be numerically advantageous in one dimension in the literature, may in fact become less efficient in high dimensions. Numerical tests are provided to examine the performance of the methods and validate the theoretical findings.
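
The ℓ1-minimization at the core of the method is basis pursuit, min ||x||_1 subject to Ax = b, which can be recast as a linear program; in the paper's setting the rows of A would be Legendre polynomials evaluated at collocation points. A minimal SciPy sketch of the generic solver follows, not the paper's preconditioned variant.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1  s.t.  A x = b, via the standard split x = u - v with
    u, v >= 0, solved as a linear program."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]
```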

Journal ArticleDOI
TL;DR: In this article, a parametric deterministic formulation of Bayesian inverse problems with an input parameter from infinite-dimensional, separable Banach spaces is presented, and the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence is analyzed.
Abstract: We present a parametric deterministic formulation of Bayesian inverse problems with an input parameter from infinite-dimensional, separable Banach spaces. In this formulation, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown, parametric deterministic coefficients from noisy observations comprising linear functionals of the solution. We prove a generalized polynomial chaos representation of the posterior density with respect to the prior measure, given noisy observational data. We analyze the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence. The first step in this process is to estimate the fluctuations in the prior. We exhibit sufficient conditions on the prior model in order for approximations of the posterior density to converge at a given algebraic rate, in terms of the number N of unknowns appearing in the parametric representation of the prior measure. Similar sparsity and approximation results are also exhibited for the solution and covariance of the elliptic partial differential equation under the posterior. These results then form the basis for efficient uncertainty quantification, in the presence of data with noise.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the convergence of the correlation energy to the complete-basis-set (CBS) limit in methods utilizing plane-wave wave-function expansions and proposed several alternatives based on the momentum transfer vector, which greatly improved the rate of convergence.
Abstract: Using the finite simulation-cell homogeneous electron gas (HEG) as a model, we investigate the convergence of the correlation energy to the complete-basis-set (CBS) limit in methods utilizing plane-wave wave-function expansions. Simple analytic and numerical results from second-order Moller-Plesset theory (MP2) suggest a 1/M decay of the basis-set incompleteness error where M is the number of plane waves used in the calculation, allowing for straightforward extrapolation to the CBS limit. As we shall show, the choice of basis-set truncation when constructing many-electron wave functions is far from obvious, and here we propose several alternatives based on the momentum transfer vector, which greatly improve the rate of convergence. This is demonstrated for a variety of wave-function methods, from MP2 to coupled-cluster doubles theory and the random-phase approximation plus second-order screened exchange. Finite basis-set energies are presented for these methods and compared with exact benchmarks. A transformation can map the orbitals of a general solid state system onto the HEG plane-wave basis and thereby allow application of these methods to more realistic physical problems. We demonstrate this explicitly for solid and molecular lithium hydride.
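
The 1/M extrapolation suggested by the MP2 analysis amounts to a two-parameter linear fit. A minimal sketch, assuming correlation energies computed at several plane-wave counts M; function and variable names are illustrative.

```python
import numpy as np

def cbs_extrapolate(n_pw, energies):
    """Least-squares fit of E(M) = E_CBS + a / M to correlation energies
    computed with M plane waves; returns the extrapolated E_CBS."""
    M = np.asarray(n_pw, dtype=float)
    design = np.column_stack([np.ones_like(M), 1.0 / M])
    (e_cbs, _a), *_ = np.linalg.lstsq(design,
                                      np.asarray(energies, dtype=float),
                                      rcond=None)
    return e_cbs
```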

Journal ArticleDOI
TL;DR: An approach that combines a suitable low‐dimensional parametrization of the geometry with reduced basis methods to solve systems of parametrized partial differential equations is developed and applied to find the optimal shape of aorto‐coronaric bypass anastomoses based on vorticity minimization in the down‐field region.
Abstract: In this paper we further develop an approach previously introduced in [Lassila and Rozza, C.M.A.M.E 2010] for shape optimization that combines a suitable low-dimensional parametrization of the geometry (yielding a geometrical reduction) with reduced basis methods (yielding a reduction of computational complexity). More precisely, free-form deformation techniques are considered for the geometry description and its parametrization, while reduced basis methods are used upon a finite element discretization to solve systems of parametrized partial differential equations. This allows an efficient flow field computation and cost functional evaluation during the iterative optimization procedure, resulting in effective computational savings with respect to usual shape optimization strategies. This approach is very general and can be applied to a broad variety of problems. In this paper we apply it to find the optimal shape of aorto-coronaric bypass anastomoses based on vorticity minimization in the down-field region. Blood flows in the coronary arteries are modelled using the Stokes equations; afterwards, the results have been verified in feedback using the Navier-Stokes equations.

Journal ArticleDOI
TL;DR: This algorithm uses the Gröbner basis method to determine the basis for integrand-level reduction, the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts, and the resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cutting techniques.
Abstract: We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 − 2ϵ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
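
The Gröbner-basis step can be reproduced on a toy ideal with SymPy; the paper's BasisDet package applies the same machinery (in Mathematica) to the polynomial systems arising from unitarity cuts. The polynomials below are an arbitrary illustrative example, not taken from the paper.

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')
# Toy polynomial ideal standing in for the unitarity-cut equations.
G = groebner([x**2 + y + z - 1,
              x + y**2 + z - 1,
              x + y + z**2 - 1], x, y, z, order='lex')
print(G)  # the lexicographic Groebner basis of the ideal
```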

Posted Content
TL;DR: In this paper, a new greedy strategy for obtaining good spaces was given in the context of the reduced basis method for solving a parametric family of PDEs, which can also be applied to the same greedy procedure in general Banach spaces.
Abstract: Given a Banach space X and one of its compact sets F, we consider the problem of finding a good n dimensional space X_n \subset X which can be used to approximate the elements of F. The best possible error we can achieve for such an approximation is given by the Kolmogorov width d_n(F)_X. However, finding the space which gives this performance is typically numerically intractable. Recently, a new greedy strategy for obtaining good spaces was given in the context of the reduced basis method for solving a parametric family of PDEs. The performance of this greedy algorithm was initially analyzed in A. Buffa, Y. Maday, A.T. Patera, C. Prud'homme, and G. Turinici, "A Priori convergence of the greedy algorithm for the parameterized reduced basis", M2AN Math. Model. Numer. Anal., 46(2012), 595-603 in the case X = H is a Hilbert space. The results there were significantly improved on in P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk, "Convergence rates for greedy algorithms in reduced bases Methods", SIAM J. Math. Anal., 43 (2011), 1457-1472. The purpose of the present paper is to give a new analysis of the performance of such greedy algorithms. Our analysis not only gives improved results for the Hilbert space case but can also be applied to the same greedy procedure in general Banach spaces.

Journal ArticleDOI
Federico Chavez1, Claude Duhr1
TL;DR: In this paper, the authors studied one and two-loop triangle integrals with massless propagators and all external legs off shell and showed that there is a kinematic region where the results can be expressed in terms of a basis of single-valued polylogarithms in one complex variable.
Abstract: We study one- and two-loop triangle integrals with massless propagators and all external legs off shell. We show that there is a kinematic region where the results can be expressed in terms of a basis of single-valued polylogarithms in one complex variable. The relevant space of single-valued functions can be determined a priori and the results take a strikingly simple and compact form when written in terms of this basis. We study the properties of the basis functions and illustrate how one can easily analytically continue our results to all kinematic regions where the external masses have the same sign.

Journal ArticleDOI
TL;DR: It is shown that every linear optical component can be completely described as a device that converts one set of orthogonal input modes, one by one, to a matching set of Orthogonal output modes.
Abstract: We show that every linear optical component can be completely described as a device that converts one set of orthogonal input modes, one by one, to a matching set of orthogonal output modes. This result holds for any linear optical structure with any specific variation in space and/or time of its structure. There are therefore preferred orthogonal "mode converter" basis sets of input and output functions for describing any linear optical device, in terms of which the device can be described by a simple diagonal operator. This result should help us understand what linear optical devices we can and cannot make. As illustrations, we use this approach to derive a general expression for the alignment tolerance of an efficient mode coupler and to prove that lossless combining of orthogonal modes is impossible.
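
In matrix terms, the stated result is the singular value decomposition of the device's transfer matrix: the right-singular vectors are the preferred input modes, the left-singular vectors the matching output modes, and the singular values the coupling strengths. A small NumPy check on an arbitrary illustrative matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary illustrative transfer matrix of a (lossy) linear device,
# sampled on some input/output basis.
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(T)
# The device maps the orthogonal input mode conj(Vh[k]) to s[k] * U[:, k]:
# a diagonal "mode converter" between the preferred mode sets.
assert np.allclose(T @ Vh[0].conj(), s[0] * U[:, 0])
```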

Journal ArticleDOI
TL;DR: In this article, the authors investigated the sources of error in the initiator adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence.
Abstract: Using the homogeneous electron gas (HEG) as a model, we investigate the sources of error in the “initiator” adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence. In particular, we find that the fixed-shift phase, where the walker number is allowed to grow slowly, can be used to effectively assess stochastic and initiator error. Using this approach we provide simple explanations for the internal parameters of an i-FCIQMC simulation. We exploit the consistent basis sets and adjustable correlation strength of the HEG to analyze properties of the algorithm, and present finite basis benchmark energies for N = 14 over a range of densities 0.5 ⩽ rs ⩽ 5.0 a.u. A single-point extrapolation scheme is introduced to produce complete basis energies for 14, 38, and 54 electrons. It is empirically found that, in the weakly correlated regime, the computational cost scales linearly with the plane wave basis set size, which is justifiable on physical grounds. ...

Journal ArticleDOI
TL;DR: In this article, the authors introduce a generalized framework for sampling and reconstruction in separable Hilbert spaces, and show that it is always possible to reconstruct a vector in an arbitrary Riesz basis from sufficiently many of its samples in any other Riez basis.
Abstract: We introduce a generalized framework for sampling and reconstruction in separable Hilbert spaces. Specifically, we establish that it is always possible to stably reconstruct a vector in an arbitrary Riesz basis from sufficiently many of its samples in any other Riesz basis. This framework can be viewed as an extension of the well-known consistent reconstruction technique (Eldar et al.). However, whilst the latter imposes stringent assumptions on the reconstruction basis, and may in practice be unstable, our framework allows for recovery in any (Riesz) basis in a manner that is completely stable. Whilst the classical Shannon Sampling Theorem is a special case of our theorem, this framework allows us to exploit additional information about the approximated vector (or, in this case, function), for example sparsity or regularity, to design a reconstruction basis that is better suited. Examples are presented illustrating this procedure.
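
In a finite discretization, the reconstruction described above reduces to a least-squares solve. A minimal sketch, where S maps a vector to its samples in the sampling basis and the columns of R span the reconstruction space; both matrices are illustrative finite stand-ins for the Riesz bases.

```python
import numpy as np

def reconstruct(samples, S, R):
    """Recover a vector from its samples S @ f in one basis by expanding
    it as f ~= R @ c in another: solve the least-squares system
    (S @ R) c = samples. Stability improves as more samples are taken."""
    c, *_ = np.linalg.lstsq(S @ R, samples, rcond=None)
    return R @ c
```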

Journal ArticleDOI
TL;DR: A new adaptive sampling procedure for saddle point problems constructing approximation spaces that are stable and, compared to earlier approaches, computationally much more efficient.
Abstract: We present reduced basis approximations and associated rigorous a posteriori error bounds for parametrized saddle point problems. First, we develop new a posteriori error estimates that, unlike earlier approaches, provide upper bounds for the errors in the approximations of the primal variable and the Lagrange multiplier separately. The proposed method is an application of Brezzi's theory for saddle point problems to the reduced basis context and exhibits significant advantages over existing methods. Second, based on an analysis of Brezzi's theory, we compare several options for the reduced basis approximation space from the perspective of approximation stability and computational cost. Finally, we introduce a new adaptive sampling procedure for saddle point problems constructing approximation spaces that are stable and, compared to earlier approaches, computationally much more efficient. The method is applied to a Stokes flow problem in a two-dimensional channel with a parametrized rectangular obstacle. ...

Proceedings ArticleDOI
29 Oct 2012
TL;DR: This paper is the first to investigate the problem of efficient retrieval of recommendations in a MF framework and proposes two techniques for efficient search, which are fairly independent of each other and hence are easily combined to further improve recommendation retrieval efficiency.
Abstract: Low-rank Matrix Factorization (MF) methods provide one of the simplest and most effective approaches to collaborative filtering. This paper is the first to investigate the problem of efficient retrieval of recommendations in a MF framework. We reduce the retrieval in a MF model to an apparently simple task of finding the maximum dot-product for the user vector over the set of item vectors. However, to the best of our knowledge the problem of efficiently finding the maximum dot-product in the general case has never been studied. To this end, we propose two techniques for efficient search -- (i) We index the item vectors in a binary spatial-partitioning metric tree and use a simple branch-and-bound algorithm with a novel bounding scheme to efficiently obtain exact solutions. (ii) We use spherical clustering to index the users on the basis of their preferences and pre-compute recommendations only for the representative user of each cluster to obtain extremely efficient approximate solutions. We obtain a theoretical error bound which determines the quality of any approximate result and use it to control the approximation. Both these simple techniques are fairly independent of each other and hence are easily combined to further improve recommendation retrieval efficiency. We evaluate our algorithms on real-world collaborative-filtering datasets, demonstrating more than ×7 speedup (with respect to the naive linear search) for the exact solution and over ×250 speedup for approximate solutions by combining both techniques.
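
The clustering-based approximate retrieval in (ii) can be sketched directly; the branch-and-bound metric-tree technique in (i) is not shown. Names and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def precompute(users, items, n_clusters=50, top_k=10, seed=0):
    """Spherically cluster users (k-means on normalized vectors) and
    pre-compute the maximum dot-product items for each representative."""
    U = users / np.linalg.norm(users, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(U)
    scores = km.cluster_centers_ @ items.T           # (clusters, items)
    recs = np.argsort(-scores, axis=1)[:, :top_k]
    return km, recs

def recommend(km, recs, user):
    """Serve the cached recommendations of the user's nearest cluster."""
    u = user / np.linalg.norm(user)
    return recs[km.predict(u.reshape(1, -1))[0]]
```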

Posted Content
TL;DR: In this paper, a demixing framework based on convex optimization is proposed to solve the problem of identifying two structured signals given only the sum of the two signals and prior information about their structures.
Abstract: Demixing refers to the challenge of identifying two structured signals given only the sum of the two signals and prior information about their structures. Examples include the problem of separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis, and the problem of decomposing an observed matrix into a low-rank matrix plus a sparse matrix. This paper describes and analyzes a framework, based on convex optimization, for solving these demixing problems, and many others. This work introduces a randomized signal model which ensures that the two structures are incoherent, i.e., generically oriented. For an observation from this model, this approach identifies a summary statistic that reflects the complexity of a particular signal. The difficulty of separating two structured, incoherent signals depends only on the total complexity of the two structures. Some applications include (i) demixing two signals that are sparse in mutually incoherent bases; (ii) decoding spread-spectrum transmissions in the presence of impulsive errors; and (iii) removing sparse corruptions from a low-rank matrix. In each case, the theoretical analysis of the convex demixing method closely matches its empirical behavior.

Journal ArticleDOI
TL;DR: In this paper, a simple and efficient method to reconstruct an element of a Hilbert space in terms of an arbitrary finite collection of linearly independent reconstruction vectors, given a finite number of its samples with respect to any Riesz basis, is introduced.

Book ChapterDOI
01 Jan 2012
TL;DR: This paper surveys results on the NP-hard mixed-integer quadratically constrained programming problem and discusses relaxations and inequalities arising from the algebraic description of the problem as well as from dynamic procedures based on disjunctive programming.
Abstract: This paper surveys results on the NP-hard mixed-integer quadratically constrained programming problem. The focus is on strong convex relaxations and valid inequalities, which can become the basis of efficient global techniques. In particular, we discuss relaxations and inequalities arising from the algebraic description of the problem as well as from dynamic procedures based on disjunctive programming. These methods can be viewed as generalizations of techniques for mixed-integer linear programming. We also present brief computational results to indicate the strength and computational requirements of these methods.

Journal ArticleDOI
TL;DR: Results computed on some combinations of 2D and 3D geometries representing cardiovascular networks show the advantage of the RBHM in terms of reduced computational costs and of the quality of the coupling, which guarantees continuity of stresses, pressure, and velocity at subdomain interfaces.