Topic

Sparse matrix

About: Sparse matrix is a research topic. Over the lifetime of the topic, 13,025 publications have been published, receiving 393,290 citations. The topic is also known as: sparse array.


Papers
Proceedings ArticleDOI
04 Nov 2010
TL;DR: To reduce communication expenses between multiple processors, two solutions are offered, and a method for calculating the n-th power of a massive matrix is proposed.
Abstract: This paper first describes the traditional distribution schemes of parallel matrix multiplication and analyzes their efficiency. To reduce communication expenses between multiple processors, two solutions are offered in this paper, and the corresponding implementation models are also listed. A method for calculating the n-th power of a massive matrix is then proposed. Finally, the experimental results show that this method of calculating the n-th power of a massive matrix reduces the execution time and improves efficiency.
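The paper's distribution models are not given here; as a minimal, purely serial sketch of the n-th-power computation it builds on, exponentiation by squaring needs only O(log n) matrix multiplications instead of n - 1. Everything below is illustrative and is not presented as the paper's method.

    # Illustrative sketch (not the paper's parallel scheme): A**n by repeated squaring.
    import numpy as np

    def matrix_power(A, n):
        """Return A**n for a square matrix A and integer n >= 1, using
        exponentiation by squaring (O(log n) matrix multiplications)."""
        result = np.eye(A.shape[0], dtype=A.dtype)
        base = A.copy()
        while n > 0:
            if n & 1:                 # current bit of n is set: multiply it in
                result = result @ base
            base = base @ base        # square the base for the next bit
            n >>= 1
        return result

    A = np.random.default_rng(0).standard_normal((4, 4))
    assert np.allclose(matrix_power(A, 5), np.linalg.matrix_power(A, 5))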

1 citation

Proceedings ArticleDOI
03 May 2016
TL;DR: Algebraic geometry codes over maximal curves are employed to construct low-coherence non-binary sensing matrices, and these matrices are shown to outperform Gaussian matrices in terms of noiseless and noisy signal recovery.
Abstract: Designing a measurement matrix is a principal problem in the theory of compressive sensing. Channel coding generator matrices are commonly employed to design a measurement matrix. To the authors' knowledge, few studies have connected algebraic geometry codes and compressive sensing. In this paper, we employ algebraic geometry codes to construct low-coherence non-binary sensing matrices. With this method, we introduce a new group of deterministic sensing matrices employing maximal curves. A comparison of the Gaussian matrix and the proposed matrices shows that the proposed matrices outperform the Gaussian matrices in terms of noiseless and noisy signal recovery.
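The algebraic-geometry-code construction itself cannot be reproduced from the abstract; as a point of reference, the figure of merit it targets, mutual coherence, is straightforward to compute. The sketch below (illustrative only) evaluates it for a random Gaussian matrix, the baseline the abstract compares against.

    # Illustrative sketch: mutual coherence of a measurement matrix.
    import numpy as np

    def mutual_coherence(Phi):
        """Largest absolute inner product between distinct normalized columns of Phi."""
        cols = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
        G = np.abs(cols.T @ cols)      # Gram matrix of the normalized columns
        np.fill_diagonal(G, 0.0)       # ignore self-correlations on the diagonal
        return G.max()

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((32, 128))   # 32 measurements of 128-dimensional signals
    print("coherence of a Gaussian matrix:", mutual_coherence(Phi))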

1 citation

Book ChapterDOI
20 Jun 1994
TL;DR: This paper describes how this approach to computing the SVD of bidiagonal matrices can be implemented efficiently on the Connection Machine CM-5/CM-5E, and numerical results illustrate that the approach considered yields accurate singular values as well as good performance.
Abstract: The Singular Value Decomposition (SVD) is an algorithm that plays an essential role in many applications. There is a need for fast SVD algorithms in applications such as signal processing that require the SVD to be obtained or updated in real time. One technique for obtaining the SVD of a real dense matrix is to first reduce the dense matrix to bidiagonal form and then compute the SVD of the bidiagonal matrix. In this paper we describe how this approach can be implemented efficiently on the Connection Machine CM-5/CM-5E. Timing results show that use of the described techniques yields up to 45% of peak performance in the reduction from dense to bidiagonal form. Numerical results regarding the SVD computation of bidiagonal matrices illustrate that the approach considered yields accurate singular values as well as good performance. We also discuss the dependence between the accuracy of the singular values and the accuracy of the singular vectors.
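A rough serial sketch of the two-step approach described in the abstract, Householder reduction of a dense matrix to upper bidiagonal form followed by an SVD of the bidiagonal matrix, is given below. It only illustrates the idea and is unrelated to the CM-5/CM-5E implementation; the helper names are assumptions.

    # Illustrative sketch: dense matrix -> bidiagonal form -> singular values.
    import numpy as np

    def house(x):
        """Unit Householder vector v with (I - 2 v v^T) x proportional to e1."""
        v = x.astype(float).copy()
        sign = 1.0 if v[0] >= 0 else -1.0
        v[0] += sign * np.linalg.norm(x)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def bidiagonalize(A):
        """Golub-Kahan reduction of an m x n (m >= n) matrix to upper bidiagonal form."""
        B = A.astype(float).copy()
        m, n = B.shape
        for k in range(n):
            # left reflection: zero the entries below the diagonal in column k
            v = house(B[k:, k])
            B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
            if k < n - 2:
                # right reflection: zero the entries right of the superdiagonal in row k
                w = house(B[k, k + 1:])
                B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
        return B

    A = np.random.default_rng(1).standard_normal((8, 5))
    B = bidiagonalize(A)
    # Orthogonal transformations preserve singular values, so B and A agree.
    assert np.allclose(np.linalg.svd(B, compute_uv=False),
                       np.linalg.svd(A, compute_uv=False))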

1 citation

Journal ArticleDOI
TL;DR: In this paper, the authors proposed two heuristics for bandwidth reduction of large-scale sparse matrices in serial computations based on the Fast Node Centroid Hill-Climbing algorithm and the iterated local search metaheuristic.
Abstract: This paper considers the bandwidth reduction problem for large-scale sparse matrices in serial computations. A heuristic for bandwidth reduction reorders the rows and columns of a given sparse matrix. Thus, the method places entries with a nonzero value as close to the main diagonal as possible. Bandwidth optimization is a critical issue for many scientific and engineering applications. This manuscript proposes two heuristics for the bandwidth reduction of large-scale matrices. The first is a variant of the Fast Node Centroid Hill-Climbing algorithm, and the second is an algorithm based on the iterated local search metaheuristic. This paper then experimentally compares the solutions yielded by the new reordering algorithms with the bandwidth solutions delivered by state-of-the-art heuristics for the problem, including tests on large-scale problem matrices. A considerable number of results for a range of realistic test problems showed that the performance of the two new algorithms compared favorably with state-of-the-art heuristics for bandwidth reduction. Specifically, the variant of the Fast Node Centroid Hill-Climbing algorithm yielded the overall best bandwidth results.
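Neither of the two proposed heuristics is specified in the abstract, so the sketch below uses SciPy's reverse Cuthill-McKee ordering as a classic stand-in, only to show what bandwidth reduction by symmetric row/column permutation looks like; it is not the FNCHC variant or the iterated-local-search algorithm.

    # Illustrative sketch: bandwidth before and after a reordering heuristic (RCM).
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def bandwidth(A):
        """Maximum |i - j| over the nonzero entries a_ij of a sparse matrix."""
        coo = A.tocoo()
        return int(np.abs(coo.row - coo.col).max())

    A = sparse_random(500, 500, density=0.01, random_state=0, format="csr")
    A = (A + A.T).tocsr()                    # symmetrize so a symmetric permutation applies
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    B = A[perm, :][:, perm]                  # apply the same permutation to rows and columns
    print("bandwidth before:", bandwidth(A), "after RCM:", bandwidth(B))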

1 citation

Journal ArticleDOI
Árpád Szlávik
01 Dec 2001
TL;DR: A new general solution method is derived for GI/G/1-type processes, that is, for the steady-state distribution of infinite block-structured Markov chains with repetitive structure, using matrix addition and matrix multiplication only.
Abstract: A new general solution method is derived for general GI/G/1-type processes, that is, for the steady-state distribution of infinite block-structured Markov chains with repetitive structure. While matrix inversion is needed in each iteration step of other general (and of more specialized) matrix-analytic procedures, the method presented here uses matrix addition and matrix multiplication only. In exchange, the computational complexity and the memory requirement increase with each iteration step of the proposed method. This paper, however, gives priority to the theoretical aspects of the general solution.
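The paper's general GI/G/1-type construction cannot be reproduced from the abstract; in the same spirit, for the quasi-birth-death special case the minimal solution R of R = A0 + R A1 + R^2 A2 can be computed by a fixed-point iteration that uses only matrix additions and multiplications, with no inversion. The sketch below, including the toy transition blocks, is an assumption-laden illustration, not the paper's algorithm.

    # Illustrative sketch: inversion-free fixed-point iteration for a QBD rate matrix.
    import numpy as np

    def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=100000):
        """Minimal nonnegative solution R of R = A0 + R A1 + R^2 A2, computed by
        the iteration R <- A0 + R A1 + R^2 A2 (additions and multiplications only)."""
        R = np.zeros_like(A0)
        for _ in range(max_iter):
            R_next = A0 + R @ A1 + R @ R @ A2
            if np.max(np.abs(R_next - R)) < tol:
                return R_next
            R = R_next
        return R

    # Toy discrete-time QBD blocks: level-up (A0), local (A1), level-down (A2);
    # each row of A0 + A1 + A2 sums to 1 and the downward drift dominates.
    A0 = np.array([[0.10, 0.05], [0.05, 0.10]])
    A1 = np.array([[0.30, 0.20], [0.20, 0.30]])
    A2 = np.array([[0.25, 0.10], [0.15, 0.20]])
    R = qbd_rate_matrix(A0, A1, A2)
    print("R =\n", R)
    print("residual:", np.max(np.abs(R - (A0 + R @ A1 + R @ R @ A2))))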

1 citation


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 86% related
Artificial neural network: 207K papers, 4.5M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    103
2022    312
2021    595
2020    668
2019    710
2018    880