Journal ISSN: 0923-6082

Multidimensional Systems and Signal Processing 

Springer Science+Business Media
About: Multidimensional Systems and Signal Processing is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of Computer science and Filter (signal processing). Its ISSN identifier is 0923-6082. Over its lifetime, it has published 1169 papers, which have received 15023 citations.


Papers
Journal ArticleDOI
TL;DR: Experimental results show the higher security of the proposed scheme through correlation, entropy, histogram, diffusion-characteristic and key-sensitivity analyses.
Abstract: Owing to social networks, demand for sharing multimedia data has increased significantly in the last decade. However, the low complexity of, and frequent security breaches on, public networks such as the Internet make it easy for eavesdroppers to access the actual contents without any hurdle. Researchers have developed many encryption algorithms to secure such traffic and make it difficult for eavesdroppers to access the data. However, these traditional algorithms increase communication overhead and computational cost, and do not provide security against new attacks. These issues motivate researchers to explore this area further and to propose algorithms that have lower overhead and higher efficiency than existing techniques and that meet the requirements of next-generation multimedia networks. To address these issues, and with next-generation multimedia networks in mind, we propose a secure and lightweight encryption scheme for digital images. The proposed technique first divides the plaintext image into a number of blocks and computes the correlation coefficients of each block. Based on a pre-defined threshold, the blocks with the largest correlation coefficient values are pixel-wise XORed with random numbers generated from a skew tent map. Finally, the whole image is permuted via two random sequences generated from the TD-ERCS chaotic map. Experimental results show the higher security of the proposed scheme through correlation, entropy, histogram, diffusion-characteristic and key-sensitivity analyses.
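The XOR diffusion step can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact key schedule: the seed x0, map parameter p, and 8-bit quantization of the chaotic sequence are assumptions chosen for the example.

```python
import numpy as np

def skew_tent_sequence(x0, p, n):
    """Iterate the skew tent map n times from seed x0.

    x_{k+1} = x_k / p             if x_k < p
            = (1 - x_k) / (1 - p) otherwise,  with 0 < p < 1
    """
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        seq[i] = x
    return seq

def diffuse_block(block, x0=0.37, p=0.61):
    """Pixel-wise XOR of an 8-bit image block with a skew-tent keystream."""
    flat = block.ravel()
    keystream = (skew_tent_sequence(x0, p, flat.size) * 256).astype(np.uint8)
    return (flat ^ keystream).reshape(block.shape)

block = np.arange(64, dtype=np.uint8).reshape(8, 8)
encrypted = diffuse_block(block)
```

Because XOR is its own inverse, applying diffuse_block twice with the same (x0, p) key recovers the original block, which is what makes this diffusion step trivially decryptable for the legitimate receiver.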

158 citations

Journal ArticleDOI
TL;DR: Results on the stability and convergence properties of a general class of iterative learning control schemes using, in the main, theory first developed for the branch of 2D linear systems known as linear repetitive processes are developed.
Abstract: This paper first develops results on the stability and convergence properties of a general class of iterative learning control schemes using, in the main, theory first developed for the branch of 2D linear systems known as linear repetitive processes. A general learning law that uses information from the current and a finite number of previous trials is considered, and the results, in the form of fundamental limitations on the benefits of using this law, are interpreted in terms of basic systems-theoretic concepts such as the relative degree and minimum-phase characteristics of the example under consideration. Following this, previously reported powerful 2D predictive and adaptive control algorithms are reviewed. Finally, new iterative adaptive learning control laws that solve the iterative learning control problem under weak assumptions are developed.
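The trial-to-trial structure that this class of schemes generalizes can be illustrated with the simplest member of the family, a P-type iterative learning control law. The static-gain plant and the learning gain gamma below are toy assumptions for the sketch, not the paper's setting:

```python
import numpy as np

def p_type_ilc(plant, reference, trials=50, gamma=0.5):
    """P-type ILC: repeat the same task, updating the input with the last error.

    u_{k+1}(t) = u_k(t) + gamma * e_k(t),   e_k = reference - y_k
    """
    u = np.zeros_like(reference)
    errors = []
    for _ in range(trials):
        y = plant(u)                      # run one trial
        e = reference - y                 # trial error
        errors.append(np.linalg.norm(e))
        u = u + gamma * e                 # learning update between trials
    return u, errors

# Toy plant: a static gain standing in for a repetitive process.
plant = lambda u: 0.8 * u
reference = np.sin(np.linspace(0, 2 * np.pi, 100))
u_final, errors = p_type_ilc(plant, reference)
```

For this toy plant the error contracts by a factor |1 - gamma * 0.8| = 0.6 per trial, so the tracking error decays geometrically over trials; the paper's 2D systems analysis characterizes when such trial-to-trial convergence holds for far more general laws.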

152 citations

Journal ArticleDOI
TL;DR: A full-reference metric for quality assessment of stereoscopic images is proposed, based on the binocular fusion process characterizing 3D human perception; the difference of binocular energy shows a high correlation with human judgement for different impairments and is used to build the Binocular Energy Quality Metric (BEQM).
Abstract: Stereoscopic imaging is becoming very popular, and its deployment by means of photography, television, cinema, etc. is rapidly increasing. Access to this type of image entails compression and transmission, which may generate artifacts of different natures. Consequently, it is important to have appropriate tools to measure the quality of stereoscopic content. Several studies have tried to extend well-known metrics, such as PSNR or SSIM, to 3D. However, the results are not as good as for 2D images, and it becomes important to have metrics that deal with 3D perception. In this work, we propose a full-reference metric for quality assessment of stereoscopic images based on the binocular fusion process characterizing 3D human perception. The main idea is to develop a model that reproduces the binocular signal generated by simple and complex cells and estimates the associated binocular energy. The difference of binocular energy has shown a high correlation with human judgement for different impairments and is used to build the Binocular Energy Quality Metric (BEQM). Extensive experiments demonstrate the performance of the BEQM relative to the literature.
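The energy-difference idea can be sketched with a deliberately crude stand-in for the simple/complex-cell model: fusion by summation of the two views and a tiny even/odd filter pair whose squared responses give a complex-cell-style energy. Everything here (the filters, the fusion rule, the averaging) is an assumption for illustration, not the BEQM itself:

```python
import numpy as np

def binocular_energy(left, right):
    """Simplified complex-cell energy of a fused (summed) stereo pair."""
    fused = left.astype(float) + right.astype(float)
    even = np.array([1.0, 2.0, 1.0]) / 4.0   # crude even-symmetric filter
    odd = np.array([-1.0, 0.0, 1.0]) / 2.0   # crude odd-symmetric filter
    e = np.apply_along_axis(lambda r: np.convolve(r, even, 'same'), 1, fused)
    o = np.apply_along_axis(lambda r: np.convolve(r, odd, 'same'), 1, fused)
    return e ** 2 + o ** 2                    # quadrature-pair energy

def energy_difference_score(ref_pair, dist_pair):
    """0 for identical pairs; grows with the energy change the distortion causes."""
    ref_energy = binocular_energy(*ref_pair)
    dist_energy = binocular_energy(*dist_pair)
    return float(np.mean(np.abs(ref_energy - dist_energy)))
```

A full-reference metric like BEQM maps such an energy difference, pooled over space and frequency channels, onto a quality score correlated with human judgement.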

131 citations

Journal ArticleDOI
TL;DR: This paper describes and investigates the performance of a compound-eye system recently reported in the literature, and explores several variations of the imaging system, such as the incorporation of a phase mask in extending the depth of field, which are not possible with a traditional camera.
Abstract: From consumer electronics to biomedical applications, device miniaturization has proven highly desirable. This often includes reducing the size of some optical systems. However, diffraction effects impose a constraint on image quality when we simply scale down the imaging parameters. Over the past few years, the compound-eye imaging system has emerged as a promising architecture in the development of compact visual systems. Because multiple low-resolution (LR) sub-images are captured, post-processing algorithms that reconstruct a high-resolution (HR) final image from the LR images play a critical role in determining image quality. In this paper, we describe and investigate the performance of a compound-eye system recently reported in the literature. We discuss both the physical construction and the mathematical model of the imaging components, followed by an application of our super-resolution algorithm to reconstructing the image. We then explore several variations of the imaging system, such as the incorporation of a phase mask to extend the depth of field, which are not possible with a traditional camera. Simulations with a versatile virtual camera system that we have built verify the feasibility of these additions, and we also report the tolerance of the compound-eye system to variations in physical parameters, such as optical aberrations, that are inevitable in actual systems.
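The role of the LR-to-HR post-processing step can be illustrated with the simplest possible super-resolution scheme, shift-and-add: each sub-image is placed onto the HR grid at its sub-pixel offset and overlapping samples are averaged. The exact integer shifts assumed here are an idealization of a real compound-eye system, and this is not the paper's reconstruction algorithm:

```python
import numpy as np

def shift_and_add_sr(lr_images, shifts, factor):
    """Interleave LR sub-images onto an HR grid at their sub-pixel offsets.

    lr_images : list of (h, w) arrays
    shifts    : list of integer (dy, dx) offsets on the HR grid, 0 <= d < factor
    factor    : upsampling factor of the HR grid
    """
    h, w = lr_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros((h * factor, w * factor))
    for img, (dy, dx) in zip(lr_images, shifts):
        acc[dy::factor, dx::factor] += img   # place samples at their offset
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1                        # avoid division by zero in gaps
    return acc / cnt
```

When the sub-images together sample every HR grid position exactly once (a complete set of distinct offsets), this reconstruction is exact; real systems need the more elaborate algorithms the paper discusses because shifts are fractional and optics blur the samples.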

128 citations

Journal ArticleDOI
TL;DR: This paper proposes a simple algorithm for Tucker factorization of a tensor with missing data and its application to low-n-rank tensor completion and demonstrates in several numerical experiments that the proposed algorithm performs well even when the ranks are significantly overestimated.
Abstract: The problem of tensor completion arises often in signal processing and machine learning. It consists of recovering a tensor from a subset of its entries. The usual structural assumption on a tensor that makes the problem well posed is that the tensor has low rank in every mode. Several tensor completion methods based on minimization of the nuclear norm, which is the closest convex approximation of rank, have been proposed recently, with applications mostly in image inpainting problems. It is often stated in these papers that methods based on Tucker factorization perform poorly when the true ranks are unknown. In this paper, we propose a simple algorithm for Tucker factorization of a tensor with missing data and its application to low-n-rank tensor completion. The algorithm is similar to a previously proposed method for PARAFAC decomposition with missing data. We demonstrate in several numerical experiments that the proposed algorithm performs well even when the ranks are significantly overestimated. Approximate reconstruction can be obtained when the ranks are underestimated. The algorithm outperforms nuclear norm minimization methods when the fraction of known elements of a tensor is low.
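An EM-style sketch conveys the general idea of Tucker-based completion: fill the missing entries with the current estimate, refit a low-multilinear-rank model, and repeat. The truncated-HOSVD refit and the mean initialization below are assumptions for the sketch, not the paper's exact algorithm:

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Project T onto the leading `ranks` singular subspaces of each mode."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T.copy()
    for mode, U in enumerate(factors):      # core = T x_m U_m^T for each mode
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(factors):      # expand back: core x_m U_m
        approx = np.moveaxis(
            np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

def tucker_complete(T_obs, mask, ranks, iters=100):
    """Alternate between imputing missing entries and refitting a Tucker model."""
    X = np.where(mask, T_obs, T_obs[mask].mean())   # mean-fill the gaps
    for _ in range(iters):
        approx = hosvd_truncate(X, ranks)
        X = np.where(mask, T_obs, approx)           # keep observed entries fixed
    return X
```

When the data really do have low multilinear rank and enough entries are observed, each refit pulls the imputed values toward the low-rank model, and the iteration settles on a completion consistent with the observations.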

125 citations

Performance Metrics

No. of papers from the Journal in previous years:

Year    Papers
2023        19
2022        65
2021        78
2020        80
2019       102
2018        98