
Showing papers in "Multidimensional Systems and Signal Processing in 2019"


Journal ArticleDOI
TL;DR: Experimental results show higher security via checking the correlation, entropy, histogram, diffusion characteristics and key sensitivity of the proposed scheme.
Abstract: Due to social networks, the demand for sharing multimedia data has increased significantly in the last decade. However, low complexity and frequent security breaches on public networks such as the Internet make it easy for eavesdroppers to approach the actual contents without any hurdle. Many encryption algorithms have been developed by researchers to increase the security of such traffic and make it difficult for eavesdroppers to access actual data. However, these traditional algorithms increase the communication overhead and computational cost, and do not provide security against new attacks. These issues motivate researchers to further explore this area and propose algorithms that have lower overhead and higher efficiency than existing techniques, and that meet the requirements of next-generation multimedia networks. To address these issues, and keeping in mind the future of next-generation multimedia networks, we propose a secure and lightweight encryption scheme for digital images. The proposed technique initially divides the plaintext image into a number of blocks, and the correlation coefficients of each block are then calculated. The blocks with the maximum correlation coefficient values are pixel-wise XORed with random numbers generated from a skew tent map, based on a pre-defined threshold value. Finally, the whole image is permuted via two random sequences generated from the TD-ERCS chaotic map. Experimental results show higher security via checking the correlation, entropy, histogram, diffusion characteristics and key sensitivity of the proposed scheme.
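The XOR step described above can be sketched in a few lines. This is a hedged illustration only: the skew tent map parameters `p` and `x0` below are hypothetical, and the byte quantization is one plausible keystream construction, not the authors' exact design.

```python
# Sketch of the block XOR step, assuming a skew tent map keystream with
# hypothetical parameters p (0 < p < 1) and seed x0.
def skew_tent_stream(x0, p, n):
    """Generate n keystream bytes from the skew tent map."""
    x, out = x0, []
    for _ in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to a byte
    return out

def xor_block(pixels, x0=0.37, p=0.6):
    """Pixel-wise XOR of one image block with the chaotic keystream."""
    ks = skew_tent_stream(x0, p, len(pixels))
    return [px ^ k for px, k in zip(pixels, ks)]

block = [12, 200, 45, 99]
cipher = xor_block(block)
# XORing again with the same keystream restores the block
assert xor_block(cipher) == block
```

Because XOR is its own inverse, decryption of this step reuses the same keystream.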

158 citations


Journal ArticleDOI
TL;DR: A combined non-convex higher order total variation with overlapping group sparse regularizer for blocky artifact removal and an iteratively re-weighted alternating direction method of multipliers algorithm to deal with the constraints and subproblems are developed.
Abstract: It is widely known that total variation image restoration suffers from staircasing artifacts, which result in blocky restored images. In this paper, we address this problem by proposing a combined non-convex higher order total variation with an overlapping group sparse regularizer. The hybrid scheme of the overlapping group sparse term and the non-convex higher order total variation term for blocky artifact removal is complementary. The overlapping group sparse term tends to smooth out blockiness in the restored image more globally, while the non-convex higher order term tends to smooth parts that are more local to texture while preserving sharp edges. To solve the proposed image restoration model, we develop an iteratively re-weighted $$\ell _1$$ based alternating direction method of multipliers algorithm to deal with the constraints and subproblems. In this study, the images are degraded with different levels of Gaussian noise. A comparative analysis of the proposed method with the overlapping group sparse total variation, the Lysaker, Lundervold and Tai model, the total generalized variation and the non-convex higher order total variation was carried out for image denoising. The results in terms of peak signal-to-noise ratio and structural similarity index measure show that the proposed method gave better performance than the compared algorithms.
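The flavor of the hybrid regularizer can be illustrated on a 1-D signal. The group size `K`, the log-type non-convex penalty, and the weight `mu` below are illustrative assumptions, not the paper's exact functional; the point is that the composite penalty scores a blocky signal higher than a smooth ramp.

```python
# Illustrative 1-D version of the composite regularizer: an overlapping
# group sparse term on first differences plus a non-convex (log) penalty
# on second differences. K, eps and mu are hypothetical choices.
import math

def diff(x):
    """First-order forward differences."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def ogs_term(d, K=3):
    """Overlapping group sparsity: sum of l2 norms of sliding groups."""
    return sum(math.sqrt(sum(v * v for v in d[i:i + K]))
               for i in range(len(d) - K + 1))

def nonconvex_ho_term(x, eps=0.1):
    """Non-convex log penalty on second-order differences."""
    return sum(math.log(1.0 + abs(v) / eps) for v in diff(diff(x)))

def regularizer(x, mu=0.5):
    return ogs_term(diff(x)) + mu * nonconvex_ho_term(x)

smooth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # gentle ramp
blocky = [0.0, 0.0, 0.0, 0.5, 0.5, 0.5]   # staircased jump
assert regularizer(smooth) < regularizer(blocky)
```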

39 citations


Journal ArticleDOI
TL;DR: The proposed technique for color retinal image enhancement appears to achieve superior image enhancement with sufficient contrast; these results are better than those of other related techniques.
Abstract: Retinal imaging is used to diagnose common eye diseases. But retinal images that suffer from blurring, uneven illumination and low contrast become useless for further diagnosis by automated systems. In this work, we have proposed a new method for overall contrast enhancement of color retinal images. Initially, a gain matrix of luminance values, obtained by an adaptive gamma correction method, is used to enhance all three color channels of the images. After that, quantile-based histogram equalization is used to enhance the overall visibility of the images. Enhancement results of the proposed method are compared with several other existing methods. Performance of the proposed method is evaluated on all images of the publicly available Messidor database. Based on the assessment measures, we have shown that the proposed method is able to enhance the contrast of a given color retinal image without changing its structural information. The proposed technique appears to achieve superior image enhancement with sufficient contrast; these results are better than those of other related techniques. This technique for color retinal image enhancement might be used to help ophthalmologists screen retinal diseases more efficiently, and to support the development of improved automated image analysis for clinical diagnosis.
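The two-stage idea can be sketched as follows. The gamma value is a hypothetical parameter, and plain CDF-based equalization stands in for the paper's quantile-based variant; the gain matrix (corrected luminance divided by original luminance) would multiply all three color channels.

```python
# Stage 1: a per-pixel gain from gamma-corrected luminance; stage 2:
# histogram equalization via the cumulative distribution.
def gamma_gain(lum, gamma=0.6):
    """Gain = gamma-corrected luminance / original luminance (8-bit scale)."""
    return [((v / 255.0) ** gamma * 255.0) / v if v > 0 else 1.0 for v in lum]

def equalize(channel, levels=256):
    """Classic histogram equalization using the CDF of intensities."""
    hist = [0] * levels
    for v in channel:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    n = len(channel)
    return [round((cdf[v] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
            for v in channel]

lum = [10, 40, 90, 200]
gains = gamma_gain(lum)
assert gains[0] > gains[3]                       # dark pixels get more gain
assert equalize(lum) == [0, 85, 170, 255]        # intensities spread out
```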

38 citations


Journal ArticleDOI
TL;DR: The introduced approach selects the most discriminative joints of a skeleton model in the considered classification problem, in a binary or fuzzy way, using hill climbing and genetic search strategies as well as DTW-transform-based evaluation.
Abstract: The paper is a comprehensive study on classification of motion capture data on the basis of the dynamic time warping (DTW) transform. It presents both a theoretical description of all applied and newly proposed methods and experimentally obtained results on a real dataset of human gait with 436 samples from 30 males. The recognition is carried out by the classical DTW nearest neighbors classifier and the introduced DTW minimum distance scheme. Class prototypes are determined on the basis of DTW alignment and chosen methods of averaging rotations represented by Euler angles and unit quaternions. In the basic classification approach the whole pose configuration space is taken into account. The influence of different rotation distance functions, operating on Euler angles and unit quaternions, on the obtained recognition accuracy is investigated. What is more, differential filtering in the time domain, which approximates angular velocities and accelerations of subsequent joints, is utilized. Because classical subtraction is unworkable for rotations represented by unit quaternions, differential filtering based on a product with a conjugated quaternion is applied. The main contribution of the paper is also related to the proposed and successfully validated approach to exploring the pose configuration space. It selects the most discriminative joints of a skeleton model in the considered classification problem in a binary or fuzzy way. The introduced approach utilizes hill climbing and genetic search strategies as well as DTW-transform-based evaluation. The selection makes the recognition more efficient and reduces pose signatures.
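The DTW core used by both classifiers can be sketched in its classical scalar form; the paper's rotation distance functions on Euler angles or unit quaternions would replace the absolute difference used here.

```python
# Minimal dynamic time warping distance between two 1-D sequences,
# computed with the standard O(n*m) dynamic program.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])   # stand-in for a pose distance
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

assert dtw([1, 2, 3], [1, 2, 2, 3]) == 0.0   # a time-warped copy matches
assert dtw([1, 2, 3], [1, 2, 4]) == 1.0
```

A nearest-neighbour classifier then assigns a query gait sequence the label of the training sample with the smallest DTW distance.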

36 citations


Journal ArticleDOI
TL;DR: The proposed ATSHE scheme, due to its adaptive threshold selection, can successfully enhance images under a wide range of weak-illumination conditions such as backlighting effects, non-uniform illumination, low contrast and dark images.
Abstract: In this paper, a new adaptive thresholding based sub-histogram equalization (ATSHE) scheme is proposed for contrast enhancement and brightness preservation with retention of basic image features. The histogram of an input image is divided into different sub-histograms using adaptive thresholding intensity values. The number of threshold values or sub-histograms of the image is not fixed, but depends on the peak signal-to-noise ratio (PSNR) of the thresholded image. Histogram clipping is also used here to control undesired enhancement of the resultant image, thus avoiding over-enhancement. The median value of the original histogram gives the threshold value of the clipping process. The main objective of the proposed method is to improve contrast enhancement with preservation of the mean brightness value, structural similarity index (SSIM) and information content of the images. Image contrast enhancement is examined by well-known enhancement assessment parameters such as contrast per pixel and the modified measure of enhancement. The mean brightness preservation of the image is evaluated using the absolute mean brightness error value, and feature preservation qualities are checked through SSIM and PSNR values. Through the proposed routine, the enhanced images achieve a good trade-off between feature enhancement, low-contrast boosting and brightness preservation, in addition to retaining the natural feel of the original image. In particular, the proposed ATSHE scheme, due to its adaptive threshold selection, can successfully enhance images under a wide range of weak-illumination conditions such as backlighting effects, non-uniform illumination, low contrast and dark images.
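The clipping step can be sketched directly, since the abstract states that the median of the original histogram gives the clipping threshold; bins above the median count are truncated so no intensity dominates the equalization.

```python
# Histogram clipping at the median bin count, to curb over-enhancement.
def clip_histogram(hist):
    """Clip every bin at the median of the histogram's bin counts."""
    s = sorted(hist)
    median = s[len(s) // 2]
    return [min(h, median) for h in hist]

hist = [2, 50, 3, 8, 1]          # one dominant bin (50)
assert clip_histogram(hist) == [2, 3, 3, 3, 1]
```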

35 citations


Journal ArticleDOI
TL;DR: A multi-path CNN architecture is proposed to detect lung cancer from computed tomography (CT) images: suspicious nodules are generated with a modified version of U-Net, and the generated nodules then become the input data for the model.
Abstract: Lung cancer is the leading cause of cancer-related deaths. Like other cancers, the best solution for lung cancer diagnosis and treatment is early screening. An automatic CAD system for lung cancer screening from computed tomography scans mainly involves two steps: detect all suspicious pulmonary nodules and evaluate the malignancy of the nodules. Recently, there have been many works on the first step, but few on the second. Since the presence of pulmonary nodules does not by itself indicate cancer, and the morphology of nodules, such as shape, size, and contextual information, has a complex relationship with malignancy, the screening of lung cancer needs a careful investigation of each suspicious nodule and integration of information from all nodules. We propose a deep CNN architecture that differs from those traditionally used in computer vision to solve this problem. First, the suspicious nodules are generated with a modified version of U-Net, and the generated nodules then become the input data for our model. The proposed model is a multi-path CNN which exploits both local features and more global contextual features simultaneously to automatically detect lung cancer. To this end, the model uses three paths, each employing a different receptive field size, which helps to model distant dependencies (short- and long-range dependencies of the neighboring pixels). Then, to further improve our model's performance, we concatenate features from the three paths. This balances the receptive-field-size effect and makes our model more adaptable to the variability of shape, size, and contextual information among nodules. Finally, we also introduce a retraining phase that permits us to tackle difficulties related to the imbalance of image labels.
Experimental results on the Kaggle Data Science Bowl 2017 challenge show that our model adapts better to the described inconsistency in nodule size and shape, and also obtains better detection results compared to recently published state-of-the-art methods.
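The multi-path intuition can be shown in miniature: the same input is filtered with three different receptive-field sizes and the resulting feature vectors are concatenated, so short- and long-range context are both represented. The moving-average filters below merely stand in for the paper's convolutional paths.

```python
# Three "paths" with different receptive fields, fused by concatenation.
def path_features(signal, k):
    """Valid-mode moving average with window k (a stand-in for a conv path)."""
    return [sum(signal[i:i + k]) / k for i in range(len(signal) - k + 1)]

signal = [0, 1, 0, 0, 4, 0, 0, 1, 0]
fused = []
for k in (3, 5, 7):              # three receptive-field sizes
    fused.extend(path_features(signal, k))
assert len(fused) == 7 + 5 + 3   # concatenated multi-scale feature vector
```

A classifier operating on `fused` sees both narrow and wide context, which is the balancing effect the abstract describes.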

33 citations


Journal ArticleDOI
TL;DR: For segmenting medical images with abundant noise, blurry boundaries, and intensity heterogeneities effectively, a hybrid active contour model that synthesizes the global information and the local information is proposed.
Abstract: For effectively segmenting medical images with abundant noise, blurry boundaries, and intensity heterogeneities, a hybrid active contour model that synthesizes global and local information is proposed. A novel global energy functional is constructed, together with an adaptive weight derived from the statistical information of image pixels based on the clustering idea. Minimizing this global energy functional in a variational level set formulation drives the curve to the desired boundaries. The local energy functional contains a local threshold, which is used to correct the deviation of the level set function. Experiments demonstrate that the proposed method can segment synthetic and medical images effectively, and has relatively higher performance compared to other representative methods.

32 citations


Journal ArticleDOI
TL;DR: The proposed techniques can separate defective from healthy olive fruits, then detect and classify the actual defective area, and have the highest accuracy rate among the compared techniques.
Abstract: One of the major concerns for fruit-selling companies at present is to find an effective way to rapidly classify fruits and detect their defects. The olive is one of the most important agricultural products, receiving great attention from fruit and vegetable selling companies for its use in various industries such as the oil and pickle industries. The small size and multiple colours of the olive fruit increase the difficulty of detecting external defects. This paper presents new efficient methods for automatically detecting and classifying the external defects of olive fruits. The proposed techniques can separate defective from healthy olive fruits, and then detect and classify the actual defective area. The proposed techniques are based on texture analysis and the homogeneity texture measure. The results and performance of the proposed techniques were compared with various techniques such as Canny, Otsu, the local binary pattern algorithm, K-means, and Fuzzy C-Means algorithms. The results reveal that the proposed techniques have the highest accuracy rate among the compared techniques. Their simplicity and efficiency make them appropriate for designing a low-cost hardware kit that can be used in real applications.
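The homogeneity texture measure mentioned above can be sketched via a gray-level co-occurrence matrix; the 4-level quantization and horizontal-neighbour pairing below are simplifying assumptions. A uniform patch scores 1.0, while a patch with large intensity jumps scores lower, which is how defective texture would stand out.

```python
# GLCM homogeneity for a tiny grayscale patch:
# homogeneity = sum over (i, j) of P(i, j) / (1 + |i - j|).
def homogeneity(img, levels=4):
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontal neighbour pairs
            glcm[a][b] += 1
            pairs += 1
    return sum(glcm[i][j] / pairs / (1 + abs(i - j))
               for i in range(levels) for j in range(levels))

uniform = [[1, 1, 1], [1, 1, 1]]
noisy   = [[0, 3, 0], [3, 0, 3]]
assert homogeneity(uniform) == 1.0
assert homogeneity(noisy) < homogeneity(uniform)
```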

31 citations


Journal ArticleDOI
TL;DR: A novel method for horizon detection that combines a multi-scale approach and a convolutional neural network and is the only one capable of detecting the horizon at high speed with high accuracy, which is attractive for practical applications.
Abstract: This paper proposes a novel method for horizon detection that combines a multi-scale approach and a convolutional neural network (CNN). The ability to detect the horizon is the first step toward situational awareness of autonomous ships, which have recently attracted interest, and greatly affects the performance of subsequent steps and that of the overall system. Since typical approaches for horizon detection mainly use edge information, two challenging issues need to be overcome: the non-stability of edge detection and complex maritime scenes. The proposed method first detects line features by combining edge information from the various scales to reduce the computational time while mitigating the non-stability of edge detection. Subsequently, a CNN is used to verify the edge pixels belonging to the horizon, in order to process complex maritime scenes that contain line features similar to the horizon and changes in the sea state. Finally, linear curve fitting along with median filtering is used iteratively to estimate the horizon line accurately. We compared the performance of the proposed method with state-of-the-art methods using the largest publicly available database. The experimental results showed that the accuracy with which the proposed method can identify the horizon is superior to that of state-of-the-art methods. Our method has a median positional error of less than 1.7 pixels from the center of the horizon and a median angular error of approximately 0.1 $$^{\circ }$$ . Further, our results showed that our method is the only one capable of detecting the horizon at high speed with high accuracy, which is attractive for practical applications.
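The final refinement stage (iterative line fitting on candidate horizon points) can be sketched as follows; dropping the single worst point per round is a simple stand-in for the paper's median-filtered refitting loop.

```python
# Least-squares line fit with iterative outlier rejection.
def fit_line(pts):
    """Least-squares fit of y = m*x + c."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def robust_fit(pts, rounds=2):
    for _ in range(rounds):
        m, c = fit_line(pts)
        residuals = [abs(y - (m * x + c)) for x, y in pts]
        worst = residuals.index(max(residuals))
        pts = pts[:worst] + pts[worst + 1:]   # discard the worst outlier
    return fit_line(pts)

pts = [(0, 5.0), (1, 5.1), (2, 9.0), (3, 5.3), (4, 5.4)]  # one outlier
m, c = robust_fit(pts)
assert abs(m - 0.1) < 1e-6 and abs(c - 5.0) < 1e-6
```

After the outlier at x = 2 is rejected, the remaining points recover the true horizon line y = 0.1x + 5.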

31 citations


Journal ArticleDOI
TL;DR: This work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences, and improves the registration accuracy when compared to the state-of-the-art.
Abstract: In general, the fusion of visible-light and infrared images produces a composite representation where both data are pictured in a single image. The successful development of image/video fusion algorithms relies on realistic infrared/visible-light datasets. To the best of our knowledge, there is a particular shortage of databases with registered and synchronized videos from the infrared and visible-light spectra suitable for image/video fusion research. To address this need we recorded an image/video fusion database using infrared and visible-light cameras under varying illumination conditions. Moreover, different scenarios have been defined to better challenge the fusion methods, with various contexts and contents providing a wide variety of meaningful data for fusion purposes, including non-planar scenes, where objects appear on different depth planes. However, there are several difficulties in creating datasets for research in infrared/visible-light image fusion. Camera calibration, registration, and synchronization can be listed as important steps of this task. In particular, image registration between imagery from sensors of different spectral bands imposes additional difficulties, as it is very challenging to solve the correspondence problem between such images. Motivated by these challenges, this work introduces a novel spatiotemporal video registration method capable of generating registered and temporally aligned infrared/visible-light video sequences. The proposed workflow improves the registration accuracy when compared to the state-of-the-art. By applying the proposed methodology to the recorded database we have generated the visible-light and infrared video database for image fusion, a publicly available database to be used by the research community to test and benchmark fusion schemes.

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a method to derive computationally efficient approximations to the discrete cosine transform (DCT) by minimizing the angle between the rows of the exact DCT matrix and the rows of the approximated transformation matrix.
Abstract: The principal component analysis (PCA) is widely used for data decorrelation and dimensionality reduction. However, the use of PCA may be impractical in real-time applications, or in situations where energy and computing constraints are severe. In this context, the discrete cosine transform (DCT) becomes a low-cost alternative to data decorrelation. This paper presents a method to derive computationally efficient approximations to the DCT. The proposed method aims at the minimization of the angle between the rows of the exact DCT matrix and the rows of the approximated transformation matrix. The resulting transformation matrices are orthogonal and have extremely low arithmetic complexity. Considering popular performance measures, one of the proposed transformation matrices outperforms the best competitors in both matrix error and coding capabilities. Practical applications in image and video coding demonstrate the relevance of the proposed transformation. In fact, we show that the proposed approximate DCT can outperform the exact DCT for image encoding under certain compression ratios. The proposed transform and its direct competitors are also physically realized as digital prototype circuits using FPGA technology.
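The row-angle criterion can be illustrated on a toy 4-point DCT. The {0, ±1} sign-based approximation below is a generic low-complexity matrix, not the transform proposed in the paper; smaller row angles mean the cheap matrix better preserves the DCT's row directions, and the approximation here also happens to keep the rows orthogonal.

```python
# Angle between each exact DCT row and its low-complexity counterpart.
import math

N = 4
dct = [[math.sqrt((1 if k == 0 else 2) / N) *
        math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        for n in range(N)] for k in range(N)]
# Keep the sign of large entries, zero the small ones (multiplierless).
approx = [[(1 if v > 0 else -1) if abs(v) >= 0.35 else 0 for v in row]
          for row in dct]

def angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

angles = [angle(r, a) for r, a in zip(dct, approx)]
assert all(a < 23 for a in angles)               # every row stays close
assert all(sum(x * y for x, y in zip(approx[i], approx[j])) == 0
           for i in range(N) for j in range(i + 1, N))  # rows orthogonal
```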

Journal ArticleDOI
TL;DR: Hysteresis thresholding, guided by some morphological operations, has been employed to obtain the binary vessel image while excluding unwanted areas; a maximum accuracy of 95.65% and an average accuracy of 94.31% have been achieved.
Abstract: The development of computer-aided diagnosis systems has a great impact on early and accurate disease diagnosis. The segmentation of retinal blood vessels aids in identifying alterations in vessel structure and hence helps to diagnose many diseases such as diabetic retinopathy, glaucoma and hypertension, along with some cardiovascular diseases. In this research work, a method is presented for the segmentation of the retinal vessel structure from retinal fundus images. A 2D wavelet transform assisted, morphological-gradient-operation-based 'Contrast Limited Adaptive Histogram Equalization' technique has been introduced for preprocessing of the low-contrast fundus images. A morphological gray-level hit-or-miss transform with multiple structuring elements of varying orientation has been proposed for the separation of blood vessels from the background. Finally, hysteresis thresholding, guided by some morphological operations, has been employed to obtain the binary image while excluding unwanted areas. The proposed methodology has been tested on the DRIVE database, and a maximum accuracy of 95.65% and an average accuracy of 94.31% have been achieved.
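The hysteresis thresholding step can be sketched as a two-threshold flood fill; the threshold values and 4-connectivity below are illustrative choices. Pixels above the high threshold seed the result, and pixels above the low threshold survive only if connected to a seed, which keeps faint vessel segments attached to strong ones while discarding isolated noise.

```python
# Hysteresis thresholding in miniature (4-connected flood fill).
def hysteresis(img, lo, hi):
    rows, cols = len(img), len(img[0])
    keep = [[False] * cols for _ in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if img[r][c] >= hi]
    for r, c in stack:
        keep[r][c] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols \
               and not keep[rr][cc] and img[rr][cc] >= lo:
                keep[rr][cc] = True
                stack.append((rr, cc))
    return keep

img = [[0, 60, 200, 60, 0],
       [0,  0,   0,  0, 60]]   # trailing 60 has no strong neighbour
out = hysteresis(img, lo=50, hi=100)
assert out[0] == [False, True, True, True, False]
assert out[1] == [False, False, False, False, False]
```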

Journal ArticleDOI
TL;DR: A highly robust reversible image steganography model has been developed for secret information hiding; it outperforms other wavelet-transformation-based approaches in terms of high PSNR, embedding capacity and imperceptibility.
Abstract: The recent advancement in computing technologies and the resulting vision-based applications have given rise to a novel practice called telemedicine, which requires patient diagnosis images or allied information to recommend or even perform diagnosis remotely. However, to ensure accurate and optimal telemedicine, seamless and flawless biomedical information about the patient is required. On the contrary, medical data transmitted over an insecure channel often remain prone to manipulation or corruption by attackers. The existing cryptosystems alone are not sufficient to deal with these issues, and hence in this paper a highly robust reversible image steganography model has been developed for secret information hiding. Unlike traditional wavelet transform techniques, we incorporate the Discrete Ripplet Transformation technique for message embedding in the medical cover images. In addition, to ensure seamless communication over an insecure channel, a dual cryptosystem model containing the proposed steganography scheme and the RSA cryptosystem has been developed. One of the key novelties of the proposed research work is the use of an adaptive genetic algorithm for the optimal pixel adjustment process, which enriches data hiding capacity as well as imperceptibility. The performance assessment reveals that the proposed steganography model outperforms other wavelet-transformation-based approaches in terms of high PSNR, embedding capacity and imperceptibility.

Journal ArticleDOI
Jizhao Liu, Shusen Tang, Jing Lian, Yide Ma, Zhang Xinguo
TL;DR: A novel fourth-order chaotic system is proposed, accompanied by an analysis of Lyapunov exponents and bifurcations; correlation and differential attack analyses demonstrate that the scheme has strong resistance against statistical and differential attacks.
Abstract: Today, medical imaging suffers from serious issues such as malicious tampering and privacy leakage. Encryption is an effective way to protect these images from security threats. Chaos has been widely used in image encryption, and the majority of these algorithms are based on classical chaotic systems. By now, these systems are easy to analyze and predict, which is not sufficient for image encryption purposes. In this paper, a novel fourth-order chaotic system is proposed, accompanied by an analysis of Lyapunov exponents and bifurcations. Finally, the application of this system to medical image encryption is proposed. As this system can have six control parameters and four initial conditions, the key space is far greater than 5.1 × 218191, which is large enough to resist brute-force attack. Correlation analysis and differential attack analysis further demonstrate that this scheme has strong resistance against statistical attacks and differential attack.
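As a hedged stand-in for the paper's fourth-order system, the largest Lyapunov exponent of the much simpler logistic map can be estimated by averaging log|f'(x)| along an orbit; a positive exponent indicates chaos, which is the property such analyses establish before using a system for encryption. The seed, iteration counts and the logistic map itself are illustrative choices, not the paper's system.

```python
# Lyapunov exponent estimate for the logistic map x -> r*x*(1-x),
# whose derivative is f'(x) = r*(1 - 2x).
import math

def lyapunov_logistic(r, x0=0.4, n=5000, burn=500):
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:  # skip the transient before averaging
            acc += math.log(abs(r * (1 - 2 * x)) + 1e-15)
        x = r * x * (1 - x)
    return acc / n

assert lyapunov_logistic(4.0) > 0.6   # chaotic regime (theoretical value ln 2)
assert lyapunov_logistic(2.9) < 0     # stable fixed point: negative exponent
```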

Journal ArticleDOI
TL;DR: Data hiding and extraction are proposed for Audio Video Interleave videos: a Bitmap image file containing the secret information is embedded in a frame of the video by segmenting the bytes of the secret image and placing them in the video frame, providing a higher level of encryption.
Abstract: Sensitive data is exchanged frequently through wired or wireless communication channels that are vulnerable to unauthorized interception. Cryptography is a solution to this issue, but once the data is decrypted, its secrecy no longer exists. Apart from hiding data in an image, hiding can be extended to other digital media. In this work, data hiding and extraction are proposed for Audio Video Interleave videos: a Bitmap image file containing the secret information is embedded in a frame of the video by segmenting the bytes of the secret image and placing them in the video frame, providing a higher level of encryption. This method provides two-level encryption: to decipher the data, both the way in which the secret image is originally decomposed and the frame in which it is embedded must be known. The quality of the embedded secret image and the size of the video are not altered by the encryption of the secret data. The secret image may contain any multimedia data that can be further extracted and recognized.
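The byte-segmentation idea can be sketched with a simple stride placement; the layout below (and the stride value) is a hypothetical choice for illustration, since the paper's actual decomposition is part of its secret. Without knowing the stride and the chosen frame, the scattered bytes cannot be reassembled.

```python
# Scatter the secret bytes across a frame at a fixed stride, and recover
# them by reading the same positions back.
def embed(frame, secret, stride):
    out = bytearray(frame)
    for i, b in enumerate(secret):
        out[i * stride] = b          # place one secret byte per stride slot
    return bytes(out)

def extract(frame, length, stride):
    return bytes(frame[i * stride] for i in range(length))

frame = bytes(range(64))             # toy stand-in for a video frame
secret = b"PIX"
stego = embed(frame, secret, stride=7)
assert extract(stego, len(secret), 7) == secret
assert len(stego) == len(frame)      # the frame size is unchanged
```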

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm detects the fractures of the imaging logs successfully, and the classification has better precision compared with other proposed algorithms.
Abstract: In this paper, the main goal is to identify the sine-shaped fractures of reservoir rock automatically. Therefore, a five-step algorithm is applied to the imaging logs. The first step consists of extracting the features of the imaging log by applying Zernike moments. In the second step, the features are learned using sparse coding. In the third step, the imaging log is segmented using a self-organizing map neural network and the training dataset. In the fourth step, the fracture points are extracted by the Steger method. In the last step, to determine the sine parameters of the fractures, the Hough transform is applied to the image fracture points. The experimental results show that the proposed algorithm detects the fractures of the imaging logs successfully. Also, the precision of the proposed method in extracting the fracture pixels is high, and it has low sensitivity to noise in the imaging logs. The proposed algorithm has been applied to FMI imaging datasets, and the obtained results show that the classification has better precision compared with other proposed algorithms.

Journal ArticleDOI
TL;DR: An algorithm is proposed to obtain a global threshold value for a particular image, using a Differential Evolution algorithm embedded with the Otsu method and a trained neural network to find an optimal threshold value.
Abstract: In the past few decades, medical imaging and soft computing have shown symbolic growth in brain tumor segmentation. Research in medical imaging is becoming a quite popular field, particularly for magnetic resonance images of brain tumors, because of the tremendous need for efficient and effective techniques to evaluate large amounts of data. Image segmentation is considered one of the most crucial techniques for visualizing tissues in human beings. In brain tumor image segmentation performed manually by an expert, it is likely that errors are present. To automate image segmentation, we have proposed an algorithm to obtain a global thresholding value for a particular image. To find an optimal threshold value, we have used a Differential Evolution algorithm embedded with the Otsu method, and trained a neural network for future use. The proposed methodology successfully classifies the images for brain tumors. Results show its efficiency over other methods.
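The thresholding objective being optimized, Otsu's between-class variance, can be sketched directly; a brute-force scan over thresholds stands in below for the Differential Evolution search, which matters on full 256-level histograms where exhaustive evaluation is costlier.

```python
# Otsu's criterion: choose t maximizing the between-class variance
# w0*w1*(mu0 - mu1)^2 of the two classes split at t.
def between_class_variance(hist, t):
    total = sum(hist)
    w0 = sum(hist[:t]); w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * hist[i] for i in range(t)) / w0
    mu1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def otsu(hist):
    """Exhaustive scan standing in for the DE search over thresholds."""
    return max(range(1, len(hist)), key=lambda t: between_class_variance(hist, t))

# Two well-separated modes around levels 2 and 7
hist = [5, 20, 40, 10, 0, 0, 10, 40, 20, 5]
assert 3 <= otsu(hist) <= 6          # threshold lands between the modes
```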

Journal ArticleDOI
TL;DR: A new fusion algorithm is proposed that optimally combines spectral information from MS image and spatial information from the PAN image of the same scene to create a single comprehensive fused image.
Abstract: Image fusion plays a vital role in providing better visualization of remotely sensed image data. Most earth observation satellites have sensors that provide both high spatial resolution panchromatic (PAN) images and low resolution multispectral (MS) images. In this paper, we propose a new fusion algorithm that optimally combines spectral information from the MS image and spatial information from the PAN image of the same scene to create a single comprehensive fused image. As the performance of a fusion scheme relies on the choice of fusion rule, the proposed algorithm is based on a weighted averaging fusion rule that uses optimal weights obtained from the brain storm optimization (BSO) algorithm for the fusion of high-frequency and low-frequency coefficients obtained by applying the Curvelet transform to the source images. The objective function in BSO is formulated with the twin objectives of maximizing the entropy and minimizing the root mean square error. The fusion results are compared with existing fusion techniques, such as Brovey, principal component analysis, discrete wavelet transform, non-subsampled contourlet transform, and intensity-hue-saturation. From the experimental results and analysis, the proposed fusion algorithm gives better fusion performance in terms of subjective and objective measures than the traditional algorithms. As a benefit, the proposed fusion scheme preserves the spectral information of the MS image with increased spatial resolution and edge information.
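The weighted-averaging fusion rule and its twin-objective score can be sketched on toy vectors. The grid scan below stands in for the brain storm optimization, and the entropy binning and RMSE reference are simplifying assumptions rather than the paper's exact formulation.

```python
# Fused = w*A + (1-w)*B, with w chosen to maximize entropy minus an
# RMSE term (the twin objectives mentioned in the abstract).
import math

def entropy(vals, bins=8):
    hist = [0] * bins
    for v in vals:
        hist[min(bins - 1, int(v * bins))] += 1
    n = len(vals)
    return -sum(h / n * math.log2(h / n) for h in hist if h)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def fuse(A, B, w):
    return [w * x + (1 - w) * y for x, y in zip(A, B)]

A = [0.1, 0.9, 0.2, 0.8]       # toy "panchromatic" detail coefficients
B = [0.4, 0.5, 0.45, 0.55]     # toy "multispectral" base coefficients
best_w = max((i / 10 for i in range(11)),
             key=lambda w: entropy(fuse(A, B, w)) - rmse(fuse(A, B, w), B))
F = fuse(A, B, best_w)
assert 0.0 <= best_w <= 1.0 and len(F) == len(A)
```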

Journal ArticleDOI
TL;DR: An improved matching technique based on an enhanced CMFD pipeline via the k-means clustering technique, which enhances detection accuracy significantly and reduces processing time with LSH-based matching.
Abstract: The goal of copy-move forgery is to manipulate the semantics of an image. This can be done by cloning a region of an image and subsequently pasting it onto a different region within the same image. This paper proposes an improved matching technique based on an enhanced CMFD pipeline that uses the k-means clustering technique. By deploying k-means clustering to group the overlapping blocks, the matching step is carried out independently within each cluster, speeding up the matching process. In addition, clustering the feature vectors allows the matching process to identify matches accurately. To test the enhanced pipeline, it was combined with Zernike moments and locality sensitive hashing (LSH). The experimental results show that the proposed method can enhance detection accuracy significantly, and the proposed pipeline can reduce processing time with LSH-based matching.
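The cluster-then-match idea can be sketched with toy 1-D block features (the paper uses Zernike-moment vectors); grouping with a few Lloyd iterations of k-means means candidate matches are only searched within each cluster, which shrinks the number of block comparisons relative to exhaustive matching.

```python
# Minimal 1-D k-means (Lloyd iterations), then count within-cluster
# comparisons versus a full pairwise search.
def kmeans(xs, k=2, iters=10):
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]   # spread-out seeds
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

features = [0.1, 0.12, 0.11, 5.0, 5.02, 4.99]   # two copy-move candidates
clusters = kmeans(features)
within = sum(len(g) * (len(g) - 1) // 2 for g in clusters)
full = len(features) * (len(features) - 1) // 2
assert sorted(map(len, clusters)) == [3, 3]
assert within < full        # fewer comparisons than exhaustive matching
```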

Journal ArticleDOI
TL;DR: A fast-convergence trilinear decomposition approach, which uses the propagator method (PM) as the initialization of the angle estimation to speed up the convergence of the trilinear decomposition.
Abstract: In this paper, we investigate the problem of two-dimensional (2D) direction of arrival (DOA) estimation of multiple signals for generalized coprime planar arrays consisting of two rectangular uniform planar subarrays. We propose a fast-convergence trilinear decomposition approach, which uses the propagator method (PM) as the initialization of the angle estimation to speed up the convergence of the trilinear decomposition. The received signal of each subarray can be fitted into a trilinear model or parallel factor (PARAFAC) model, so that the trilinear alternating least squares algorithm can be used to estimate the angle information. Meanwhile, the necessary initialization of DOA estimates can be achieved via PM, which endows the proposed approach with fast convergence and subsequently results in low complexity. Specifically, we eliminate ambiguous estimates by utilizing the coprime property, and the true DOA estimates can be achieved by selecting the nearest ones among all DOA estimates. The proposed approach can obtain the same estimation performance as the conventional PARAFAC algorithm, but with a lower computational cost. Numerical simulation results are provided to validate the effectiveness and superiority of the proposed algorithm.

Journal ArticleDOI
TL;DR: The results indicate that IAR-MTD can effectively detect weak moving targets with constant radial velocity and is compatible with MTD radar systems.
Abstract: In radar detection, range migration of weak targets often occurs during long-time integration. To detect weak targets effectively, an improved axis rotation moving target detection (IAR-MTD) method is introduced and analysed in detail. IAR-MTD detects weak targets by compensating the linear part of the range migration via the axis rotation and coherently integrating the echoes via moving target detection (MTD). The realization of IAR-MTD is then derived. Furthermore, the coherent integration gain of IAR-MTD is analysed and shown to exceed that of traditional MTD, the Radon–Fourier transform (RFT) and the Keystone transform (KT). Subsequently, some suggestions are given to decrease the computational complexity of IAR-MTD. In addition, unambiguous Doppler estimation, the tolerance of acceleration, and multi-target detection with IAR-MTD are analysed respectively. Finally, numerical experiments are provided to show the performance of IAR-MTD in different conditions and verify its advantages over MTD, RFT and KT. The results indicate that IAR-MTD can effectively detect weak moving targets with constant radial velocity and that it is compatible with MTD radar systems.
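The core idea, compensating linear range walk before coherent (MTD) integration, can be demonstrated on synthetic data. In this toy sketch a per-pulse range-bin shift plays the role of the axis rotation and an FFT across pulses plays the role of MTD; the scene parameters are made up and none of the paper's derivations are reproduced:

```python
import numpy as np

def compensate_and_integrate(profiles, bins_per_pulse):
    """Undo an assumed linear range walk (the axis-rotation role), then
    integrate coherently across pulses with an FFT (classical MTD)."""
    K = profiles.shape[0]
    aligned = np.stack([np.roll(profiles[k], -round(k * bins_per_pulse))
                        for k in range(K)])
    return np.abs(np.fft.fft(aligned, axis=0))   # Doppler x range map

# Toy scene: target starts in range bin 5, walks one bin per pulse,
# and carries a constant normalized Doppler of 0.25.
K, R, fd = 32, 64, 0.25
profiles = np.zeros((K, R), complex)
for k in range(K):
    profiles[k, 5 + k] = np.exp(2j * np.pi * fd * k)

dmap = compensate_and_integrate(profiles, 1.0)   # walk compensated
raw = np.abs(np.fft.fft(profiles, axis=0)).max() # no compensation
```

With the walk compensated, all K pulses add coherently in a single Doppler-range cell (peak magnitude K = 32 at Doppler bin 8, range bin 5); without compensation the energy smears across range bins and no integration gain is obtained.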

Journal ArticleDOI
TL;DR: A mobile cloud based framework that detects and retrieves player statistics on a mobile phone during live cricket matches, making smartphones smarter by significantly reducing the execution burden and energy consumption of the device.
Abstract: Smartphones are increasingly popular due to their wide range of capabilities, such as Wi-Fi connectivity, video acquisition, and navigation. Some of these applications require large computational power, memory, and long battery life. Sports entertainment applications executed on smartphones are a future paradigm shift that will be enabled by mobile cloud computing environments. Mobile users often request multiple mobile services in workflows to fulfill their complex requirements. To investigate such issues, we develop a mobile cloud based framework that detects and retrieves player statistics on a mobile phone during live cricket matches. The proposed framework is divided into several services, and each service is executed either locally or on the cloud. Our approach considers the dependencies among the different services and aims to optimize the execution time and energy consumption of executing them. Due to the applied offloading strategy, the proposed framework makes smartphones smarter by significantly reducing their execution burden and energy consumption. Experimental results are promising and show the feasibility of deploying the proposed framework in several related applications using computer vision and machine learning techniques.
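A simplified version of the local-vs-cloud placement decision for a chain of dependent services can be written as a small dynamic program. The cost numbers, transfer model, and service chain below are hypothetical; the framework's actual scheduler and cost model are not reproduced:

```python
def best_offload_plan(local, cloud, up, down):
    """Chain of n dependent services. local[i]/cloud[i]: cost of running
    service i on the device / on the cloud. up/down: data-transfer cost
    when execution moves device->cloud / cloud->device. Returns the
    minimum total cost and a plan (0 = device, 1 = cloud) per service."""
    n = len(local)
    INF = float('inf')
    # cost[i][s]: cheapest way to finish services 0..i with i at site s
    cost = [[INF, INF] for _ in range(n)]
    cost[0] = [local[0], cloud[0] + up]      # cloud start pays the upload
    choice = [[None, None] for _ in range(n)]
    for i in range(1, n):
        for s in (0, 1):
            run = local[i] if s == 0 else cloud[i]
            for p in (0, 1):
                hop = 0 if p == s else (up if s == 1 else down)
                c = cost[i - 1][p] + hop + run
                if c < cost[i][s]:
                    cost[i][s], choice[i][s] = c, p
    # results must end up back on the device
    end = 0 if cost[n - 1][0] <= cost[n - 1][1] + down else 1
    total = cost[n - 1][end] + (down if end == 1 else 0)
    plan = [end]
    for i in range(n - 1, 0, -1):
        plan.append(choice[i][plan[-1]])
    plan.reverse()
    return total, plan
```

Because the state is only "where did the previous service run", the dependency between consecutive services (the data hand-off cost) is captured exactly, and the plan is optimal for a chain in O(n) time.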

Journal ArticleDOI
TL;DR: Fast design of two-dimensional FIR filters in the least $l_p$-norm sense is investigated and is shown to have a lower complexity than existing methods.
Abstract: Fast design of two-dimensional FIR filters in the least $l_p$-norm sense is investigated in this brief. The design problem is first formulated in a matrix form and then solved by a matrix-based iterative reweighted least squares algorithm. The proposed algorithm includes two loops: one for updating the weighting function and the other for solving the weighted least squares (WLS) subproblems. These WLS subproblems are solved using an efficient matrix-based WLS algorithm, which is an iterative procedure with its initial iterative matrix being the solution matrix in the last iteration, resulting in a considerable CPU-time saving. Through analysis, the new algorithm is shown to have a lower complexity than existing methods. Three design examples are provided to illustrate the high computational efficiency and design precision of the proposed algorithm.
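The two-loop IRLS idea (an outer loop updating the weights, an inner weighted-LS solve) can be illustrated on a generic overdetermined system. This is a plain vector sketch of least-$l_p$ fitting, not the paper's matrix-form 2D algorithm; the damping factor $1/(p-1)$ is a standard stabilizer for $p > 2$:

```python
import numpy as np

def irls_lp(A, b, p=4, iters=200, eps=1e-12):
    """min ||A x - b||_p by iteratively reweighted least squares: weight
    each equation by |residual|^(p-2), solve the weighted normal
    equations, and damp the update to keep the p > 2 iteration stable."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # l2 solution as the start
    lam = 1.0 / (p - 1)                        # damping factor
    for _ in range(iters):
        w = np.abs(A @ x - b) ** (p - 2) + eps # eps guards zero residuals
        Aw = A * w[:, None]                    # rows scaled by weights
        x_new = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        x = x + lam * (x_new - x)
    return x
```

On the toy problem of fitting one constant to $b = (0, 0, 1)$, the $l_2$ answer is $1/3$ while the $l_4$ answer is $1/(1 + 2^{1/3}) \approx 0.4425$; undamped reweighting oscillates on this example, which is why the partial update matters.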

Journal ArticleDOI
TL;DR: An integrated system combining multiple-input multiple-output radar with orthogonal frequency division multiplexing (OFDM) communication is designed, and high-resolution range and angle estimates are obtained even though the range and angle are coupled.
Abstract: To perform the integration of radar and communication in the waveform, we design an integrated system combining multiple-input multiple-output radar with orthogonal frequency division multiplexing (OFDM) communication. In this system, each antenna transmits the integrated waveform on a nonoverlapping sub-frequency band. The utilized waveform is a variation of the classical OFDM communication waveform. In order to sufficiently exploit the entire system bandwidth and array aperture, a joint time and space processing approach is proposed, and hence high-resolution range and angle estimates are obtained even though the range and angle are coupled. Moreover, the loss in processing gain and the Cramer–Rao bounds of the range and angle estimates based on the integrated waveform are derived, respectively. Theoretical analysis validates that the designed system is capable of implementing the radar and communication functions simultaneously. Finally, numerical results are presented to verify the effectiveness of the proposed approach.
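The radar side of such an integrated waveform is commonly processed by dividing out the known communication symbols per subcarrier and transforming to the delay domain. Below is a minimal single-antenna, range-only sketch of that classical OFDM-radar step; the paper's MIMO subband structure and joint time-space processing are not reproduced, and all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                         # number of subcarriers
tx = np.exp(2j * np.pi * rng.integers(0, 4, N) / 4)  # unit-modulus QPSK
delay_bin = 9                                  # target delay (range bin)

# Channel: each subcarrier n picks up the phase exp(-j2*pi*n*delay/N)
rx = tx * np.exp(-2j * np.pi * np.arange(N) * delay_bin / N)

profile = np.abs(np.fft.ifft(rx / tx))         # divide symbols out, IFFT
est = int(profile.argmax())                    # estimated range bin
```

Dividing by `tx` removes the data modulation regardless of the transmitted message, which is what makes the same waveform usable for both communication and ranging.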

Journal ArticleDOI
TL;DR: A new thresholding function has been proposed for despeckling of ultrasound images and it is observed that Symlet 8 outperforms the other wavelet filters.
Abstract: In the present work, a new thresholding function has been proposed for despeckling of ultrasound images. The main limitation of ultrasound images is the presence of speckle noise, which degrades image quality and hampers interpretation of the image. The proposed method was first tested on synthetic images in order to analyse its performance. The synthetic images were degraded by adding speckle noise with different degrees of noise variance (0.01–0.2). The proposed method was tested with orthogonal and biorthogonal wavelet filters, and it is observed that Symlet 8 outperforms the other wavelet filters. The value of the parameter β is varied from 1 to 100 and the optimal value giving the best results is selected. Comparisons with the existing exponential thresholding method, the universal thresholding method, the Wiener filter and sparse coding show that the proposed technique gives improved results. The method is also tested on liver ultrasound images.
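To give a flavour of how a shrinkage rule with a tuning constant β behaves, here is an illustrative thresholding function (the paper's exact function is not reproduced; this form and its parameterization are assumptions for demonstration): small β gives a gradual, soft-like shrinkage and large β approaches hard thresholding.

```python
import numpy as np

def beta_threshold(c, t, beta=10.0):
    """Shrink wavelet coefficients c with threshold t: coefficients with
    |c| <= t are zeroed; above t the coefficient is restored at a rate
    controlled by beta (large beta ~ hard, small beta ~ soft)."""
    excess = np.maximum(np.abs(c) - t, 0.0)
    shrink = 1.0 - np.exp(-beta * excess / t)
    return np.sign(c) * np.abs(c) * shrink
```

In a despeckling pipeline such a rule would be applied subband by subband to the wavelet coefficients of the log-transformed image, with `t` estimated from the noise variance.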

Journal ArticleDOI
TL;DR: This paper proposes a new nonconvex approach that better approximates the rank function via its Moreau envelope (MER), which has an explicit expression; the resulting matrix completion problem can be converted to an optimization problem with two variables.
Abstract: The problem of recovering a low-rank matrix from partial entries, known as low-rank matrix completion, has been extensively investigated in recent years. It can be viewed as a special case of the affine constrained rank minimization problem, which is NP-hard in general and is computationally hard to solve in practice. One widely studied approach is to replace the matrix rank function by its nuclear norm, which leads to the convex nuclear-norm minimization problem solved efficiently by many popular convex optimization algorithms. In this paper, we propose a new nonconvex approach to better approximate the rank function. The new approximation function is the Moreau envelope of the rank function (MER), which has an explicit expression. The new approximation problem of low-rank matrix completion based on MER can be converted to an optimization problem with two variables. We then adapt the proximal alternating minimization algorithm to solve it. The convergence (rate) of the proposed algorithm is proved and its accelerated version is also developed. Numerical experiments on completion of low-rank random matrices and standard image inpainting problems show that our algorithms perform better than some state-of-the-art methods.
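For orientation, the simplest fixed-rank baseline that the nonconvex completion literature builds on alternates between a best rank-r approximation (truncated SVD) and re-imposing the observed entries. This toy sketch is that baseline, not the MER algorithm from the paper:

```python
import numpy as np

def complete_rank_r(M, mask, r=1, iters=500):
    """Alternating projections for matrix completion: project onto the
    set of rank-r matrices via a truncated SVD, then restore the
    observed entries given by the boolean mask."""
    X = np.where(mask, M, 0.0)                  # missing entries start at 0
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]         # best rank-r approximation
        X[mask] = M[mask]                       # keep what was observed
    return X
```

On easy instances (a genuinely low-rank matrix with few missing entries) this iteration contracts to the exact completion; the nuclear-norm and MER formulations exist precisely to handle the harder regimes where such naive alternation stalls or needs a rank guess.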

Journal ArticleDOI
TL;DR: Experimental results show that the proposed parallel architecture dramatically reduces computation time and achieves lower bit rates than state-of-the-art methods.
Abstract: Filtering based compression methods have become a popular research topic in lossless compression of hyperspectral images. Among the filtering based methods, recursive least squares (RLS) based prediction provides better decorrelation performance. In this paper, two superpixel segmentation based RLS methods, namely SuperRLS and B-SuperRLS, are investigated for lossless compression of hyperspectral images. The proposed methods present a novel parallelization approach for RLS based prediction. In the first step of SuperRLS, superpixel segmentation is applied to the hyperspectral image. Afterwards, the image is partitioned into multiple small regions according to the superpixel boundaries. Each region is predicted with the RLS method in parallel, and the prediction residuals are encoded with an arithmetic coder. Additionally, the superpixel based prediction approach provides region-of-interest compression capability. B-SuperRLS, a bimodal version of SuperRLS, evaluates both spectral and spatio-spectral correlations for prediction. The performance of the proposed methods is exhaustively analysed in terms of the number of superpixels, the input vector length and the number of parallel nodes used in the prediction. Experimental results show that the proposed parallel architecture dramatically reduces computation time and achieves lower bit rates than state-of-the-art methods.
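The RLS predictor at the heart of such schemes can be sketched in its standard form: each target sample is predicted from a feature vector (in the paper, e.g. co-located pixels from previous bands), and only the residual would be passed to the arithmetic coder. Feature construction, the superpixel partitioning, and the coder itself are omitted here:

```python
import numpy as np

def rls_residuals(X, d, lam=0.99, delta=1e3):
    """Standard RLS with forgetting factor lam: predict each sample d[i]
    from the feature row X[i]; returns the prediction residuals (what an
    entropy coder would see) and the final weight vector."""
    n, m = X.shape
    w = np.zeros(m)
    P = np.eye(m) * delta                 # inverse correlation estimate
    res = np.empty(n)
    for i in range(n):
        u = X[i]
        res[i] = d[i] - w @ u             # a-priori prediction error
        k = P @ u / (lam + u @ P @ u)     # gain vector
        w += k * res[i]
        P = (P - np.outer(k, u @ P)) / lam
    return res, w
```

The parallelization idea in the paper is simply that this recursion is run independently inside each superpixel region, so the regions can be assigned to different nodes.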

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed anti-forensic methods provide superior results in terms of image visual quality and forensic undetectability as compared to the existing approaches, with slight increase in computational time.
Abstract: Median filtering has received considerable attention and popularity for image enhancement and anti-forensics. It can be utilized as an image denoising and smoothing tool to disguise the footprints of image processing operations such as image resampling and JPEG compression. A two-step median filtering anti-forensic framework is proposed in this paper to fool existing median filtering forensic detectors by hiding the median filtering artifacts. In the proposed framework, a variational deconvolution approach is initially employed to generate a median filtered forgery. This forgery is then further processed in the second step by solving a Total Variation (TV) based minimization problem to eradicate the median filtering artifacts left by the deconvolution operation. Moreover, the proposed TV-based minimization algorithms significantly reduce the unnatural (grainy) noise left by the variational deconvolution. Two types of TV-based minimization problems are suggested: the first relies on the TV energy of the image gradient and the second on the structure of a given image. The performance of the proposed scheme is evaluated by considering the worst-case and optimal scenarios. The experimental results based on UCID and BOSSBase dataset images demonstrate that the proposed anti-forensic methods provide superior results in terms of image visual quality and forensic undetectability as compared to the existing approaches, with a slight increase in computational time.
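The TV-based clean-up step can be illustrated with a plain gradient descent on a smoothed total-variation objective. This generic denoiser stands in for the paper's specific minimization problems; the smoothing constant, step size, and regularization weight are illustrative choices:

```python
import numpy as np

def tv_smooth(img, lam=0.15, step=0.05, iters=300, eps=0.1):
    """Gradient descent on 0.5*||x - img||^2 + lam*TV(x), with the TV
    term smoothed by eps (Charbonnier) so the objective is
    differentiable everywhere."""
    x = img.astype(float).copy()
    for _ in range(iters):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag                 # normalized gradient field
        div = np.zeros_like(x)                      # its divergence
        div += px
        div[:, 1:] -= px[:, :-1]
        div += py
        div[1:, :] -= py[:-1, :]
        x -= step * ((x - img) - lam * div)         # descend the objective
    return x
```

The fidelity term keeps the output close to the forged image while the TV term suppresses the grainy residue, which is exactly the trade-off the anti-forensic second step relies on.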

Journal ArticleDOI
TL;DR: Simulation results verify the performance of the proposed technique against various statistical attacks.
Abstract: An encryption algorithm based on sparse coding and compressive sensing is proposed. Sparse coding is used to find the sparse representation of images as a linear combination of atoms from an overcomplete learned dictionary. The overcomplete dictionary is learned using K-SVD, utilizing non-overlapping patches obtained from a set of images. Compressed sensing is used to sample data at a rate below the Nyquist rate. A Gaussian measurement matrix compressively samples the plain image. As these measurements are linear, chaos based permutation and substitution operations are performed to obtain the cipher image. Bit-level scrambling and block substitution are performed to confuse and diffuse the measurements. Simulation results verify the performance of the proposed technique against various statistical attacks.
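The measure-then-scramble pipeline can be sketched end to end. This sketch uses a seeded pseudo-random generator in place of the paper's chaotic maps, omits the K-SVD dictionary and the recovery side, and treats quantization only schematically:

```python
import numpy as np

def cs_encrypt(img, ratio=0.5, key=42):
    """Gaussian compressive measurements followed by a key-driven
    permutation (confusion) and an XOR keystream (diffusion)."""
    rng = np.random.default_rng(key)        # stand-in for a chaotic map
    x = img.ravel().astype(float)
    m = int(ratio * x.size)                 # sub-Nyquist measurement count
    Phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    y = Phi @ x                             # linear measurements
    q = np.clip(np.round(y / 8.0) + 128, 0, 255).astype(np.uint8)
    q = q[rng.permutation(m)]               # permutation (confusion)
    return q ^ rng.integers(0, 256, m, dtype=np.uint8)  # substitution
```

Because the measurements alone are linear in the plaintext, the permutation and keystream stages are what break the linearity an attacker could otherwise exploit, which is the point the abstract makes.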

Journal ArticleDOI
TL;DR: The comparative analysis shows that the Jaya algorithm performs better than PSO under most types of attacks at higher magnitudes, and identically at lower attack magnitudes.
Abstract: Nowadays copyright protection is mandatory in the field of image processing to prevent the illegitimate utilization and imitation of digital images. Digital image watermarking is one of the most reliable methods for protecting data against illegal use. In this paper, a singular value decomposition based digital image watermarking scheme is proposed in the complex wavelet transform (CWT) domain using intelligence algorithms, namely particle swarm optimization (PSO) and the recently proposed Jaya algorithm. The watermark image is embedded into a high-frequency CWT subband of the cover image. During watermark embedding and extraction, the Jaya and PSO optimization algorithms are applied to improve robustness and imperceptibility by assessing the fitness function. The perceptual quality of the watermarked image and the robustness of the extracted watermark are verified under filtering, rotation, scaling, Gaussian noise and JPEG compression attacks. The comparative analysis shows that the Jaya algorithm performs better than PSO under most types of attacks at higher magnitudes, and identically at lower attack magnitudes. Moreover, over a variety of cover images, the elapsed time and fitness function values obtained with the Jaya algorithm are also better than those of PSO.
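For reference, the parameter-free Jaya update that distinguishes it from PSO can be written compactly. This is the generic algorithm applied to a toy objective, not the watermarking fitness function from the paper:

```python
import numpy as np

def jaya(f, bounds, pop=20, iters=100, seed=0):
    """Jaya: every candidate moves toward the current best solution and
    away from the current worst; unlike PSO there are no algorithm-
    specific tuning coefficients (inertia, c1, c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, (pop, lo.size))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        best, worst = X[F.argmin()], X[F.argmax()]
        r1, r2 = rng.random((2, pop, lo.size))
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                     lo, hi)
        Fn = np.array([f(x) for x in Xn])
        improved = Fn < F                   # greedy acceptance
        X[improved], F[improved] = Xn[improved], Fn[improved]
    return X[F.argmin()], F.min()
```

In the watermarking context, `f` would score a candidate embedding strength by combining imperceptibility and robustness metrics; the absence of tuning coefficients is the practical argument for Jaya over PSO made in the abstract.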