
Showing papers in "Journal of Signal and Information Processing in 2015"


Journal ArticleDOI
TL;DR: The CS-based approach shows superior convergence and fitness values compared to PSO, as CS converges faster, which demonstrates the efficacy of the CS-based technique, and Image Quality Analysis (IQA) confirms the robustness of the proposed enhancement technique.
Abstract: Medical image enhancement is an essential process for superior disease diagnosis as well as for accurate detection of pathological lesions. Computed Tomography (CT) is considered a vital medical imaging modality for evaluating numerous diseases such as tumors and vascular lesions. However, speckle noise corrupts CT images and makes clinical data analysis ambiguous. Therefore, for accurate diagnosis, medical image enhancement is a must for noise removal and sharp/clear images. In this work, a medical image enhancement algorithm is proposed using the log transform in an optimization framework. To achieve optimization, a well-known meta-heuristic algorithm, namely the Cuckoo Search (CS) algorithm, is used to determine the optimal parameter settings for the log transform. The performance of the proposed technique is studied on a low-contrast CT image dataset. The results clearly show that the CS-based approach achieves superior convergence and fitness values compared to PSO, as CS converges faster, which demonstrates the efficacy of the CS-based technique. Finally, Image Quality Analysis (IQA) confirms the robustness of the proposed enhancement technique.
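A minimal Python sketch of the idea: a parameterized log transform whose parameters a metaheuristic such as Cuckoo Search would tune against a fitness function. The parameterization (a, b) and the entropy-based fitness below are illustrative assumptions; the abstract does not state the paper's exact choices.

import numpy as np

def log_enhance(img, a, b):
    """Parameterized log transform; img is assumed grayscale, scaled to [0, 1].
    a scales the output, b controls the strength of the logarithmic compression.
    (Illustrative form only; not the paper's exact parameterization.)"""
    return a * np.log1p(b * img) / np.log1p(b)

def fitness(img):
    """Hypothetical fitness: entropy of the enhanced image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# A metaheuristic such as Cuckoo Search would search over (a, b) to maximize
# fitness(log_enhance(ct_slice, a, b)); here we simply evaluate one candidate.
ct_slice = np.random.rand(64, 64)   # stand-in for a low-contrast CT slice
print(fitness(log_enhance(ct_slice, a=1.0, b=10.0)))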

75 citations


Journal ArticleDOI
TL;DR: The proposed neural network study is based on solving speech recognition tasks, detecting signals using angular modulation, and detecting modulation techniques.
Abstract: Speech recognition, or speech to text, includes capturing and digitizing the sound waves, transforming them into basic linguistic units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure the correct spelling of words that sound the same. Approach: We study the possibility of designing a software system using neural networks, one of the techniques of artificial intelligence, such that the system is able to distinguish the sound signals of irregular users. Fixed weights are first trained on these patterns, and the system then outputs a match for each of these patterns at high speed. The proposed neural network study is based on solving speech recognition tasks, detecting signals using angular modulation, and detecting modulation techniques.

23 citations


Journal ArticleDOI
TL;DR: This paper proposes a 1-D template matching algorithm as an alternative to 2-D full search block matching algorithms; the approach is robust in detecting the target object under illumination changes in the template and when Gaussian noise is added to the source image.
Abstract: Template matching is a fundamental problem in pattern recognition, with wide applications, especially in industrial inspection. In this paper, we propose a 1-D template matching algorithm which is an alternative to 2-D full search block matching algorithms. Our approach consists of three steps. In the first step, the images are converted from 2-D into 1-D by summing up the intensity values of the image in two directions, horizontal and vertical. In the second step, template matching is performed among the 1-D vectors using the sum of squared differences as the similarity function. Finally, the decision is taken based on the value of the similarity function. Transforming the template image and the sub-images of the source image from 2-D grey-level information into 1-D information vectors reduces the dimensionality of the data and accelerates the computations. Experimental results show that the proposed approach is computationally faster than, and performs better than, three basic template matching methods. Moreover, our approach is robust in detecting the target object under illumination changes in the template and when Gaussian noise is added to the source image.
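A minimal Python sketch of the three steps, assuming grayscale images and an exhaustive search over window positions; the function names are illustrative.

import numpy as np

def projections(block):
    """Collapse a 2-D block to 1-D by summing intensities along rows and columns."""
    return np.concatenate([block.sum(axis=0), block.sum(axis=1)])

def match_1d(source, template):
    """Slide the template over the source and return the best top-left corner,
    using the sum of squared differences (SSD) between 1-D projection vectors."""
    th, tw = template.shape
    t_vec = projections(template)
    best_ssd, best_pos = np.inf, (0, 0)
    for y in range(source.shape[0] - th + 1):
        for x in range(source.shape[1] - tw + 1):
            s_vec = projections(source[y:y + th, x:x + tw])
            ssd = np.sum((s_vec - t_vec) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

src = np.random.rand(64, 64)
tpl = src[20:36, 10:26].copy()
print(match_1d(src, tpl))   # expected: (20, 10)

In practice the per-window projections would presumably be obtained from precomputed cumulative row and column sums of the source image rather than recomputed inside the loop, as this naive version does; the reduction from 2-D blocks to 1-D vectors is what accelerates the matching relative to 2-D full search.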

18 citations


Journal ArticleDOI
TL;DR: The results confirm that while the proposed DPCM-DWT-Huffman approach enhances the CR, it does not deteriorate the other quantitative performance measures in comparison with the DWT-Huffman, the DPCM-Huffman and the Huffman algorithms.
Abstract: This paper presents a medical image compression approach. In this approach, the image is first preprocessed by a Differential Pulse Code Modulator (DPCM); second, the output of the DPCM is wavelet transformed; and finally, Huffman encoding is applied to the resulting coefficients. Therefore, this approach theoretically provides threefold compression. Simulation results are presented to compare the performance of the proposed (DPCM-DWT-Huffman) approach with the performances of Huffman encoding incorporating DPCM (DPCM-Huffman), the DWT-Huffman and the Huffman encoding alone. Several quantitative indexes are computed to measure the performance of the four algorithms. The results show that the DPCM-DWT-Huffman, the DWT-Huffman, the DPCM-Huffman and the Huffman algorithms provide compression ratios (CR) of 6.4837, 4.32, 2.2751 and 1.235, respectively. The results also confirm that while the proposed DPCM-DWT-Huffman approach enhances the CR, it does not deteriorate the other quantitative performance measures in comparison with the DWT-Huffman, the DPCM-Huffman and the Huffman algorithms.
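A rough Python sketch of the threefold pipeline, assuming an 8-bit grayscale image, row-wise DPCM, a Haar wavelet, and plain rounding as the quantizer (none of which are specified in the abstract); it estimates the resulting bit rate from the Huffman code lengths rather than writing an actual bitstream, and uses a random image only as a stand-in.

import heapq
from collections import Counter

import numpy as np
import pywt   # PyWavelets

def dpcm(img):
    """Row-wise DPCM: keep the first pixel of each row, then store horizontal differences."""
    d = img.astype(np.int32).copy()
    d[:, 1:] = img[:, 1:].astype(np.int32) - img[:, :-1].astype(np.int32)
    return d

def huffman_bits_per_symbol(symbols):
    """Average Huffman code length (bits/symbol) of a symbol stream."""
    counts = Counter(symbols)
    heap = [(c, i, [s]) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in counts}
    while len(heap) > 1:
        c1, _, g1 = heapq.heappop(heap)
        c2, _, g2 = heapq.heappop(heap)
        for s in g1 + g2:                     # every merge pushes the group one level deeper
            depth[s] += 1
        heapq.heappush(heap, (c1 + c2, len(depth) + len(heap), g1 + g2))
    total = sum(counts.values())
    return sum(counts[s] * depth[s] for s in counts) / total

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a medical image
coeffs = pywt.wavedec2(dpcm(img), 'haar', level=2)           # DPCM output -> 2-D DWT
arr, _ = pywt.coeffs_to_array(coeffs)
q = np.round(arr).astype(np.int32).ravel()                   # crude quantization step
print("estimated bits/pixel:", huffman_bits_per_symbol(q.tolist()))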

17 citations


Journal ArticleDOI
TL;DR: Spectral Power Densities within the Quantitative Electroencephalographic Profiles of 41 men and women displayed repeated transient coherence with the first three modes of the Schumann Resonance in real time, consistent with the congruence of the frequency, magnetic field intensity, voltage gradient, and phase shifts that are shared by the human brain and the earth-ionospheric spherical wave guide.
Abstract: Spectral Power Densities (SPD) within the Quantitative Electroencephalographic (QEEG) Profiles of 41 men and women displayed repeated transient coherence with the first three modes (7 - 8 Hz, 13 - 14 Hz, and 19 - 20 Hz) of the Schumann Resonance in real time. The coherence episodes lasted about 300 ms and occurred about twice per minute. Topographical map clusters indicated that the domain of maximum coherence was within the right caudal hemisphere near the parahippocampal gyrus. These clusters, associated with shifts of about 2 μV, became stable about 35 to 45 ms after the onset of the synchronizing event. During the first 10 to 20 ms, the isoelectric lines shifted from clockwise to counterclockwise rotation. The results are consistent with the congruence of the frequency, magnetic field intensity, voltage gradient, and phase shifts that are shared by the human brain and the earth-ionospheric spherical waveguide. Calculations indicated that under certain conditions interactive information processing might occur for brief periods. Natural and technology-based variables affecting the Schumann parameters might be reflected in human brain activity, including modifications of cognition and dream-related memory consolidation.

14 citations


Journal ArticleDOI
TL;DR: This paper aims at finding the most appropriate nonparametric FFT-based spectral estimation technique for estimating a reliable dominant frequency for atrial fibrillation (Afib) detection, comparing the Bartlett method with a Hanning window against the Welch method.
Abstract: Atrial fibrillation (Afib) is associated with heart failure, stroke, and high mortality rates. In frequency domain analysis, a prerequisite for Afib detection has been the estimation of a reliable dominant frequency (DF) of atrial signals via different spectral estimation techniques. The DF further characterizes Afib and helps in its treatment. This paper aims at finding the most appropriate nonparametric FFT-based spectral estimation technique to estimate a reliable DF for Afib detection. In this work, real-time intra-atrial electrograms have been acquired and pre-processed for frequency analysis. The DF is estimated via the Bartlett method with a Hanning window and via the Welch method. The regularity index (RI), a parameter that ensures the reliability of the DF, is calculated using the Simpson 3/8 and Trapezoidal rules. The best method is declared based upon the accuracy of Afib detection using the reliable DF. On comparison, the Welch method is found to be more appropriate for estimating a reliable DF for Afib detection, with 98% accuracy.
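A small Python sketch of Welch-based DF and RI estimation on a synthetic electrogram. The sampling rate, the 3-15 Hz search band, and the +/-0.75 Hz band around the DF used for the regularity index are illustrative assumptions, not values taken from the paper.

import numpy as np
from scipy.signal import welch

fs = 1000.0                               # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
egm = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.randn(t.size)   # stand-in atrial electrogram

# Welch PSD estimate with a Hann window
f, pxx = welch(egm, fs=fs, window='hann', nperseg=2048)

# Dominant frequency: PSD peak restricted to an assumed physiological band
band = (f >= 3.0) & (f <= 15.0)
df = f[band][np.argmax(pxx[band])]

# Regularity index: power near the DF divided by total band power, both
# integrated with the trapezoidal rule (Simpson's rule could be used instead)
near = (f >= df - 0.75) & (f <= df + 0.75)
ri = np.trapz(pxx[near], f[near]) / np.trapz(pxx[band], f[band])
print(f"DF = {df:.2f} Hz, RI = {ri:.2f}")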

8 citations


Journal ArticleDOI
TL;DR: The image compression performance of ITT and ICT, using both variable and fixed quantization, is compared for a variety of images, and the cases suitable for ITT-based image compression employing variable quantization are identified.
Abstract: In the field of image and data compression, new approaches are always being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, there is no single perfect technique that can offer both the maximum compression possible and the best reconstruction quality for any type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the options available. For example, in the field of video compression, the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. There exist transforms, like the discrete Tchebichef transform (DTT), which are also suitable but remain largely unexploited. This work aims to bridge this gap and examine cases where the DTT could be an alternative compression transform to the DCT, based on various image quality parameters. A multiplier-free fast implementation of the integer DTT (ITT) of size 8 × 8 is also studied for its low computational complexity. Due to the uneven spread of data across images, some areas might have intricate detail, whereas others might be rather plain. This prompts the use of a compression method that can be adapted according to the amount of detail. So, instead of fixed quantization, this paper employs quantization that varies depending on the characteristics of the image block. This implementation is free from additional computational or transmission overhead. The image compression performance of ITT and ICT, using both variable and fixed quantization, is compared for a variety of images, and the cases suitable for ITT-based image compression employing variable quantization are identified.

8 citations


Journal ArticleDOI
TL;DR: Calculations of the energy available per cell and per volume of the quantity of reactants injected into the local space from the intensity of the changing velocity toroidal magnetic field support previous measurements and derivations that the units of information transposition may involve discrete quantities that represent equivalents of photons, electrons and protons.
Abstract: In multiple experiments, plates of melanoma cells separated by either 3 m or 1.7 km were placed in the centers of toroids. A specific protocol of changing-angular-velocity, pulsed magnetic fields, which has been shown to produce excess correlations in photon durations and shifts in proton concentration (pH) in spring water, was generated around both plates of cells. Serial injections of 50 μL of standard concentrations of hydrogen peroxide into the “local” plates of cells during the 12 min of field activation produced conspicuous cell death (reduction of viable cells by about 50%) with comparable diminishments of cell numbers in the non-local plates of cells within 24 hr, but only if both loci, separated by either 3 m or 1.7 km, had shared the “excess correlation” magnetic field sequence. The non-local effect did not occur if the magnetic fields had not been present. Higher or lower concentrations of peroxide, or concentrations that eliminated all of the cells or very few cells in the local dishes, were associated with no significant diminishment of non-local cell growth. The data indicate that there must be a critical number of cells remaining viable following the local chemical reaction for the excess correlation to be manifested in the non-local cells. We suggest that this specific spatial-temporal pattern of fields generated within the paired toroidal geometries promotes the transposition of virtual chemical reactions as an information field. Calculations of the energy available per cell and per volume of the quantity of reactants injected into the local space, from the intensity of the changing-velocity toroidal magnetic field, support previous measurements and derivations that the units of information transposition may involve discrete quantities that represent equivalents of photons, electrons and protons.

6 citations


Journal ArticleDOI
TL;DR: This research is the first to propose the usage of the Unscented Kalman Filters in the optimization of the Bluetooth System receivers in the presence of additive white Gaussian noise (AWGN), as well as interferences.
Abstract: This paper presents a novel and cost effective method to be used in the optimization of the Gaussian Frequency Shift Keying (GFSK) at the receiver of the Bluetooth communication system. The proposed method enhances the performance of the noncoherent demodulation schemes by improving the Bit Error Rate (BER) and Frame Error Rate (FER) outcomes. Linear, Extended, and Unscented Kalman Filters are utilized in this technique. A simulation model, using Simulink, has been created to simulate the Bluetooth voice transmission system with the integrated filters. Results have shown improvements in the BER and FER, and that the Unscented Kalman Filters (UKF) have shown superior performance in comparison to the linear Kalman Filter (KF) and the Extended Kalman Filter (EKF). To the best of our knowledge, this research is the first to propose the usage of the UKF in the optimization of the Bluetooth System receivers in the presence of additive white Gaussian noise (AWGN), as well as interferences.

5 citations


Journal ArticleDOI
TL;DR: An efficient P wave detection method for the electrocardiogram (ECG) is developed using the local entropy criterion (EC) and wavelet transform (WT) modulus maxima, and validated using ECG recordings with a wide variety of P-wave morphologies from the MIT-BIH Arrhythmia and QT databases.
Abstract: The objective of this paper is to develop an efficient P wave detection method for the electrocardiogram (ECG) using the local entropy criterion (EC) and wavelet transform (WT) modulus maxima. The detection of the P wave relates to the diagnosis of many heart diseases, and it is also a difficult point in ECG signal detection. Determining the position of a P-wave is complicated due to its low amplitude and the ambiguous and changing form of the complex. In a first step, QRS complexes are detected using the Pan-Tompkins method. Then, we look for the best position of the analysis window and the most appropriate width for the P wave. Finally, the P wave peaks, as well as their onsets and offsets, are determined. The method has been validated using ECG recordings with a wide variety of P-wave morphologies from the MIT-BIH Arrhythmia and QT databases. The method obtains a sensitivity of 99.87% and a positive predictivity of 98.04% over the MIT-BIH Arrhythmia database, while for the QT database, sensitivity and predictivity over 99.8% are attained.

5 citations


Journal ArticleDOI
TL;DR: In this paper, a new time-dependent model for solving total variation (TV) minimization problem in image denoising is proposed, which is a constrained optimization type of numerical algorithm for removing noise from images.
Abstract: In this paper, we propose a new time-dependent model for solving the total variation (TV) minimization problem in image denoising. The main idea is to apply a priori smoothness on the solution image. This is a constrained-optimization type of numerical algorithm for removing noise from images. The constraints are imposed using Lagrange multipliers, and the solution is obtained using the gradient projection method. 1D and 2D numerical experimental results obtained by explicit numerical schemes are discussed.
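For flavor, a small explicit time-marching TV scheme in Python. This is the plain unconstrained TV flow with a fixed fidelity weight, not the paper's constrained Lagrange-multiplier/gradient-projection formulation; the step size, fidelity weight, iteration count, and periodic boundary handling (np.roll) are illustrative choices.

import numpy as np

def tv_denoise(f, lam=1.0, dt=0.01, n_iter=200, eps=1e-2):
    """Explicit time-marching for u_t = div( grad(u) / |grad(u)| ) - lam * (u - f)."""
    u = f.copy()
    for _ in range(n_iter):
        # forward differences (periodic boundary handling via np.roll, for brevity)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)        # eps avoids division by zero
        px, py = ux / mag, uy / mag
        # backward differences of the normalized gradient give its divergence
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div - lam * (u - f))
    return u

clean = np.zeros((64, 64)); clean[:, 32:] = 1.0        # step edge test image
noisy = clean + 0.1 * np.random.randn(64, 64)
print(np.std(noisy - clean), np.std(tv_denoise(noisy) - clean))   # error should shrink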

Journal ArticleDOI
TL;DR: A hybrid disparity generation algorithm which combines census-based and segmentation-based approaches to address standard problems such as occlusions, repetitive patterns, textureless regions, perspective distortion, specular reflection and noise.
Abstract: Disparity estimation is an ill-posed problem in computer vision. It has been explored comprehensively due to its usefulness in many areas such as 3D scene reconstruction, robot navigation, parts inspection, virtual reality and image-based rendering. In this paper, we propose a hybrid disparity generation algorithm which combines census-based and segmentation-based approaches. The census transform does not give good results in textureless areas but is suitable for highly textured regions, whereas segment-based stereo matching techniques give good results in textureless regions. Coarse disparities obtained from the census transform are combined with the region information extracted by the mean shift segmentation method, so that region matching can be applied using an affine transformation. The affine transformation is used to remove noise from each segment. The mean shift segmentation technique creates more than one segment for the same object, resulting in a non-smooth disparity. Region merging is applied to obtain a refined, smooth disparity map. Finally, multilateral filtering is applied to the estimated disparity map to preserve information and to smooth the disparity map. The proposed algorithm generates good results compared to the classic census transform. It addresses standard problems such as occlusions, repetitive patterns, textureless regions, perspective distortion, specular reflection and noise. Experiments are performed on the Middlebury stereo test bed, and the results demonstrate that the proposed algorithm achieves high accuracy, efficiency and robustness.
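To make the census part concrete, here is a minimal Python sketch of a census transform and a winner-takes-all disparity from Hamming distances, roughly the "coarse disparities" stage described above; the window size, disparity range, and synthetic test pair are illustrative assumptions, and the segmentation, affine-fit, merging and filtering stages are not shown.

import numpy as np

def census_transform(img, r=2):
    """Census transform with a (2r+1)x(2r+1) window: each pixel is described by
    a bit vector marking which neighbours are darker than the centre."""
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)            # h x w x (window size - 1)

def census_disparity(left, right, max_disp=16):
    """Coarse winner-takes-all disparity from Hamming distances of census codes."""
    cl, cr = census_transform(left), census_transform(right)
    h, w, _ = cl.shape
    cost = np.full((h, w, max_disp), np.iinfo(np.int32).max, dtype=np.int32)
    for d in range(max_disp):
        # left pixel x is compared against right pixel x - d
        cost[:, d:, d] = np.sum(cl[:, d:] != cr[:, :w - d], axis=-1)
    return np.argmin(cost, axis=-1)

left = np.random.rand(32, 48)
right = np.roll(left, -4, axis=1)             # synthetic pair with a 4-pixel shift
print(np.bincount(census_disparity(left, right).ravel()).argmax())   # expect 4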

Journal ArticleDOI
TL;DR: In this paper, a parametric modelling of speech contaminated by additive white Gaussian noise (AWGN), assuming that the noise variance can be estimated, is presented; it is shown that by combining a suitable noise variance estimator with an efficient iterative scheme, a significant improvement in modelling performance can be achieved.
Abstract: In estimating the linear prediction coefficients for an autoregressive spectral model, the concept of using the Yule-Walker equations is often invoked. In case of additive white Gaussian noise (AWGN), a typical parameter compensation method involves using a minimal set of Yule-Walker equation evaluations and removing a noise variance estimate from the principal diagonal of the autocorrelation matrix. Due to a potential over-subtraction of the noise variance, however, this method may not retain the symmetric Toeplitz structure of the autocorrelation matrix and thereby may not guarantee a positive-definite matrix estimate. As a result, a significant decrease in estimation performance may occur. To counteract this problem, a parametric modelling of speech contaminated by AWGN, assuming that the noise variance can be estimated, is herein presented. It is shown that by combining a suitable noise variance estimator with an efficient iterative scheme, a significant improvement in modelling performance can be achieved. The noise variance is estimated from the least squares analysis of an overdetermined set of p lower-order Yule-Walker equations. Simulation results indicate that the proposed method provides better parameter estimates in comparison to the standard Least Mean Squares (LMS) technique which uses a minimal set of evaluations for determining the spectral parameters.
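A short Python sketch of the basic compensation step the abstract refers to: subtract a noise-variance estimate from the zero-lag autocorrelation before solving the Yule-Walker equations. Here the noise variance is simply assumed known; the paper's contribution (estimating it from an overdetermined set of lower-order Yule-Walker equations and iterating) is not reproduced.

import numpy as np
from scipy.linalg import solve_toeplitz

def autocorr(x, maxlag):
    """Biased sample autocorrelation r[0..maxlag]."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(maxlag + 1)])

def ar_noise_compensated(y, p, noise_var):
    """AR(p) estimate from noisy observations: remove the noise variance from the
    zero-lag autocorrelation, then solve the Yule-Walker equations."""
    r = autocorr(y, p)
    r[0] -= noise_var          # over-subtracting here is what can destroy positive definiteness
    return solve_toeplitz(r[:p], r[1:p + 1])   # a[k] in x[n] = sum_k a[k] x[n-k] + e[n]

# AR(2) process observed in additive white Gaussian noise
rng = np.random.default_rng(0)
n = 20000
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.5 * x[i - 1] - 0.7 * x[i - 2] + rng.standard_normal()
noise_var = 0.5
y = x + np.sqrt(noise_var) * rng.standard_normal(n)
print(ar_noise_compensated(y, 2, noise_var))   # close to the true [1.5, -0.7]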

Journal ArticleDOI
TL;DR: The receiver employs a suppression filter (SF) to mitigate the effect of narrow-band jammer interference and diversity techniques to reduce multiple access interference, and the performance of the system is compared with that of a Sinusoidal (Sin) based MC/MCD CDMA system.
Abstract: In the Wavelet Packets Based Multicarrier Multicode CDMA system, the multicode (MCD) part ensures transmission at high speed and flexible data rates, the multicarrier (MC) part ensures the flexibility of handling multiple data rates, and the wavelet packets modulation technique contributes to the mitigation of interference problems. The CDMA system can suppress a given amount of interference. In this paper, the receiver employs a suppression filter (SF) to mitigate the effect of narrow-band jammer interference and diversity techniques to reduce multiple access interference. The framework for the system and the performance evaluation are presented in terms of bit error rate (BER) over a Nakagami fading channel. We also investigate how the performance is influenced by various parameters, such as the number of taps of the SF, the ratio of the narrow-band interference bandwidth to the spread-spectrum bandwidth, the diversity order, the fading parameter and so on. Finally, the performance of the system is compared with that of a Sinusoidal (Sin) based MC/MCD CDMA system.

Journal ArticleDOI
TL;DR: In this article, the generalized uncertainty principle of LCT for concentrated data in limited supports was investigated and the discrete generalized uncertainty relation, whose bounds are related to LCT parameters and data lengths, was derived in theory.
Abstract: The linear canonical transform (LCT) is widely used in physical optics, mathematics and information processing. This paper investigates the generalized uncertainty principles of the LCT for concentrated data in limited supports, which play an important role in physics. The discrete generalized uncertainty relation, whose bounds are related to the LCT parameters and the data lengths, is derived in theory. The uncertainty principle discloses that data in LCT domains may have much higher concentration than in traditional domains.

Journal ArticleDOI
TL;DR: The analysis of the performance of the proposed modulator on the above-mentioned number systems indicates the superiority of these number systems over the binary number system.
Abstract: This paper presents a comparative study of the performance of arithmetic units based on different number systems, such as the Residue Number System (RNS), the Double Base Number System (DBNS), the Triple Base Number System (TBNS) and the Mixed Number System (MNS), for DSP applications. The performance analysis is carried out in terms of hardware utilization, timing complexity and efficiency. The arithmetic units based on these number systems were employed in designing various modulation schemes, such as a Binary Frequency Shift Keying (BFSK) modulator/demodulator. The analysis of the performance of the proposed modulator on the above-mentioned number systems indicates the superiority of these number systems over the binary number system.
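As a reminder of what the RNS part of the comparison rests on, here is a tiny Python sketch of residue arithmetic: numbers are held as residues modulo pairwise coprime moduli, additions are carry-free per channel, and the Chinese Remainder Theorem converts back. The moduli set (7, 8, 9) is just an example; the paper's hardware designs and the DBNS/TBNS/MNS representations are not reproduced here.

from math import prod

def to_rns(x, moduli):
    """Represent an integer by its residues with respect to pairwise coprime moduli."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli):
    """Addition is carried out independently (carry-free) in each residue channel."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(residues, moduli):
    """Chinese Remainder Theorem reconstruction back to a weighted (binary) integer."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
    return x % M

moduli = (7, 8, 9)                      # dynamic range M = 504
a, b = 123, 250
s = rns_add(to_rns(a, moduli), to_rns(b, moduli), moduli)
print(from_rns(s, moduli), a + b)       # both print 373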

Journal ArticleDOI
TL;DR: In this article, a closed-form expression for the difference in the residual ISI obtained by blind adaptive equalizers with biased input signals compared with the non-biased case is presented.
Abstract: Recently, closed-form approximated expressions were obtained for the residual Inter Symbol Interference (ISI) achieved by blind adaptive equalizers for the biased as well as for the non-biased input case in a noisy environment. However, up to now it has been unclear under what conditions improved equalization performance, from the residual ISI point of view, is obtained with the non-biased case compared with the biased version. In this paper, we present, for the real and two independent quadrature carrier case, a closed-form approximated expression for the difference in the residual ISI obtained by blind adaptive equalizers with biased input signals compared with the non-biased case. Based on this expression, we show under what conditions improved equalization performance is obtained, from the residual ISI point of view, for the non-biased case compared with the biased version.

Journal ArticleDOI
Monika Pinchas1
TL;DR: In this paper, a closed-form expression for the deviation from the mean of the arithmetic average (sample mean) of the real part of consecutive convolutional noises is derived for a pre-given probability that these events may occur.
Abstract: Due to non-ideal coefficients of the adaptive equalizer used in the system, a convolutional noise arises at the output of the deconvolutional process in addition to the source input. A higher convolutional noise may make the recovering process of the source signal more difficult or in other cases even impossible. In this paper we deal with the fluctuations of the arithmetic average (sample mean) of the real part of consecutive convolutional noises which deviate from the mean of order higher than the typical fluctuations. Typical fluctuations are those fluctuations that fluctuate near the mean, while the other fluctuations that deviate from the mean of order higher than the typical ones are considered as rare events. Via the large deviation theory, we obtain a closed-form approximated expression for the amount of deviation from the mean of those fluctuations considered as rare events as a function of the system’s parameters (step-size parameter, equalizer’s tap length, SNR, input signal statistics, characteristics of the chosen equalizer and channel power), for a pre-given probability that these events may occur.

Journal ArticleDOI
TL;DR: This paper presents a design of a data processing circuit for receiving digital signals from front end-electronic board chips of a specific nuclear detector, encoding and triggering them via specific optical links operating at a specific frequency.
Abstract: This paper presents a design of a data processing circuit for receiving digital signals from front end-electronic board chips of a specific nuclear detector, encoding and triggering them via specific optical links operating at a specific frequency. Such processed signals are then fed to a data acquisition system (DAQ) for analysis. Very high-speed integrated circuit hardware description language (VHDL) algorithms and codes were created to implement this design using field programmable gate array (FPGA) devices. The obtained data were simulated using international standard simulators.

Journal ArticleDOI
TL;DR: An audio digital signal-processing toolkit that the authors developed to supplement a lecture course on digital signal processing taught at the Department of Electrical and Electronics Engineering at the University of Rwanda is described.
Abstract: This paper describes an audio digital signal-processing toolkit that the authors developed to supplement a lecture course on digital signal processing (DSP) taught at the Department of Electrical and Electronics Engineering at the University of Rwanda. In engineering education, laboratory work is a very important component of a holistic learning experience. However, even though there is an increasing availability of programmable DSP hardware that students can largely benefit from, many poorly endowed universities cannot afford a costly full-fledged DSP laboratory. To help remedy this problem, the authors have developed C#.NET toolkits, which can be used for a real-time digital audio signal processing laboratory. These toolkits can be used with any managed language, such as Visual Basic, C#, F# and managed C++. They provide frequently used modules for digital audio processing such as filtering, equalization, spectrum analysis, audio playback, and sound effects. It is anticipated that by creating flexible and reusable components, students will not only learn the fundamentals of DSP but also gain insight into the practicability of what they have learned in the classroom.

Journal ArticleDOI
TL;DR: This paper proposes an original method to extract handwriting from two types of forms, bank and administrative forms; a Fourier-Mellin transform is used to re-orient the forms correctly.
Abstract: Filling in forms is one of the most useful and powerful ways to collect information from people in business, education and many other domains. Nowadays, almost everything is computerized, which creates a crucial need for extracting this handwriting from the forms in order to get it into computer systems and databases. In this paper, we propose an original method to extract handwriting from two types of forms: bank and administrative forms. Our system takes as input either of the two forms, already filled in, and identifies the form according to some statistical measures. The second step is to subtract the filled form from a previously stored empty form. To make this matching easier and faster, a Fourier-Mellin transform is used to re-orient the forms correctly. This method has been evaluated with 50 handwritten forms (of both types, bank and university), and the results were approximately 90%.

Journal ArticleDOI
TL;DR: In this article, the generalized uncertainty principles of fractional Fourier transform (FRFT) for concentrated data in limited supports were derived in theory, whose bounds are related to FRFT parameters and signal lengths.
Abstract: This paper investigates the generalized uncertainty principles of fractional Fourier transform (FRFT) for concentrated data in limited supports. The continuous and discrete generalized uncertainty relations, whose bounds are related to FRFT parameters and signal lengths, were derived in theory. These uncertainty principles disclose that the data in FRFT domains may have much higher concentration than that in traditional time-frequency domains, which will enrich the ensemble of generalized uncertainty principles.

Journal ArticleDOI
TL;DR: A new adaptive control method used to adjust the output voltage and current of a DC-DC (DC: Direct Current) power converter under different sudden changes in load is presented.
Abstract: The purpose of this paper is to present a new adaptive control method used to adjust the output voltage and current of a DC-DC (DC: Direct Current) power converter under different sudden changes in load. The controller used is a PID controller (Proportional, Integral, and Derivative). The gains of the PID controller (KP, KI and KD) are tuned using the Simulated Annealing (SA) algorithm, which belongs to the family of generic probabilistic metaheuristics. The new control system is expected to have a fast transient response, with less undershoot of the output voltage and less overshoot of the reactor current. Pulse Width Modulation (PWM) is utilized to switch the power electronic devices.
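A minimal Python sketch of simulated annealing over the three PID gains. The cost function below wraps a simple first-order plant as a stand-in; the converter's actual small-signal model, the PWM stage, and the paper's SA schedule are not given in the abstract, so every numeric choice here is an illustrative assumption.

import math
import random

def step_response_cost(kp, ki, kd, setpoint=1.0, dt=0.001, t_end=0.5):
    """Integral of squared error of a PID loop around a first-order plant
    (tau * dy/dt = -y + u); the plant is only a stand-in for the converter."""
    tau, y, integral, prev_err, cost = 0.05, 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u) / tau          # explicit Euler step of the plant
        cost += err * err * dt
    return cost

def anneal(n_iter=2000, temp0=1.0):
    """Plain simulated annealing over the gains (KP, KI, KD)."""
    gains = [1.0, 1.0, 0.0]
    cur = best = step_response_cost(*gains)
    best_gains = gains[:]
    for k in range(n_iter):
        temp = temp0 * (1.0 - k / n_iter) + 1e-6
        cand = [max(0.0, g + random.gauss(0.0, 0.2)) for g in gains]
        c = step_response_cost(*cand)
        if c < cur or random.random() < math.exp((cur - c) / temp):
            gains, cur = cand, c
            if c < best:
                best, best_gains = c, cand[:]
    return best_gains, best

print(anneal())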

Journal ArticleDOI
TL;DR: This paper is proposing a context aware pattern methodology to filter relevant transaction data based on the preference of business to convert invisible, unstructured and time-sensitive machine data into information for decision making.
Abstract: Converting invisible, unstructured and time-sensitive machine data into information for decision making is a challenge. Tools available today handle only structured data. All transaction data are being captured without an understanding of their future relevance and usage. This leads to further big data analytics issues in storing, archiving and processing, without bringing relevant business insights to the business user. In this paper, we propose a context aware pattern methodology to filter relevant transaction data based on the preferences of the business.

Journal ArticleDOI
TL;DR: The authors investigate the use of the sharp function as an edge detector through well-known diffusion models, discuss the formulation of weak solutions of the nonlinear diffusion equation, and prove uniqueness of the weak solution of the nonlinear problem.
Abstract: Ahmad et al., in their paper [1], were the first to propose applying the sharp function to the classification of images. In continuation of their work, in this paper we investigate the use of the sharp function as an edge detector through well-known diffusion models. Further, we discuss the formulation of weak solutions of the nonlinear diffusion equation and prove uniqueness of the weak solution of the nonlinear problem. The anisotropic generalization of sharp-operator-based diffusion has also been implemented and tested on various types of images.