Showing papers in "WSEAS Transactions on Signal Processing archive in 2018"
•
TL;DR: In this article, the authors provide a collective review of denoising techniques that can be applied to a cognitive radio system during all phases of cognitive communication and discuss several works where the techniques are employed.
Abstract: One of the fundamental challenges affecting the performance of communication systems is the
undesired impact of noise on a signal. Noise distorts the signal and originates from several sources, including
system non-linearity and interference from the adjacent environment. Conventional communication systems
use filters to cancel noise in a received signal. In the case of cognitive radio systems, denoising a signal is
important during the spectrum sensing period, and also during communication with other network nodes. Based
on our findings, only a few surveys exist, and they review only particular denoising techniques employed during the
spectrum sensing phase of cognitive radio communication. This paper aims to provide a collective review of
denoising techniques that can be applied to a cognitive radio system during all phases of cognitive
communication and discusses several works where the denoising techniques are employed. To establish a
comprehensive overview, a performance comparison of the discussed denoising techniques is also provided.
4 citations
•
TL;DR: A method for the effective extraction of image cells for the steganographic protection of information is proposed, and on its basis a method for constructing a container represented by a graphic file.
Abstract: A method for the effective extraction of image cells for the steganographic protection
of information is proposed in the paper. On the basis of the proposed method, a method for constructing a
container represented by a graphic file is proposed. The main units of the steganographic system are
developed on the basis of the proposed method of graphic-container construction. To increase the
number of extracted cells, noise is added to the image. Noise cells are also used to embed message
bits. The schemes and VHDL models of the cell extraction unit are developed.
3 citations
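The embedding step described above can be illustrated in software. The sketch below is a generic least-significant-bit (LSB) embedding into container cells, an assumption on my part: the paper's cell-extraction criterion and its VHDL hardware units are not reproduced here.

```python
import numpy as np

def embed_bits(container, bits):
    """Write message bits into the least significant bit of successive
    container cells. A generic software sketch of LSB embedding; the
    paper's cell-extraction criterion and VHDL units are not reproduced."""
    flat = container.flatten()          # flatten() copies, so the input stays intact
    if len(bits) > flat.size:
        raise ValueError("message longer than container capacity")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set the message bit
    return flat.reshape(container.shape)

def extract_bits(container, n):
    """Read back the first n embedded bits."""
    return [int(v & 1) for v in container.flatten()[:n]]
```

Because only the LSB of each cell changes, the container is perturbed by at most one intensity level per cell, which is what makes added noise cells usable for embedding.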
•
TL;DR: This paper proposes an accurate method to identify outliers in HRV measurements from patients with partial epilepsy while retaining relevant information, using the p-shift unbiased finite impulse response (UFIR) smoothing filter operating on optimal horizons.
Abstract: Heart rate variability (HRV) is typically associated with neuroautonomic activity and viewed as a major
non-invasive tool to detect seizures. The HRV has been assumed and analyzed as a stationary signal. However, the
presence of seizures can corrupt estimates of statistical parameters, and conventional techniques intended to remove
outliers can be inaccurate. A useful approach implies setting thresholds to compute the first and third quartiles
from histogram data or residuals based on the estimated baseline. In this paper, we propose an accurate method to
identify outliers in HRV measurements with partial epilepsy retaining relevant information. The baseline perturbed
by the seizure in the HRV data is removed using the p-shift unbiased finite impulse response (UFIR) smoothing
filter operating on optimal horizons. The residuals histogram is plotted and the upper bound (UB) and lower bound
(LB) are computed as thresholds. A comparison is provided of the typical points detected in HRV/seizures based
on several methods used to estimate the baseline. A time/frequency analysis is supplied to show the difference
between the raw HRV and the HRV without outliers. The proposed method is tested on partial-seizure records taken
from patients during continuous EEG/ECG and video monitoring.
2 citations
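The quartile-based thresholding step described above can be sketched as follows. This is a minimal illustration assuming Tukey's k·IQR fences with k = 1.5 (an assumption, since the abstract does not state how UB and LB follow from the quartiles); the UFIR baseline-removal stage is not reproduced.

```python
import numpy as np

def iqr_outlier_bounds(residuals, k=1.5):
    """Upper bound (UB) and lower bound (LB) from the first and third
    quartiles of the residuals. The k*IQR fence is an assumed choice."""
    q1, q3 = np.percentile(residuals, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def remove_outliers(samples, residuals):
    """Keep only samples whose baseline residual lies within [LB, UB]."""
    lb, ub = iqr_outlier_bounds(residuals)
    keep = (residuals >= lb) & (residuals <= ub)
    return samples[keep]
```

Applied to the residuals after baseline removal, the fences isolate seizure-perturbed beats without discarding the surrounding HRV information.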
•
TL;DR: This paper applies segmentation techniques to find an important feature, the cell count, which is further used to differentiate between normal and abnormal cells so that patients can be diagnosed early.
Abstract: Cytopathology is a branch that deals with the study of diseases at the microscopic cell level. Nowadays,
deadly diseases like cancer have become a major challenge in the field of medical research. If these diseases are
identified in the early stages, then they are curable. There are many methods to make this analysis easier, faster,
and more accurate. One such approach is using image processing techniques. This paper applies segmentation
techniques to find an important feature, the cell count. This feature is further used to differentiate between normal
and abnormal cells. Abnormal cells are the ones that may turn into cancerous cells. Hence the early detection of
such abnormal cells before they turn cancerous helps to diagnose patients early. The proposed
method was evaluated using cervical cells.
1 citation
•
TL;DR: In this article, the authors propose a new method for the optimal design of minimum-length, minimum-phase, low-group-delay FIR filters by employing convex optimization, discrete signal processing (DSP), and polynomial stabilization techniques.
Abstract: This paper proposes a new method for optimal design of minimum-length, minimum-phase, low-group-delay FIR
filter by employing convex optimization, discrete signal processing (DSP), and polynomial stabilization techniques. The design of
a length-N FIR filter is formulated as a convex second-order cone programming (SOCP). In order to design a minimum-phase FIR
filter as the necessary condition for having low group delay, the algorithm guarantees that all the filter’s zeros are inside the unit
circle (minimum-phase). In addition, a quasiconvex optimization problem is developed to minimize the length of the minimum-phase, low-group-delay FIR filter. To this end, for a typical low-pass FIR filter, the length of the filter is minimized such that the
optimum magnitude response is satisfied, the minimum-phase characteristic is maintained, and low group delay is achieved.
The proposed design algorithm relies on only one parameter (the cut-off frequency), and the rest of the filter parameters are automatically
optimized as a trade-off between minimum length, minimum phase, maximum stopband attenuation, and low group delay.
The effectiveness and performance of the proposed approach are demonstrated and compared with other approaches over a set of
examples. It is illustrated that this approach converges to the optimal solution in a few iterations.
1 citation
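The minimum-phase constraint the abstract refers to means that every zero of the transfer function H(z) lies strictly inside the unit circle. A quick numerical check of that property (not the authors' SOCP design procedure) can be sketched as:

```python
import numpy as np

def is_minimum_phase(h, tol=1e-9):
    """Check the minimum-phase condition for FIR coefficients h: every
    zero of H(z) = h[0] + h[1] z^-1 + ... + h[N-1] z^-(N-1) must lie
    strictly inside the unit circle. A verification sketch only."""
    zeros = np.roots(h)                     # zeros of the transfer function
    return bool(np.all(np.abs(zeros) < 1.0 - tol))
```

Such a check is useful after any iterative design step, since a single zero drifting onto or outside the unit circle destroys the low-group-delay guarantee.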
•
TL;DR: Experimental results show that the proposed haze video enhancement algorithm can effectively improve the contrast and sharpness of video while maintaining high computational efficiency.
Abstract: In order to improve the low visibility and poor contrast of video, in this study, we propose a haze
video enhancement algorithm based on the guided filtering method. In the paper, firstly, we simplify the
atmospheric attenuation model. Then, the current concentration of haze is estimated based on dark channel
prior theory. After that, a guided filter on the brightness channel is used to obtain the current haze coverage. Videos
are recovered based on the estimates of haze concentration and coverage. To improve the efficiency
of the algorithm, the highly time-consuming haze-concentration stage is implemented in the initialization phase, and
an indicator of video definition is used to control the procedure of the subsequent modules. Experimental results
show that the algorithm can effectively improve the contrast and sharpness of video with high
computational efficiency. The proposed method can meet the needs of video haze removal.
1 citation
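The dark channel prior mentioned above (He et al.) takes the per-pixel minimum over the color channels and then a local minimum filter over a small window; hazy regions have a bright dark channel. A minimal sketch, assuming a 3×3 window; the guided-filter refinement and the video-specific scheduling from the paper are not reproduced:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over the color channels,
    followed by a local minimum filter over a patch x patch window."""
    channel_min = img.min(axis=2)               # min over R, G, B
    h, w = channel_min.shape
    pad = patch // 2
    padded = np.pad(channel_min, pad, mode='edge')
    out = np.empty_like(channel_min)
    for i in range(h):
        for j in range(w):                      # brute-force min filter, fine for a sketch
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The haze concentration estimate then follows from how far the dark channel departs from zero in each region.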
•
TL;DR: The proposed method achieves a good Peak Signal to Noise Ratio, a low Mean Square Error, and a higher Compression Ratio when wavelet thresholding and Uniform Quantization are applied before the Arithmetic Coder.
Abstract: Image denoising is one of the challenges in the medical image compression field. The Discrete
Wavelet Transform and Wavelet Thresholding are popular tools for denoising an image. The Discrete Wavelet
Transform uses a multiresolution technique in which different frequencies are analyzed at different resolutions. In
this proposed work we focus on finding the best wavelet type by initially applying a three-level decomposition to the
noisy image. Then, irrespective of the noise type, in the second stage, hard
thresholding with the universal threshold approach is applied to estimate and determine the best threshold value. Lastly,
Arithmetic Coding is adopted to encode the medical image. The simulation work calculates the Percentage of
Non-Zero Values (PCDZ) of the wavelet coefficients for different wavelet types. The proposed method achieves a
good Peak Signal to Noise Ratio, a low Mean Square Error, and a higher Compression Ratio when wavelet
thresholding and Uniform Quantization are applied before the Arithmetic Coder.
1 citation
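The hard-thresholding step with the universal (Donoho-Johnstone) threshold t = σ√(2 ln N) can be sketched on a one-level Haar transform. This is a simplification on my part: the paper uses a three-level decomposition and compares wavelet families, which this minimal NumPy version does not attempt.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT; len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_hard_universal(x, sigma):
    """Hard thresholding of the detail coefficients with the universal
    threshold t = sigma * sqrt(2 ln N)."""
    a, d = haar_dwt(x)
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    d = np.where(np.abs(d) > t, d, 0.0)    # hard threshold: keep or kill
    return haar_idwt(a, d)
```

Zeroing sub-threshold detail coefficients is also what raises the percentage of zero-valued coefficients, which is exactly what helps the downstream arithmetic coder compress.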
•
TL;DR: In this paper, the authors proposed a general purpose architecture for video applications which satisfies parallel access to the memory with different bandwidth requirements, which is based on the Multi Port Memory Controller MPMC.
Abstract: Today, a significant number of embedded systems focus on multimedia applications, with an almost
insatiable demand for low cost and high performance. Generally, the majority of video applications need to
execute parallel tasks with simultaneous access to the memory. One fact is that these parallel tasks have
different bandwidth requirements that have to be satisfied separately when granting access to the memory. In
this paper, we propose a general purpose architecture for video applications which satisfies parallel access to
the memory with different bandwidth requirements. The proposed solution is based on the Multi Port Memory
Controller MPMC. The management of memory accesses is assured by using the BGPQ algorithm to guarantee
QoS requirements. We demonstrate the important role of this solution in multi-video applications when
multiple bandwidths are required. In fact, for the successful deployment of DRAM, it is mandatory to use a
flexible and scalable interface with an appropriate arbitration algorithm. The proposed architecture is
implemented using the Xilinx Virtex-5 FPGA and its available resources such as embedded memory, DCMs, and
others. It also introduces diverse modules such as video zoom-in and zoom-out. This makes the
architecture usable as a universal video processing platform for different application requirements.
1 citation
•
TL;DR: A signal-to-signal ratio (SSR) independent method to detect speaker identities from a cochannel speech signal, using the proposed Kekre's Transform Cepstral Coefficient (KTCC) features as unique speaker-specific features for speaker identification.
Abstract: Supervised speech segregation for a cochannel speech signal can be made easier if we use
predetermined speakers' models instead of models for the whole population. Here we propose a signal-to-signal
ratio (SSR) independent method to detect speaker identities from a cochannel speech signal with unique
speaker-specific features for speaker identification. The proposed Kekre's Transform Cepstral Coefficient (KTCC)
features are robust acoustic features for speaker identification. A text-independent speaker identification
system is utilized for identifying speakers in short segments of the test signal. A Gaussian mixture model (GMM)
classifier is used for the identification task. We compare the proposed method with a system utilizing
conventional features called Mel Frequency Cepstral Coefficient (MFCC) features. Spontaneous speech
utterances from candidates are taken for experimentation instead of utterances that follow a command-like
structure with a unique grammatical structure and a limited word list, as in the speech separation challenge (SSC)
corpus. Identification is performed on short segments of the cochannel mixture. The two speakers who are
identified in most of the segments of the cochannel mixture are selected as the two speakers detected for that
cochannel mixture. An average speaker detection accuracy of 93.56% is achieved in the case of two-speaker
cochannel mixtures for KTCC features. This method produces the best results for cochannel speaker
identification even though it is text independent. Speaker identification performance is also checked for various test
segment lengths. KTCC features outperform in the speaker identification task even when the length of the speech
segment is very short.
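The general cepstral pipeline behind features such as MFCC and KTCC (transform, log magnitude, inverse transform) can be sketched with a plain FFT. This illustrates only the pipeline: the Kekre transform itself is not reproduced here, and the 13-coefficient dimension is an assumed choice.

```python
import numpy as np

def real_cepstrum(frame, n_coeff=13):
    """Generic cepstral features: inverse FFT of the log magnitude
    spectrum of a frame. The Kekre transform of the paper's KTCC
    features is replaced by a plain FFT for illustration."""
    spectrum = np.abs(np.fft.rfft(frame))
    log_spec = np.log(spectrum + 1e-12)     # small floor avoids log(0)
    cep = np.fft.irfft(log_spec)
    return cep[:n_coeff]
```

A useful property of any log-cepstral feature is that overall gain only shifts the zeroth (energy) coefficient, which is why the remaining coefficients are robust speaker features.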
•
TL;DR: This study proposes a method to deal with the uncertainties due to the dynamics of the targets, e.g., when the number of moving targets is unknown and changing over time, by adding a decay constant to the estimated prior target information.
Abstract: An adaptive waveform is optimized in order to maximize the information returned from the targets,
and the targets' information is then approximated by using a particle filter. This study proposes a method to deal with
the uncertainties due to the dynamics of the targets, e.g., when the number of moving targets is unknown and
changing over time. Thus, a decay constant is added to the estimated prior target information before optimizing
the waveform by minimizing the Cramér-Rao lower bound. Jeffreys' prior is used to weight the parameters of each
target. Furthermore, the dynamic state space of the targets is estimated by a particle filter. Finally, the simulation
results demonstrate the capability of the system to track targets.
•
TL;DR: The graphical representation demonstrates that the proposed method achieves better performance in terms of reconstruction SNR and the probability of successful signal recovery, and also outperforms several other methods.
Abstract: As a powerful high resolution image modeling technique, compressive sensing (CS) has been
successfully applied in digital image processing and various image applications. This paper proposes a new
method of efficient image reconstruction based on the Modified Frame Reconstruction Iterative Thresholding
Algorithm (MFR ITA), developed in the compressed sensing (CS) domain using a total variation algorithm.
The new framework consists of three phases. Firstly, the input images are processed by multilook
processing with their sparse coefficients using the Discrete Wavelet Transform (DWT) method. Secondly, the
measurements are obtained from the sparse coefficients by using the proposed fusion method to achieve balanced
pixel resolution. Finally, the fast CS method based on the MFR ITA is proposed to reconstruct the high-resolution
image. In addition, the proposed method achieves good PSNR and SSIM values and shows a faster
convergence rate when performing the MFR ITA under the CS domain. Furthermore, the graphical representation
demonstrates that the proposed method achieves better performance in terms of reconstruction SNR and the
probability of successful signal recovery, and also outperforms several other methods.
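The iterative-thresholding family the MFR ITA belongs to can be sketched with the classic ISTA baseline for sparse recovery from y = Ax. This is a generic sketch, not the paper's algorithm: the frame-based modifications and the total-variation term are not reproduced, and the regularization weight is an assumed value.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=2000):
    """Iterative soft-thresholding (ISTA): a gradient step on
    ||y - Ax||^2 followed by a soft threshold. A generic CS baseline,
    not the paper's MFR ITA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * (A.T @ (y - A @ x))      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```

On a well-conditioned random sensing matrix, the iterations recover the support of a sparse vector from far fewer measurements than unknowns, which is the effect the abstract's recovery-probability curves measure.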
•
TL;DR: It is shown that the abnormality in the atrial premature complex (APC) is related to the P-wave morphology and it is demonstrated that UFIR smoothing provides better performance among others.
Abstract: Heart diseases are among the most frequent causes of death in the modern world. Therefore, the ECG
signal features have been under peer review for decades to improve medical diagnostics. In this paper, we provide
smoothing of the atrial premature complex (APC) of the electrocardiogram (ECG) signal using unbiased finite
impulse response (UFIR) smoothing filtering. We investigate the P-wave distribution using the Rice law and
determine the probabilistic confidence interval based on a database associated with normal heartbeats. It is shown
that the abnormality in the APC is related to the P-wave morphology. Different filtering techniques employing
predictive and smoothing filtering are applied to APC data and compared experimentally. It is demonstrated that
UFIR smoothing provides the best performance among them. We finally show that the P-wave confidence interval
defined for the Rice distribution can be used to provide an automatic diagnosis with a given probability.
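A UFIR smoother of degree p over a horizon of N points requires no noise statistics and behaves like least-squares polynomial smoothing. The batch sketch below captures that behaviour with `np.polyfit`; the horizon and degree are assumed values, and the paper's p-shift and optimal-horizon selection are not reproduced.

```python
import numpy as np

def ufir_smooth(y, horizon=11, degree=2):
    """UFIR-style batch smoothing: at each point, fit a degree-p
    polynomial by least squares over a centred horizon of N samples and
    evaluate the fit at that point. No noise statistics are needed."""
    n = len(y)
    half = horizon // 2
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)   # clip at the record edges
        t = np.arange(lo, hi, dtype=float)
        coeffs = np.polyfit(t, y[lo:hi], degree)
        out[i] = np.polyval(coeffs, float(i))
    return out
```

Unbiasedness here means polynomial signals up to the chosen degree pass through unchanged, while zero-mean noise is averaged down over the horizon.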
•
TL;DR: This study proposes a method for controlling a 3D entity shown on a transparent display by distinguishing between the hand and object using a single depth camera and by extracting relevant information for each through a database built based on moment values converted from projected images.
Abstract: This study proposes a method for controlling a 3D entity shown on a transparent display by
distinguishing between the hand and object using a single depth camera and by extracting relevant information
for each. The hardware configuration for controlling a 3D entity has been presented. To enable the control of a
3D entity, a target area where the distinction between the hand and the object is made is extracted in the
preprocessing stage. The extracted target area images are normalized to an identical size, and projected onto
Zernike moment basis functions. The database is built from moment values computed from the projected
images to distinguish the hand from the object when input images arrive in real time. This study
also presents a method for interacting with a 3D entity using the hand and an object. To validate the performance of
the system, an evaluation of recognition rate and time was performed.
•
TL;DR: An adaptive method for the analysis of sparse signals using bandpass filters obtained from modulated Slepian sequences is introduced, which decomposes a signal into different modes corresponding to segmenting the Fourier spectrum and filtering the existing support.
Abstract: We introduce an adaptive method for analysis of sparse signals using bandpass filters obtained by modulated
Slepian sequences. Similar to the recently introduced empirical wavelet transform, the proposed method
decomposes a signal into different modes, which corresponds to segmenting the Fourier spectrum and filtering the
existing support. The simulations illustrate the correct signal decomposition for a multiband signal with a
sparse spectrum. The proposed method can be used as an alternative to the empirical wavelet transform.
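The spectrum-segmentation idea can be sketched with ideal rectangular masks standing in for the modulated Slepian bandpass filters (an assumption: the actual Slepian/DPSS filter design from the paper is not reproduced here).

```python
import numpy as np

def decompose_bands(x, edges, fs=1.0):
    """Split a signal into modes by segmenting the Fourier spectrum at
    the given band edges (Hz) and inverse-transforming each segment.
    Ideal masks stand in for the modulated Slepian bandpass filters."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = list(zip([0.0] + list(edges), list(edges) + [fs / 2 + 1]))
    modes = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)   # one spectral segment
        modes.append(np.fft.irfft(X * mask, n=len(x)))
    return modes
```

Because the masks tile the spectrum, the modes sum back to the original signal, and each mode isolates one occupied band of a sparse-spectrum signal.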
•
TL;DR: In this article, the authors explain how the various stated techniques and operations can be useful in detecting defects in optical fiber cables, their connectors, and most optical devices, making optical-fiber-based communication systems more effective.
Abstract: Image enhancement is a process that outputs an image more suitable and useful than the
original image for a specific application. Thermal image enhancement includes many techniques used in
quality control, problem diagnostics, and insurance risk assessment. Various enhancement schemes are
used for enhancing an image, including gray-scale manipulation, filtering, Histogram Equalization
(HE), and the Fast Fourier Transform, which result in highlighting interesting detail in images, removing noise from
images, making images more visually appealing, enhancing edges, and increasing the contrast of the image.
This research article explains how the various stated techniques and operations can be useful in
detecting defects in optical fiber cables, their connectors, and most optical devices, making
optical-fiber-based communication systems more effective.
•
TL;DR: This work proposes and implements a method based on the Context-Aware Visual Attention Model, modified in such a way that the detection algorithm is replaced by Histograms of Oriented Gradients (HOG), and shows that the CAVAM model can be adapted to object detection methods other than the Scale-Invariant Feature Transform (SIFT) with which it was originally proposed.
Abstract: This work proposes and implements a method based on the Context-Aware Visual Attention Model
(CAVAM), modifying the method in such a way that the detection algorithm is replaced by Histograms of Oriented
Gradients (HOG). After reviewing different algorithms for people detection, we selected the HOG method because
it is a very well-known algorithm, used as a reference in virtually all current research studies on automatic
detection. In addition, it produces accurate results in significantly less time than many algorithms. In this
way, we show that the CAVAM model can be adapted to other methods for object detection besides the Scale-Invariant
Feature Transform (SIFT), as it was originally proposed. Additionally, we use TUD dataset image sequences to
evaluate and compare our approach with the original HOG algorithm. These experiments show that our method
achieves around 2x speed-up at just 2% decreased accuracy. Moreover, the proposed approach can improve precision
and specificity by more than 2%.
•
TL;DR: The impulse response of the AFM is determined using experimental results gathered from measuring a cylindrical sample via AFM, and the Lucy-Richardson algorithm is used to compute the deconvolution of the blurred AFM image with the resultant AFM impulse response.
Abstract: The atomic force microscope is a very useful tool for use in biology and in nano-technology, since it
can be used to measure a variety of objects such as cells and nano-particles in a variety of different
environments. However, the images produced by the AFM are distorted and do not accurately represent the true
shape of the measured cells or particles, a fact that many researchers do not take into account. In
this paper we determine the impulse response of the AFM using experimental results gathered from measuring a
cylindrical sample via AFM. Once the AFM impulse response is estimated, the Lucy-Richardson algorithm is
used to compute the deconvolution of the blurred AFM image with the resultant AFM impulse response.
This produces a more accurate AFM image. Also in this paper, we compare raw experimental AFM images
with the restored AFM images quantitatively, and the proposed algorithm is shown to provide superior
performance.
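The Lucy-Richardson update used above can be sketched in one dimension; the AFM images in the paper are 2-D, but the multiplicative update rule is the same (the PSF flip implements the adjoint/correlation step).

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Richardson-Lucy deconvolution in 1-D: multiplicative updates
    converging toward the maximum-likelihood estimate under Poisson
    noise. A 1-D sketch of the 2-D image restoration in the paper."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                     # adjoint of the blur = correlation
    est = np.full_like(blurred, 0.5)         # flat, positive initial guess
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode='same')
        ratio = blurred / (conv + 1e-12)     # avoid division by zero
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est
```

On a spike train blurred by a short PSF, the iterations re-concentrate the spread-out intensity back onto the original spike positions, which is exactly the sharpening effect sought for the tip-convolved AFM profiles.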
•
TL;DR: A robust and lightweight multi-resolution method for vehicle detection using local binary patterns (LBP) as the channel feature, accelerated by using LBP histograms instead of multi-scale feature maps and by extrapolating nearby scales to avoid computing each scale.
Abstract: Multi-resolution object detection faces several drawbacks including its high dimensionality produced by
a richer image representation in different channels or scales. In this paper, we propose a robust and lightweight
multi-resolution method for vehicle detection using local binary patterns (LBP) as the channel feature. Algorithm
acceleration is achieved by using LBP histograms instead of multi-scale feature maps and by extrapolating nearby scales
to avoid computing each scale. We produce a feature descriptor capable of reaching a precision similar to other,
computationally more complex algorithms while reducing the descriptor size by a factor of 10 to 800. Finally, experiments show
that our method can obtain accurate and considerably faster performance than state-of-the-art methods on vehicles
datasets.
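The basic 3×3 LBP code pooled into a histogram, which is the compact channel feature the abstract contrasts with full feature maps, can be sketched as follows (the standard 8-neighbour operator is assumed; the paper's multi-resolution extrapolation is not reproduced):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary patterns: each interior pixel is coded by
    comparing its 8 neighbours against the centre, and the codes are
    pooled into a normalized 256-bin histogram."""
    h, w = img.shape
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

The histogram is a fixed 256-dimensional vector regardless of window size, which is what makes the descriptor so much smaller than per-pixel multi-scale feature maps.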
•
TL;DR: A class of Bernstein basis functions with a shape parameter is constructed, which retains most properties of the classical Bezier curve and can be adjusted by altering the value of the shape parameter when the control points are fixed.
Abstract: By extending the definition interval of the classical Bernstein basis functions to be dynamic, a class of
Bernstein basis functions with a shape parameter is constructed in this work. The new basis functions are a
simple extension of the classical Bernstein basis functions. The corresponding Bezier-like curve is then
generated on the basis of the introduced basis functions. The new curve not only has most properties of the classical
Bezier curve, but can also be adjusted by altering the value of the shape parameter when the control points are
fixed. Because the proposed curve is a polynomial model of the same degree that retains most properties of the
classical Bezier curve, it has advantages over some existing similar models.
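The classical Bernstein basis that the paper extends, together with the Bezier curve it generates, can be sketched as follows; the shape-parameter extension itself is the paper's contribution and is not reproduced here.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Classical Bernstein basis: B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_point(ctrl, t):
    """Evaluate the degree-n Bezier curve at parameter t in [0, 1]."""
    n = len(ctrl) - 1
    return sum(bernstein(n, i, t) * np.asarray(p, float) for i, p in enumerate(ctrl))
```

The properties the abstract says are preserved, such as endpoint interpolation and partition of unity (the basis summing to 1 for every t), are easy to verify numerically for the classical basis.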
•
TL;DR: Singular value decomposition (SVD) and the unscented Kalman filter (UKF) are interlaced, using the Euler angles as the attitude parameters, in order to estimate the satellite's angular motion parameters about its center of mass.
Abstract: Singular Value Decomposition (SVD) and the unscented Kalman filter (UKF) are interlaced, using the
Euler angles as the attitude parameters, in order to estimate the satellite's angular motion parameters about its center
of mass. A magnetometer and a sun sensor are used as the vector measurements for the SVD, in addition to the angular
rate measurements from a rate gyro for the UKF; therefore, the output of the SVD forms the nontraditional
SVD-aided UKF algorithm using the linear measurements.
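The SVD attitude step described above is the standard SVD solution of Wahba's problem: find the rotation minimizing the weighted misfit between body-frame measurements (magnetometer, sun sensor) and reference directions. A minimal sketch (the UKF interlacing and Euler-angle conversion are not reproduced):

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem by SVD: find the rotation R minimizing
    sum_i w_i ||b_i - R r_i||^2 for paired unit-vector measurements."""
    b = np.asarray(body_vecs, float)
    r = np.asarray(ref_vecs, float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt     # enforce a proper rotation, det = +1
```

The det correction in the last singular direction guarantees a proper rotation rather than a reflection, which matters when the vector measurements are noisy or nearly coplanar.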
•
TL;DR: Adaptive algorithms for spline-wavelet decomposition in a linear space over metrized fields are proposed in this paper; they provide an a priori given estimate of the deviation of the main flow from the initial one.
Abstract: Adaptive algorithms of spline-wavelet decomposition in a linear space over metrized fields are proposed.
The algorithms provide an a priori given estimate of the deviation of the main flow from the initial one. Comparative
estimates of the data of the main flow under different characteristics of the irregularity of the initial flow are given.
The limiting characteristics of the data, when the initial flow is generated by abstract differentiable functions, are
discussed. The constructions of the adaptive grid and the pseudo-equidistant grid and the relative number of their knots
are considered; flows of elements of linear normed spaces and formulas of decomposition and reconstruction are
discussed. Wavelet decomposition of the flows is obtained using spline-wavelet decomposition. A sufficient
condition for the construction is obtained. Applications to different spaces of matrices of fixed order and to spaces of
infinite-dimensional vectors with numerical elements (rational, real, complex, and p-adic) are considered.