Showing papers in "WSEAS Transactions on Signal Processing archive in 2017"


Journal Article
TL;DR: In this paper, the authors presented technical aspects of HEVC through objective and subjective performance analysis of different versions of the HM software test models in different configurations in the Main profile, comparing the models HM-16.12 and HM-16.6 through three fundamental parameters: signal-to-noise ratio, bit rate, and time saving.
Abstract: High Efficiency Video Coding (HEVC) is the state-of-the-art video coding standard, which significantly improves coding efficiency over its preceding video coding standards. This paper presents technical aspects of HEVC through objective and subjective performance analysis of different versions of the HM software test models in different configurations in the Main profile. We compared two models, HM-16.12 and HM-16.6, through three fundamental parameters: signal-to-noise ratio, bit rate, and time saving, while two test sequences in different resolutions were processed. Simulation results show differences in SNR values and bit rate that range from none to noticeable, while encoding time saving increases from 13.5% up to 48.3% depending on the configurations and tested sequences. Besides the objective results, subjective video assessments for all tested sequences and configurations are presented as well.
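
A minimal sketch of the two objective metrics this comparison rests on: PSNR between a reference and a decoded frame, and the relative encoding-time saving. The frame arrays and timings below are placeholders, not outputs of the HM encoders.

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def time_saving(t_old: float, t_new: float) -> float:
    """Encoding-time saving of the newer model relative to the older one, in percent."""
    return 100.0 * (t_old - t_new) / t_old

# Dummy example: a 48.3% saving, the upper end reported in the paper.
print(time_saving(t_old=1000.0, t_new=517.0))  # -> 48.3
```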

10 citations


Journal Article
TL;DR: A colour segmentation algorithm is presented that works directly in RGB colour space without converting the colour space; the results show the RBFN outperforming ANFIS by remarkable margins.
Abstract: Skin colour detection has been a valuable technique due to its wide range of applications in both diagnosis-based analyses and human-computer interaction. Various problems could be solved by simply providing an appropriate method for identifying skin-like pixels. Presented in this study is a colour segmentation algorithm that works directly in RGB colour space without converting the colour space. The genfis function explored in this study formed the Sugeno fuzzy network and, utilizing the Fuzzy C-Means (FCM) clustering rule, clustered the data; for each cluster/class a rule is generated. The Radial Basis Function (RBF) network utilized a Gaussian function for grouping. Finally, the corresponding output is obtained from a pseudo-polynomial mapping of the input dataset in the adaptive neuro-fuzzy inference system (ANFIS), while Euclidean distance performs the data mapping in the RBF model. The results obtained from these two algorithms show the RBFN outperforming ANFIS by remarkable margins.

8 citations


Journal Article
TL;DR: The excellent experimental results demonstrated that the kernel based SVM models provide a promising solution to high-dimensional data sets with limited training samples.
Abstract: In this paper, we propose a kernel based SVM algorithm with variable models to adapt to the high-dimensional but relatively small samples for remote explosive detection with photo-thermal infrared imaging spectroscopy (PT-IRIS) classification. The algorithms of the representative linear and nonlinear SVM are presented. The response plot, predicted vs. actual plot, and residuals plot of the linear, quadratic, and coarse Gaussian SVM are demonstrated. A comprehensive comparison of Linear SVM, Quadratic SVM, Cubic SVM, Fine Gaussian SVM, Medium Gaussian SVM, and Coarse Gaussian SVM is performed in terms of root mean square error, R-squared, mean squared error, and mean absolute error. The excellent experimental results demonstrate that the kernel based SVM models provide a promising solution to high-dimensional data sets with limited training samples.
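
A hedged scikit-learn sketch of this kind of kernel-SVM regression comparison; the PT-IRIS spectra are not available here, so random data stands in, and the kernel list only mirrors the linear/quadratic/cubic/Gaussian variants named above.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))           # few samples, many spectral features
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=60)

models = {
    "Linear SVM":    SVR(kernel="linear"),
    "Quadratic SVM": SVR(kernel="poly", degree=2),
    "Cubic SVM":     SVR(kernel="poly", degree=3),
    "Gaussian SVM":  SVR(kernel="rbf", gamma="scale"),
}
for name, model in models.items():
    pred = model.fit(X, y).predict(X)
    mse = mean_squared_error(y, pred)
    print(f"{name}: RMSE={np.sqrt(mse):.3f} R2={r2_score(y, pred):.3f} MAE={mean_absolute_error(y, pred):.3f}")
```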

7 citations


Journal Article
TL;DR: Linear and nonlinear methods to analyze HRV are presented, with discussion of the sensitivity, specificity, and predictive values of heart rate variability regarding death or morbidity in cardiac and non-cardiac patients.
Abstract: Heart rate variability (HRV) is a measure of the balance between sympathetic and parasympathetic mediators of heart rate. The sympathetic side is the effect of epinephrine and norepinephrine, released from sympathetic nerve fibres, acting on the sino-atrial and atrio-ventricular nodes to increase the rate of cardiac contraction and facilitate conduction at the atrio-ventricular node; the parasympathetic side is the influence of acetylcholine, released by the parasympathetic nerve fibres, acting on the same nodes to decrease the heart rate and slow conduction at the atrio-ventricular node. Sympathetic mediators appear to exert their influence over longer time periods and are reflected in the low frequency power (LFP) of the HRV spectrum (between 0.04 Hz and 0.15 Hz). Vagal mediators exert their influence more quickly on the heart and principally affect the high frequency power (HFP) of the HRV spectrum (between 0.15 Hz and 0.4 Hz). Thus at any point in time the LFP:HFP ratio is a proxy for the sympatho-vagal balance, and HRV is a valuable tool to investigate the sympathetic and parasympathetic function of the autonomic nervous system. The study of HRV enhances our understanding of physiological phenomena, the actions of medications, and disease mechanisms, but large-scale prospective studies are needed to determine the sensitivity, specificity, and predictive values of heart rate variability regarding death or morbidity in cardiac and non-cardiac patients. This paper presents linear and nonlinear methods to analyze HRV.
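
As a worked illustration of the LFP:HFP ratio defined above, the sketch below integrates a Welch power spectrum over the 0.04-0.15 Hz and 0.15-0.4 Hz bands. The RR series is synthetic and the 4 Hz resampling rate is an assumption; real use requires an evenly resampled tachogram.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 4.0                                  # assumed resampling rate of the RR series, Hz
t = np.arange(0, 300, 1 / fs)
rr = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t)
          + 0.02 * np.sin(2 * np.pi * 0.25 * t))   # synthetic LF and HF oscillations

f, psd = welch(rr - rr.mean(), fs=fs, nperseg=512)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return trapezoid(psd[mask], f[mask])

lfp = band_power(f, psd, 0.04, 0.15)      # low frequency power
hfp = band_power(f, psd, 0.15, 0.40)      # high frequency power
print("LFP:HFP ratio (sympatho-vagal balance proxy):", lfp / hfp)
```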

4 citations


Journal Article
TL;DR: The algorithm for detecting and refining abut contours relies on the modified radial symmetry object detection algorithm and meets the requirements of industry standards, so the method of log pile photogrammetric measurement using the developed algorithm can be successfully applied in the activity of forest enterprises.
Abstract: This paper is devoted to the investigation and development of an algorithm for log pile photogrammetric measurement on the basis of abut detection and calculation of their diameters. The algorithm for detecting and refining abut contours relies on the modified radial symmetry object detection algorithm. A combination of the following methods is implemented at the further stages of the pile measurement algorithm: mean-shift clustering, Delaunay triangulation, Boruvka's minimum spanning tree algorithm, watershed, and the Boykov-Kolmogorov graph cut algorithm. These methods were adapted to the specifics of the given task. Testing of the resulting algorithm gives a TPR value of 96.2%, which is much higher than that of other unsupervised training methods. The average error of the algorithm for log pile photogrammetric measurement in comparison with manual measurement is less than 9.2%. This meets the requirements of industry standards, so the method of log pile photogrammetric measurement using the developed algorithm can be successfully applied in the activity of forest enterprises.

4 citations


Journal Article
TL;DR: In this article, the authors proposed an efficient multi-channel speech enhancement approach, based on the idea of adding a pre-treatment preceding the speech enhancement via a multichannel method.
Abstract: In this paper, we propose an efficient multi-channel speech enhancement approach based on the idea of adding a pre-treatment preceding the speech enhancement via a multi-channel method. The approach consists, as a first step, of applying a mono-channel speech enhancement method to process each noisy speech signal independently, and then applying a multi-channel method based on delay estimation and blind speech separation in order to obtain the enhanced speech. Our idea is to apply different classes of mono-channel methods in order to compare them and find the best combination, one that removes the maximum amount of noise without introducing artifacts. We resort to two classes of algorithms: spectral subtraction and statistical-model-based methods. In order to evaluate our proposed approach, we compared it with our multi-channel speech enhancement method without preprocessing. Our evaluation, performed on a number of recordings corrupted by different types of noise such as white, car, and babble noise, shows that our proposed approach provides higher noise reduction and lower signal distortion.
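
To make the first class concrete, here is a minimal magnitude spectral-subtraction sketch of the kind that could serve as the mono-channel pre-treatment; it is a generic textbook form, not the authors' exact algorithm, and the noise-frame count and spectral floor are assumed values.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_frames=10, floor=0.02):
    """Estimate the noise magnitude from the first frames and subtract it."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)  # spectral floor
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return clean

fs = 16000
noisy = np.random.randn(fs * 2)     # stand-in for one noisy speech record
enhanced = spectral_subtraction(noisy, fs)
```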

2 citations


Journal Article
TL;DR: Spectrograms, wavelet decompositions, and spectra are shown for a few EEG sequences with typical pathological patterns, to demonstrate the possibility of classification based on the EEG spectrum.
Abstract: In this paper we apply signal processing methods to detect and classify specific patterns present in the EEG signal which give information about the onset of brain disorders, in particular epileptic activity. We analyze EEG signals using spectral analysis methods, namely the Short-Time Fourier Transform and the Discrete Wavelet Transform, applied to several sets of EEG recordings. Spectrograms, wavelet decompositions, and spectra are shown for a few EEG sequences with typical pathological patterns, to demonstrate the possibility of classification based on the EEG spectrum.
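
A short sketch of the two analyses named above, applied to a synthetic signal rather than a clinical EEG; PyWavelets (pywt) is an assumed dependency, and the sampling rate and wavelet choice (db4, 5 levels) are illustrative.

```python
import numpy as np
import pywt
from scipy.signal import spectrogram

fs = 256                                  # assumed EEG sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy signal

f, tt, Sxx = spectrogram(eeg, fs=fs, nperseg=256)   # Short-Time Fourier Transform
coeffs = pywt.wavedec(eeg, "db4", level=5)          # Discrete Wavelet Transform
print(Sxx.shape, [c.size for c in coeffs])
```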

2 citations


Journal Article
TL;DR: The main contribution of the paper is a method to configure the token bucket parameters using the video characteristics to mitigate the effects of network impairments on the viewer's perceived quality of video streaming over IP.
Abstract: In this paper, we propose the use of traffic shaping to mitigate the effects of network impairments on the viewer's perceived quality of video streaming over IP. Traffic shaping is used to change the burstiness of video considering the characteristics of MPEG-4 encoding. In MPEG-4, bursts of traffic are caused by the variable length of frames. I-frames are very important in image reconstruction and produce the biggest bursts, so the packets carrying the I-frames are more likely to experience higher delay or be discarded. We propose a method of shaping the video traffic to distribute the bursts over time as much as possible. This procedure reduces the negative effects of bursts on the viewer's perceived quality. We choose the token bucket algorithm due to its low computational complexity and wide availability in servers and routers. The main contribution of the paper is a method to configure the token bucket parameters using the video characteristics. The efficiency of the proposed method is demonstrated through computer simulations, and the results indicate that the proposed method effectively mitigates the effects of network impairments on the viewer's perceived quality.
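
A compact token-bucket shaper of the kind the paper configures; the refill rate and bucket depth below are illustrative stand-ins, not the parameters derived from the video characteristics.

```python
class TokenBucket:
    def __init__(self, rate_bps: float, bucket_bits: float):
        self.rate = rate_bps          # token refill rate (bits per second)
        self.capacity = bucket_bits   # bucket depth (bits), bounds burst size
        self.tokens = bucket_bits
        self.last = 0.0

    def conforms(self, packet_bits: float, now: float) -> bool:
        """Refill tokens for elapsed time; send packet only if tokens suffice."""
        self.tokens = min(self.capacity, self.tokens + self.rate * (now - self.last))
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False                  # packet waits (shaping) rather than bursting out

# Illustrative values: bucket depth sized near an I-frame burst.
bucket = TokenBucket(rate_bps=2e6, bucket_bits=1.5e5)
print(bucket.conforms(12000, now=0.01))   # -> True
```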

2 citations


Journal Article
TL;DR: In this article, the authors designed a reverberation acoustic chamber to stimulate as many modes as possible over the frequency spectrum, in order to simulate the launching stage and determine the strength of and stress on the test object.
Abstract: A reverberation acoustic chamber has been designed to stimulate as many modes as possible over the frequency spectrum, in order to simulate the launching stage and determine the strength of and stress on the test object. The launching stage produces high-intensity acoustic pressure from the propulsion system on the launcher's payload, causing numerous stresses and constraints. Knowing the capabilities of the chamber makes it possible to characterize it through analysis and calculation. The reverberation time Tr has been obtained by the integrated impulse response method. Additional repeatability measurements can be performed in order to know the exact Tr value for each central frequency. For the chamber, the diffusivity, number of acoustic modes, frequency response, and natural resonance frequency (MAXTIQ) have been defined. To provide sufficient analysis of the total spread of the frequency spectrum, the modal density (MD) and modal analysis (MS) have been included. Results show that the chamber can provide the reverberation acoustic test. Most of the chamber resonances cater for at least one mode in the half-power bandwidth of the test object. Lower frequencies are well covered even though their characteristics are limited. Based on the analysis, the absorption coefficients are in a reasonable arrangement, and none of the calculated equivalent sound absorption areas exceed the maximum limit for a chamber volume of 999.5 m³. The absorption factor is reduced when the environment is totally dry air. ANGKASA's RATF OASPL has the capability to meet the maximum 155 dB requirement. Further analysis is suggested with different arrangements of OASPL in the chamber for pattern evaluation.
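
A sketch of the integrated impulse response (Schroeder) method used for Tr: the squared impulse response is backward-integrated and the decay slope is fitted and extrapolated to -60 dB. The exponentially decaying noise below stands in for a measured chamber response.

```python
import numpy as np

fs = 48000
t = np.arange(0, 2.0, 1 / fs)
h = np.random.randn(t.size) * np.exp(-3.0 * t)      # stand-in room impulse response

edc = np.cumsum(h[::-1] ** 2)[::-1]                 # Schroeder energy decay curve
edc_db = 10 * np.log10(edc / edc[0])

# Fit the -5 dB to -25 dB segment and extrapolate to -60 dB (T20-style estimate).
mask = (edc_db <= -5) & (edc_db >= -25)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
print("Tr (RT60 estimate, s):", -60.0 / slope)
```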

1 citation


Journal Article
TL;DR: This paper proposes a Parallel Bit Plane Coding (BPC) architecture in which three coding passes operate in parallel and are allowed to progress independently.
Abstract: Embedded block coding with optimized truncation (EBCOT) is a key algorithm in the digital cinema (DC) distribution system. Though several high-speed EBCOT architectures exist, not all are capable of meeting the DC specifications. With the growth of multimedia technology, demand for high-speed real-time image compression systems has also increased. JPEG2000 is a relatively new image compression standard which builds and improves on its predecessor, JPEG. In JPEG2000, EBCOT is the most computationally demanding element of the compression process. This paper proposes a Parallel Bit Plane Coding (BPC) architecture in which three coding passes operate in parallel and are allowed to progress independently.

1 citation


Journal Article
TL;DR: An easy calibration method for calculating the internal parameters (pixel dimensions and image center pixel coordinates) is presented, and it is shown that the method is slightly easier if the camera rotation angles, relative to the general reference system, are small.
Abstract: The fundamental matrix, based on the co-planarity condition, though very interesting for theoretical issues, does not allow finding the camera calibration parameters together with the base and rotation parameters. In this work we present an easy calibration method for calculating the internal parameters: pixel dimensions and image center pixel coordinates. We show that the method is slightly easier if the camera rotation angles, relative to the general reference system, are small. The accuracy of the four calibration parameters is evaluated by simulations. In addition, a method to improve the accuracy is explained. When the calibration parameters are known, the fundamental matrix can be reduced to the essential matrix. In order to find the relative orientation parameters in stereo vision, a new method is also presented to extract the base and the camera rotation by means of the essential matrix. The proposed method is simple to implement. We also include a simpler method for the relative orientation when the relative rotation angles between the two cameras are small.
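
A numpy sketch of the reduction described above: with the calibration known, the fundamental matrix F maps to the essential matrix E, whose SVD yields the rotation and baseline direction. This is the textbook decomposition, not necessarily the authors' new method, and K and F below are placeholders.

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
F = np.random.randn(3, 3)                                    # stand-in fundamental matrix

E = K.T @ F @ K                      # essential matrix once calibration is known
U, S, Vt = np.linalg.svd(E)
W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
R1, R2 = U @ W @ Vt, U @ W.T @ Vt    # two rotation hypotheses
base = U[:, 2]                       # baseline direction, up to sign and scale
# Real use enforces det(R) = +1 and picks the (R, base) pair that places
# triangulated points in front of both cameras.
print(np.round(R1, 3), np.round(base, 3))
```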

Journal Article
TL;DR: In this article, the real and imaginary parts from the magnitude responses for causal linear time-invariant systems having monotonic impulse responses were determined by discrete-time Mellin convolution filters processing geometrically sampled magnitude responses.
Abstract: The paper is devoted to the determination of the real and imaginary parts from the magnitude responses of causal linear time-invariant systems having monotonic impulse responses. We demonstrate that the problem can be considered as a special filtering task in the Mellin transform domain having a diffuse magnitude response. The theoretical background is given for separating the magnitude response into the real and imaginary parts by discrete-time Mellin convolution filters processing geometrically sampled magnitude responses, and the appropriate finite impulse response (FIR) filters are designed. To compensate for the exponential shortening of the frequency ranges of the real and imaginary parts due to the end-effects of FIR filters processing geometrically sampled magnitude responses, a multiple filtering mode is used, where the sets of the first and last input samples are repeatedly processed by filters having impulse responses with shifted origins, which gradually vary the number of coefficients with negative and positive indices on each side of the origin. The performance of the designed filters is evaluated in terms of the accuracy of the generated real and imaginary parts and the noise amplification.

Journal Article
TL;DR: The proposed hardware implementation method has a high degree of noise cancellation performance and the detailed structure of the adaptive noise cancellation system is illustrated.
Abstract: This paper presents the design and implementation of an adaptive filter using the state-of-the-art Xilinx Vivado software/hardware co-design concepts and tools. A desired signal corrupted by the environment can often be recovered by an adaptive noise canceller using the least mean squares (LMS) algorithm. The detailed structure of the adaptive noise cancellation system is illustrated. The adaptive parameters of the LMS-based adaptive filter system are obtained using a MATLAB/Simulink model. The RTL design is generated by converting the LMS design in Simulink to an Intellectual Property (IP) core using HDL Coder support. A complete filter system based on the Zynq board target architecture is designed using Vivado synthesis design and VHDL as the target language. The IP core is adopted in Vivado synthesis and implementation. Finally, the debugger is run before the audio file is fed into the ZedBoard development board for testing. Experimental results show that the proposed hardware implementation method has a high degree of noise cancellation performance.
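
A pure-software sketch of the LMS noise canceller realized in the paper's hardware flow: a reference noise input is adaptively filtered to match the noise in the primary input, and the error output approximates the desired signal. This is not the generated RTL; the tap count and step size are assumed.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """LMS adaptive noise canceller; returns the error (signal estimate)."""
    w = np.zeros(n_taps)
    out = np.zeros(primary.size)
    for n in range(n_taps, primary.size):
        x = reference[n - n_taps:n][::-1]   # tap-delay line of the reference
        e = primary[n] - w @ x              # error = desired-signal estimate
        w += mu * e * x                     # LMS coefficient update
        out[n] = e
    return out

fs = 8000
t = np.arange(0, 1, 1 / fs)
noise = np.random.randn(t.size)
primary = np.sin(2 * np.pi * 440 * t) + 0.8 * noise   # tone corrupted by noise
cleaned = lms_cancel(primary, noise)
```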

Journal Article
TL;DR: A synchronous walking sensing system is developed, in which a pair of acceleration and angular velocity sensors are attached to left and right shoes of a walking person and their data are transmitted to a PC through a wireless channel to obtain precise stepping patterns.
Abstract: Gait analysis plays an important role in characterizing individuals and their conditions, and gait analysis systems have been developed using various devices or instruments. However, most systems do not capture synchronous stepping actions between the right and left feet. For obtaining a precise gait pattern, a synchronous walking sensing system is developed, in which a pair of acceleration and angular velocity sensors are attached to the left and right shoes of a walking person and their data are transmitted to a PC through a wireless channel. Walking data from 19 persons aged 14 to 20 are acquired for walking analysis. Stepping time diagrams are extracted from the acquired data of right- and left-foot actions of stepping off and on the ground, and the time interval analyses distinguish between an ordinary person and a person injured on the left leg; a stepping recovery process of the injured person is also shown. Synchronous sensing of stepping action between the right and left feet contributes to obtaining precise stepping patterns.

Journal Article
TL;DR: A novel EVM reduction method based on geometric angle analysis is proposed which keeps the bit-error-rate (BER) performance after PAPR reduction and should vastly improve the performance of the OFDM signal in a communication system.
Abstract: The main disadvantage of an Orthogonal Frequency Division Multiplexing (OFDM) signal is its high peak-to-average power ratio (PAPR), which influences the system power efficiency and system performance in the presence of nonlinearities within the high power amplifier (HPA). The error vector magnitude (EVM) is one of the performance metrics set by communications standards for OFDM systems. In this paper, a novel EVM reduction method based on geometric angle analysis is proposed which keeps the bit-error-rate (BER) performance after PAPR reduction. In our method, a threshold vector circle is designed in the frequency domain in order to adjust the amplitude and phase of the OFDM signal constellation points to near the ideal points. Simulation results show that the PAPR of a QPSK-modulated OFDM signal is reduced from 10.56 dB to 7.496 dB with an EVM reduction of 23.4%. This technique should vastly improve the performance of the OFDM signal in a communication system.
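
A short sketch of the PAPR metric for a QPSK-modulated OFDM symbol, using the standard peak-over-mean power definition in dB; the subcarrier count is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 256                                  # assumed number of subcarriers
qpsk = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(n_sub)       # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR: {papr_db:.2f} dB")
```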

Journal Article
TL;DR: In this paper, a core chip design based on SiGe Heterojunction Bipolar Transistor technology for an X-band phased array T/R module is presented; its phase shifter consists of a series of LPF and HPF filters.
Abstract: This paper presents a core chip design based on SiGe Heterojunction Bipolar Transistor technology for an X-band phased array Transmit/Receive (T/R) module. Phase shifters for X-band applications have usually been implemented in GaAs technology, and some commercial GaAs products for this type of integrated circuit are considered. The structure of the core chip for phased array T/R modules is presented. Methods for the formation of a phase delay for X-band phase shifters are considered. An original differential design of a SiGe core chip for X-band is presented, and the advantages of the SiGe technique are discussed. Schematics of a 5-bit phase shifter and an attenuator are designed. The phase shifter consists of a series of LPF and HPF filters and has a gain of 1.5 dB. The attenuator has an adjustment range from 0 to 24 dB. The linear output power of the core chip is 5 dBm. The total current consumption of the device is 158 mA at a 5 V power supply.

Journal Article
TL;DR: This paper presents an adaptive IIR system identification method using Particle Swarm Optimization (PSO), where the particles' velocities are updated using plural better solutions in order to avoid convergence to a local optimal solution, and the output signal of the unknown system is used as the feedback signal of the adaptive filter in order to achieve stable system identification.
Abstract: This paper presents an adaptive IIR system identification method using Particle Swarm Optimization (PSO). System identification is a method for estimating the characteristics of an unknown system using measured input and output signals. In PSO, potential solutions called particles are updated according to simple mathematical formulas for the particles' positions and velocities. However, IIR system identification methods using PSO have the problem that it is very difficult to reach the global optimum solution once the adaptive filter becomes unstable during system identification. Moreover, the standard PSO tends to converge to a local optimal solution because of its strong directivity. In the proposed method, the particles' velocities are updated using plural better solutions in order to avoid convergence to a local optimal solution, and the output signal of the unknown system is used as the feedback signal of the adaptive filter in order to achieve stable system identification. Simulation results show that the proposed method has higher identification accuracy than conventional methods.
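
A bare-bones global-best PSO sketch for the IIR identification task described above: particles are candidate filter coefficients and the fitness is the output MSE against the unknown system. The paper's multi-best velocity update and feedback arrangement are not reproduced here; all constants are assumed.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = lfilter([0.3, 0.2], [1.0, -0.7], x)           # "unknown" IIR system output

def mse(theta):                                    # theta = [b0, b1, a1]
    y = lfilter(theta[:2], [1.0, theta[2]], x)
    return np.mean((d - y) ** 2)

n_p, dim = 30, 3
pos = rng.uniform(-0.9, 0.9, (n_p, dim))
vel = np.zeros((n_p, dim))
pbest, pcost = pos.copy(), np.array([mse(p) for p in pos])
for _ in range(200):
    g = pbest[pcost.argmin()]                      # global best particle
    vel = (0.7 * vel
           + 1.5 * rng.random((n_p, dim)) * (pbest - pos)
           + 1.5 * rng.random((n_p, dim)) * (g - pos))
    pos = np.clip(pos + vel, -0.99, 0.99)          # keep the pole inside the unit circle
    cost = np.array([mse(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
print("estimated [b0, b1, a1]:", np.round(pbest[pcost.argmin()], 3))
```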

Journal Article
TL;DR: This work proposes a new method, called Sliding Recursive Hierarchical Adaptive PCA, based on image sequence processing in a sliding window, which decreases the number of calculations needed, permits parallel implementation, and facilitates application in the real-time processing of 3D tensor images.
Abstract: The well-known Principal Component Analysis (PCA) method is the basic approach for the decomposition of 3D tensor images (for example, multi- and hyper-spectral, multi-view, computer tomography, video, etc.). As a result of the processing, their information redundancy is significantly reduced. This is of high importance for efficient compression and for the reduction of the feature space needed when object recognition or search is performed. The basic obstacle to the wide application of PCA is its high computational complexity. One approach to overcome the problem is to use algorithms based on recursive PCA. The well-known methods for recursive PCA are aimed at the processing of sequences of images represented as non-overlapping groups of vectors. In this work, a new method is proposed, called Sliding Recursive Hierarchical Adaptive PCA, based on image sequence processing in a sliding window. The new method decreases the number of calculations needed and permits parallel implementation. The results obtained from the algorithm simulation confirm its efficiency. The lower computational complexity of the new method facilitates its application in the real-time processing of 3D tensor images.

Journal Article
TL;DR: This paper presents the design and implementation of a real-time, vision-based target tracking system for an unmanned aerial vehicle (UAV), in which a particle filter framework is integrated with the Lucas-Kanade optical flow technique to predict and correct the state of the moving target based on its dynamic and observation models.
Abstract: This paper presents the design and implementation of a real-time, vision-based target tracking system for an unmanned aerial vehicle (UAV). The particle filter framework is integrated with the Lucas-Kanade optical flow technique to predict and correct the state of the moving target based on its dynamic and observation models. The optical flow estimates the feature points in the new image frame corresponding to the previously detected/estimated points. The Maximum Likelihood Estimation SAmple Consensus (MLESAC) method is applied to estimate the ego-motion transformation matrix using the old and new sets of feature points. This matrix is incorporated with the target dynamic model to give more accurate predictions of its state. Two optimized types of features are extracted to build the target observation model: extended Haar-like rectangles and edge orientation histogram (EOH) features. A Gentle AdaBoost classifier is applied to these features to distinguish and choose the best predefined number of features that most strongly represent the target. A vectorization approach is used to reduce the calculation cost of the matrix manipulations. The proposed tracking system is tested on different scenarios of the on-time modified VIVID database and achieved real-time tracking speed with a 95% successful tracking rate.

Journal Article
TL;DR: The results show improved performance obtained by the new structure in nonlinear channels by using a new modified back-propagation algorithm for a multilayer perceptron (MLP) based upon the one previously introduced in [8], [9].
Abstract: In this work, a new training strategy using a new modified back-propagation (BP) algorithm for a multilayer perceptron (MLP), based upon the one previously introduced in [8], [9], is proposed. Its performance is investigated and compared to those of the MLP-DFE based on the standard BP algorithm and the one previously introduced in [8], [9]. The results show improved performance obtained by the new structure in nonlinear channels.

Journal Article
TL;DR: The Probabilistic Matching Model for Binary Images (PMMBI) is presented, a model for the quick detection of dissimilar binary images based on random point mappings; it shows that by performing a limited number of random pixel mappings between binary images, dissimilarity detection can be performed quickly.
Abstract: In this paper we present the Probabilistic Matching Model for Binary Images (PMMBI), a model for the quick detection of dissimilar binary images based on random point mappings. The model predicts the probability of detecting dissimilarity between any pair of binary images based on the amount of similarity and the number of random pixel mappings between them. Based on the model, we show that by performing a limited number of random pixel mappings between binary images, dissimilarity detection can be performed quickly. Furthermore, the model is image-size invariant; the size of the image has absolutely no effect on the speed of dissimilarity detection. We give examples with real images to show the accuracy of the model.
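
One plausible reading of the prediction PMMBI makes, stated as code: if a fraction s of pixel positions agree, a single random mapping hits a mismatch with probability 1 - s, so m independent mappings detect dissimilarity with probability 1 - s^m. This closed form is our illustration, not a formula quoted from the paper.

```python
def detection_probability(similarity: float, mappings: int) -> float:
    """Probability that at least one of `mappings` random pixel mappings
    lands on a mismatching position (assuming independent sampling)."""
    return 1.0 - similarity ** mappings

for s in (0.5, 0.9, 0.99):
    print(s, [round(detection_probability(s, m), 4) for m in (1, 10, 50)])
# No image-size term appears: only the similarity s and the number of
# mappings m matter, matching the size-invariance claim above.
```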

Journal Article
TL;DR: Through experiments on synthetic vowels, it is shown that the proposed spectrum compensation method can estimate the power spectrum more accurately than the direct and pre-emphasis LP methods.
Abstract: This paper proposes a linear prediction (LP) method to estimate accurately the original power spectrum of the input speech signal. A prediction error filter (PEF) is used as a pre-processor, and the LP based power spectrum estimation is compensated by the frequency characteristics of the designed PEF. Through experiments on synthetic vowels, we show that the proposed spectrum compensation method can estimate the power spectrum more accurately than the direct and pre-emphasis LP methods.
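
A compact sketch of the baseline LP power spectrum estimate that the proposed compensation builds on: autocorrelation, Yule-Walker normal equations, and an all-pole spectrum. The PEF pre-processing and compensation steps themselves are not reproduced; the model order and FFT size are assumed.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_spectrum(x, order=12, nfft=512):
    """All-pole (LP) power spectrum estimate via the Yule-Walker equations."""
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size   # autocorrelation
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # LP coefficients
    err = r[0] - a @ r[1:order + 1]                             # prediction error power
    w = np.arange(nfft // 2) * 2 * np.pi / nfft
    A = 1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
    return err / np.abs(A) ** 2

x = np.random.randn(4000)        # stand-in for a speech frame
psd = lp_spectrum(x)
```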

Journal Article
TL;DR: A novel scheme is proposed for recommending friends in social media, based on the analysis and vector mapping of online lifestyles, and results on real life data exhibit the promising performance of the proposed scheme.
Abstract: Several of the existing major social networking services, such as Facebook and Twitter, recommend friends to their users based on social graph analysis or using simple friend recommendation algorithms such as similarity, popularity, or the “friend's friends are friends” concept. However, these approaches, even though intuitive and quick, consider few of the characteristics of social networks and are typically not the most appropriate ways to reflect a user's preferences in friend selection in real life. To overcome these problems, in this paper a novel scheme is proposed for recommending friends in social media, based on the analysis and vector mapping of online lifestyles. In particular, for each user a vector is created that captures her/his online behavior. Then, in the simple case, vector matching is performed so that the top matches are selected as potential friends. In a more sophisticated case, the profiles most similar to the user under investigation are detected and a collaborative recommendation approach is proposed. Experimental results on real life data exhibit the promising performance of the proposed scheme.
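
A tiny sketch of the "simple case" above: each user is a lifestyle vector and the top cosine matches are proposed as friends. The user names and vectors are invented for illustration.

```python
import numpy as np

users = {"ana": [5, 0, 2, 1], "ben": [4, 1, 2, 0], "eva": [0, 5, 0, 4]}

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(target: str, k: int = 2):
    """Rank all other users by cosine similarity of lifestyle vectors."""
    scores = {u: cosine(v, users[target]) for u, v in users.items() if u != target}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))   # -> ['ben', 'eva']
```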

Journal Article
TL;DR: A new approach named Event WebClickviz is proposed that performs the dual functions of visualization and behavioural analysis, based on which events are detected with high efficiency and visualized better using the proposed model.
Abstract: Event detection from online social networks based on user behaviour has been a research area that has garnered immense attention in recent years. Many works have been developed for event detection in multiple social media sources like Twitter, Facebook, YouTube, etc. User updates, including short texts, photos, and videos, can be utilized in detecting events. However, detecting common events in social media content requires efficient discrimination, as the size of the content and the number of users are large, leading to big data. In this paper, a new approach named Event WebClickviz is proposed that performs the dual functions of visualization and behavioural analysis, based on which events are detected. In this approach, the event detection problem is modelled as a clustering problem. Named entity recognition with Topical PageRank is employed for extracting the key terms in the texts, while temporal sequences of real values are estimated to build the event sequences. Features are extracted by applying the concept of sentiment analysis using term frequency–inverse document frequency (TF-IDF). Based on these features, the content is clustered using the hierarchical agglomerative clustering algorithm. Thus events are detected with high efficiency and visualized better using the proposed model. The simulation results justify the performance of the proposed Event WebClickviz.
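
A scikit-learn sketch of the clustering core described above: TF-IDF features followed by hierarchical agglomerative clustering. The toy posts stand in for social-media content, and the entity-extraction and Topical PageRank stages are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

posts = [
    "earthquake hits the coast tonight",
    "strong earthquake felt near the coast",
    "championship final tickets on sale",
    "final match tickets sold out",
]
X = TfidfVectorizer(stop_words="english").fit_transform(posts).toarray()
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(X)
print(labels)   # posts about the same event share a cluster label
```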

Journal Article
TL;DR: The aim of this work is to develop an interpolation scheme that reduces blur and noise artifacts in the input image and consequently preserves the sharpness of the edges in image interpolation.
Abstract: The interpolation task plays a key role in the reconstruction of high-resolution image quality in super-resolution algorithms. In fact, the foremost shortcoming of classical interpolation algorithms is that they often work poorly when used to eliminate blur and noise in the input image. In this sense, the aim of this work is to develop an interpolation scheme for the purpose of reducing these artifacts in the input image and consequently preserving the sharpness of the edges. The proposed method starts by estimating the edge directions using the Laplacian operator, and then interpolates the missing pixels along the strong edges using cubic convolution interpolation. We begin from a gray high-resolution image that is down-sampled by a factor of two to obtain the low-resolution image, which is then reconstructed using the proposed interpolation algorithm. The method is implemented and tested on several gray images and compared to other interpolation methods. Simulation results show the advantage of the proposed method over the other image interpolation methods in PSNR and in two perceptual quality metrics, SSIM and FSIM, in addition to the visual quality of the reconstructed images.

Journal Article
TL;DR: A newly developed Enhanced Multidimensional Hadamard Error Correcting Code (EMHC), based on the well-known Hadamard code, is introduced and its performance is compared with the Reed-Solomon code regarding the ability to preserve watermarks in the embedded video.
Abstract: Watermarking technology plays a central role in digital rights management for multimedia data. Video watermarking in particular is a real challenge because of the very high compression ratio (about 1:200). Normally the watermarks can barely survive such massive attacks, despite very sophisticated embedding strategies; watermarking can only work with a sufficiently strong error correcting code. In this paper, the authors introduce a newly developed Enhanced Multidimensional Hadamard Error Correcting Code (EMHC), which is based on the well-known Hadamard code, and compare its performance with the Reed-Solomon code regarding the ability to preserve watermarks in the embedded video. The main idea of this newly developed multidimensional Enhanced Hadamard Error Correcting Code is to map the 2D basis images into a collection of one-dimensional rows and to apply a 1D Hadamard decoding procedure on them. After this, the image is reassembled, and the 2D decoding procedure can be applied more efficiently. With this approach, it is possible to overcome the theoretical limit of error correcting capability of (d-1)/2 bits, where d is the minimum Hamming distance. Even better results could be achieved by expanding the 2D EMHC to 3D. A full description is given of the encoding and decoding procedures of such Hadamard cubes and their implementation in the video watermarking procedure. To prove the efficiency and practicability of this new Enhanced Hadamard Code, the method was applied to a video watermarking coding scheme. The video watermark embedding procedure decomposes the initial video through a multi-level interframe wavelet transform. The low-pass filtered part of the video stream is used for embedding the watermarks, which are protected by the Enhanced Hadamard or Reed-Solomon correcting code, respectively. The experimental results show that the EMHC performs much better than the RS code and seems to be very robust against strong MPEG compression.
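
For orientation, here is the baseline 1D Hadamard code that EMHC builds on: a message indexes a row of a Hadamard matrix, and decoding picks the row with maximum correlation. The 2D/3D cube construction that pushes past the (d-1)/2 limit is not reproduced here.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(16)                     # rows are mutually orthogonal codewords

def encode(msg: int) -> np.ndarray:
    return H[msg]

def decode(word: np.ndarray) -> int:
    return int(np.argmax(H @ word))  # maximum-correlation decoding

code = encode(5)
noisy = code.copy()
noisy[[0, 3, 7]] *= -1               # flip 3 of the 16 chips
print(decode(noisy))                 # -> 5; 3 errors is the classical limit for d = 8
```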

Journal Article
TL;DR: A novel approach based on the artificial bee colony algorithm is described and applied to the design of adaptive IIR filters, and its performance is compared to that of the differential evolution (DE) and particle swarm optimization (PSO) algorithms.
Abstract: The theory and design of adaptive finite impulse response (FIR) filters are well developed and widely applied in practice due to their simple analytic description of error surfaces and intrinsically stable behavior. However, studies on adaptive infinite impulse response (IIR) filters are not as common as those on adaptive FIR filters. The reason is that there are two main drawbacks in the design of adaptive IIR filters: stability during the adaptation process may not be ensured in some applications, and convergence to the optimal design is not always guaranteed because of their multi-modal error surface structures. In order to overcome these difficulties, global optimization based approaches are used in adaptive IIR filter design. One of the most recently proposed swarm intelligence based global optimization algorithms is the artificial bee colony (ABC) algorithm, which simulates the intelligent foraging behavior of honeybee swarms. In this work, a novel approach based on the artificial bee colony algorithm is described and applied to the design of adaptive IIR filters, and its performance is compared to that of the differential evolution (DE) and particle swarm optimization (PSO) algorithms.

Journal Article
TL;DR: Experimental results show that the approach for a resolution-variation video-based face recognition system using the combination of local binary patterns (LBP), principal component analysis (PCA), and a feed-forward neural network (FFNN) achieves better performance than other video-based face recognition algorithms on challenging resolution-variation video face databases, thus advancing the state-of-the-art.
Abstract: Video-based face recognition is a very challenging problem, as there is variation in resolution, illumination, pose, facial expressions, and occlusion. In this paper, we present an approach for a resolution-variation video-based face recognition system using the combination of local binary patterns (LBP), principal component analysis (PCA), and a feed-forward neural network (FFNN). We used standard databases as well as a database we created. The main purpose of this paper is to evaluate the performance of the system. To the best of our knowledge, this is the first work addressing the issue of resolution variation for video-based face recognition with this approach. We experimented with three different video face databases (created database, NRC_IIT, and HONDA/UCSD) and compared with benchmark methods. Experimental results show that our system achieves better performance than other video-based face recognition algorithms on challenging resolution-variation video face databases, thus advancing the state-of-the-art.

Journal Article
TL;DR: This paper proposes a method to control an object on a screen by extracting information about a hand with a depth camera, employing a convex hull to set a reference coordinate of the hand.
Abstract: This paper proposes a method to control an object on a screen by extracting information about a hand with a depth camera. The proposed method selects an appropriate rectangular region, then extracts the hand by setting the arm to a certain size using vector information. The method employs a convex hull to set a reference coordinate of the hand within that region. The object on the screen is moved or scaled by verifying whether the thumb and index finger are touching or not, through extraction of the outline of the hand. The location of the hand is rendered through a library capable of 3D visualization. An experiment conducted with the proposed method verified the operation of the commands on the object.

Journal Article
TL;DR: Two types of algorithms are proposed for estimating the parameters of third-order Volterra-PARAFAC models when input-output signals and kernel coefficients are real-valued: the first is the Levenberg-Marquardt algorithm and the second is the Partial Update LMS algorithm.
Abstract: Volterra models are very useful for representing nonlinear systems with vanishing memory. The main drawback of these models is the huge number of parameters to be estimated. In this paper, we present a new class of Volterra models, called Volterra-PARAFAC models, with reduced parametric complexity, obtained by considering Volterra kernels of order (p > 2) as symmetric tensors and by using a parallel factor (PARAFAC) decomposition. This paper is concerned with the problem of identification of third-order Volterra-PARAFAC models. Two types of algorithms are proposed for estimating the parameters of these models when the input-output signals and kernel coefficients are real-valued: the first is the Levenberg-Marquardt algorithm and the second is the Partial Update LMS algorithm. Simulation results illustrate the proposed identification methods.