
Showing papers in "WSEAS Transactions on Signal Processing archive in 2010"


Journal Article
TL;DR: An edge detection technique that is based on ACO is presented, which establishes a pheromone matrix that represents the edge information at each pixel based on the routes formed by the ants dispatched on the image.
Abstract: Ant colony optimization (ACO) is a population-based metaheuristic that mimics the foraging behavior of ants to find approximate solutions to difficult optimization problems. It can be used to find good solutions to combinatorial optimization problems that can be transformed into the problem of finding good paths through a weighted construction graph. In this paper, an edge detection technique that is based on ACO is presented. The proposed method establishes a pheromone matrix that represents the edge information at each pixel based on the routes formed by the ants dispatched on the image. The movement of the ants is guided by the local variation in the image's intensity values. The proposed ACO-based edge detection method takes advantage of the improvements introduced in ant colony system, one of the main extensions to the original ant system. Experimental results show the success of the technique in extracting edges from a digital image.
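As a rough sketch of how such a pheromone-matrix edge detector can be organised (this is an illustrative reconstruction, not the authors' ant colony system implementation; the parameter names and values are arbitrary choices):

```python
import numpy as np

def aco_edges(img, n_ants=512, n_steps=40, alpha=1.0, beta=2.0,
              rho=0.1, seed=0):
    """Minimal ACO edge sketch: ants walk toward high local intensity
    variation and deposit pheromone; the pheromone matrix is the edge map."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Heuristic information: local intensity variation (gradient magnitude).
    gy, gx = np.gradient(img.astype(float))
    eta = np.hypot(gx, gy)
    eta /= eta.max() + 1e-12
    tau = np.full((h, w), 1e-4)              # pheromone matrix
    ants = np.column_stack([rng.integers(0, h, n_ants),
                            rng.integers(0, w, n_ants)])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(n_steps):
        for k, (i, j) in enumerate(ants):
            cand = [((i + di) % h, (j + dj) % w) for di, dj in moves]
            # Transition probability ~ pheromone^alpha * heuristic^beta.
            p = np.array([tau[c]**alpha * (eta[c]**beta + 1e-12) for c in cand])
            p /= p.sum()
            i2, j2 = cand[rng.choice(len(cand), p=p)]
            tau[i2, j2] += eta[i2, j2]       # local pheromone deposit
            ants[k] = (i2, j2)
        tau *= (1.0 - rho)                   # pheromone evaporation
    return tau > tau.mean()                  # threshold to a binary edge map
```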

72 citations


Journal Article
TL;DR: 2D images of a fetal phantom were acquired with an untracked free-hand ultrasound system; the marching cubes algorithm gives better results than contour filtering, producing higher-intensity 3D images whose inner parts and edges are easier to inspect.
Abstract: Three dimensional (3D) ultrasound image reconstruction from two dimensional (2D) images has become a popular method for analyzing anatomy related to abnormalities. A 3D reconstruction system is required in order to view a specific part of an object so that it can be used for analysis. In this paper, 2D images of a fetal phantom were acquired with an untracked free-hand ultrasound system. Several sets of 2D images with different numbers of slices were taken and, after basic 2D image processing, 3D reconstruction was performed with surface rendering techniques, implementing contour filtering and the marching cubes algorithm in Visual C++ 6.0 with the Visualization Toolkit (VTK). From the experiment, we conclude that the aid of a tracking sensor is important for reconstructing a better 3D image. In addition, image processing needs to be performed thoroughly, with further detailed processing steps, so that noise can be fully removed. The results also show that the marching cubes algorithm gives better results than contour filtering, generating higher-intensity 3D images whose inner parts and edges are easier to inspect. Increasing the number of slices also improves the accuracy of the reconstructed 3D image.
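As a minimal sketch of the surface-extraction step, the marching cubes stage might look like the following, using scikit-image in place of the paper's VTK/C++ pipeline; the file name, volume shape, and iso-level are illustrative assumptions:

```python
import numpy as np
from skimage import measure

# Hypothetical volume: a stack of pre-processed 2-D ultrasound slices.
volume = np.load("fetal_phantom_slices.npy")   # shape (n_slices, rows, cols)

# Extract an isosurface; the iso-level 0.5 is an illustrative choice.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```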

18 citations


Journal Article
TL;DR: An optimal location and path-finding algorithm for the autonomous motion of unmanned ground vehicles in a real, general environment is presented, together with its basic principles and mathematical design.
Abstract: This paper deals with an optimal location and path-finding algorithm in a real, general environment. The algorithm is designed for the autonomous motion of unmanned ground vehicles, a problem whose importance stems from the wide application of unmanned vehicles in the modern world. The article covers a general discussion of the problem, the basic principles of our optimal path-finding algorithm, the issue of optimum maneuvering in a general environment at the local and global levels, and the mathematical design of the algorithm.
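The paper's own algorithm is not reproduced here, but a generic grid-based A* search conveys the flavor of optimal path-finding on a discretized environment; the grid encoding and 4-connected moves are illustrative assumptions:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Illustrative grid A*: grid[i][j] == 1 marks an obstacle;
    moves are 4-connected with unit cost."""
    tick = count()                 # tie-breaker so heap never compares nodes
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), next(tick), start, None)]
    came, g = {}, {start: 0}
    while open_set:
        _, _, node, parent = heapq.heappop(open_set)
        if node in came:
            continue
        came[node] = parent
        if node == goal:           # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        i, j = node
        for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and grid[n[0]][n[1]] == 0
                    and g[node] + 1 < g.get(n, float('inf'))):
                g[n] = g[node] + 1
                heapq.heappush(open_set, (g[n] + h(n), next(tick), n, node))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the obstacle row
```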

13 citations


Journal Article
TL;DR: Combinations of concert hall impulse responses and sound control room impulse responses are investigated to find the practical impact of a sound control room on the acoustical parameter values of a concert hall recording played back in that room.
Abstract: Live recordings of music and speech in concert halls have acoustical properties, such as reverberation, definition, clarity and spaciousness. Sound engineers play back these recordings through loudspeakers in sound control rooms for audio CD or film, and the acoustical properties of these rooms influence the perceived acoustics of the live recording. To find the practical impact of 'room in room' acoustics in general, combinations of random room acoustic impulse responses have been investigated using convolution techniques. To find the practical impact of a sound control room on the acoustical parameter values of a concert hall recording played back in that room, combinations of concert hall impulse responses and sound control room impulse responses have been investigated. It is found that to accurately reproduce a steady sound energy decay rate (related to the reverberation time), the playback room should have at least twice this decay rate, under diffuse sound field conditions. For energy modulations (related to speech intelligibility) this decay rate should be more than four times higher. Finally, initial energy ratios (related to definition and clarity) require auditive judgement in the direct sound field. ITU recommendations used for sound control room design are sufficient for reverberation and speech intelligibility judgement of concert hall recordings. Clarity judgement needs a very high decay rate, while judgement of spaciousness can only be done over headphones.
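The cascade at the heart of this 'room in room' analysis is a convolution of impulse responses. A minimal sketch, assuming the impulse responses and a dry recording are available as 1-D NumPy arrays at a common sample rate (the file names are hypothetical):

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical impulse responses and recording, all at the same sample rate.
hall_ir = np.load("concert_hall_ir.npy")
control_ir = np.load("control_room_ir.npy")
recording = np.load("dry_recording.npy")

# 'Room in room': the hall is heard through the control room, so the two
# impulse responses cascade by convolution.
combined_ir = fftconvolve(hall_ir, control_ir)
heard = fftconvolve(recording, combined_ir)
```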

10 citations


Journal Article
Jalal Karam
TL;DR: Various speech processing techniques in the time, time-frequency and time-scale domains are surveyed for the purposes of recognition and compression, emphasizing the role of Wavelet Transforms in recognizing and compressing speech signals.
Abstract: In this paper, various speech processing techniques in the time, time-frequency and time-scale domains are presented for the purposes of recognition and compression. An examination of the human cochlea is included, revealing that it in effect performs a Wavelet Transform-like representation. The interplay between theory and application is illustrated through a variety of works accomplished in this direction. In particular, we emphasize the role of Wavelet Transforms in recognizing and compressing speech signals.
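As a toy illustration of wavelet-based speech compression (not a method taken from the paper), a discrete wavelet decomposition can be thresholded and reconstructed with PyWavelets; the synthetic signal, wavelet choice, and threshold are arbitrary assumptions:

```python
import numpy as np
import pywt

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)  # stand-in for speech

# Five-level discrete wavelet decomposition; 'db4' is an illustrative choice.
coeffs = pywt.wavedec(speech, 'db4', level=5)

# Crude compression: zero small coefficients, then reconstruct.
thr = 0.05 * max(np.abs(c).max() for c in coeffs)
coeffs = [pywt.threshold(c, thr, mode='hard') for c in coeffs]
reconstructed = pywt.waverec(coeffs, 'db4')
```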

10 citations


Journal Article
TL;DR: A two-stage denoising method in a multi-wavelet context is proposed, based on diversification followed by wavelet fusion.
Abstract: We propose the use of a new implementation of the hyperanalytic wavelet transform (HWT) in association with a Maximum a Posteriori (MAP) filter named bishrink. Wavelet-based denoising methods are sensitive to the selection of the mother wavelet. Taking into account the drawbacks of the bishrink filter and this sensitivity to the mother wavelet, we propose a two-stage denoising method in a multi-wavelet context, based on diversification followed by wavelet fusion. Simulation examples and comparisons demonstrate the performance of the proposed method.
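The core bishrink rule is the Sendur-Selesnick bivariate MAP shrinkage, which the paper applies in the hyperanalytic wavelet domain; a sketch of just that rule (the HWT, diversification, and fusion stages are omitted):

```python
import numpy as np

def bishrink(w, w_parent, sigma_n, sigma):
    """Bivariate MAP ('bishrink') shrinkage of a wavelet coefficient w,
    given its parent-scale coefficient, the noise level sigma_n, and the
    local signal level sigma (Sendur-Selesnick bivariate model)."""
    r = np.sqrt(np.abs(w)**2 + np.abs(w_parent)**2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0) / (r + 1e-12)
    return gain * w
```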

7 citations


Journal Article
TL;DR: LLE is extended using the kernel technique, giving rise to the KLLE algorithm, which is first utilized to reduce data dimension and extract features from high resolution range profiles (HRRP).
Abstract: This paper presents a radar target recognition method using kernel locally linear embedding (KLLE) and a kernel-based nonlinear representative and discriminative (KNRD) classifier. Locally linear embedding (LLE) is one of the representative manifold learning algorithms for dimensionality reduction. In this paper, LLE is extended using the kernel technique, which gives rise to the KLLE algorithm. A KNRD classifier is a combined version of a kernel-based nonlinear representor (KNR) and a kernel-based nonlinear discriminator (KND), two classifiers recently proposed for optimal representation and discrimination, respectively. KLLE is first utilized to reduce data dimension and extract features from a high resolution range profile (HRRP). Then, a KNRD classifier is employed for classification. Experimental results on measured profiles from three aircraft indicate the relatively good recognition performance of the presented method.
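Plain LLE, the starting point that the paper kernelises, is available in scikit-learn; a sketch on synthetic stand-in HRRP data (the kernelised KLLE and the KNRD classifier themselves are not library routines, and the neighbour and dimension counts are arbitrary):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Hypothetical HRRP matrix: each row is one range profile (128 range cells).
rng = np.random.default_rng(0)
X = rng.random((300, 128))

# Plain LLE for dimensionality reduction / feature extraction.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=10)
features = lle.fit_transform(X)          # (300, 10) low-dimensional features
```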

6 citations


Journal Article
TL;DR: A method for invariant 2D object representation is presented, based on the Mellin-Fourier Transform (MFT) modified for the application and aimed at content-based object retrieval in large image databases.
Abstract: This paper presents a method for invariant 2D object representation based on the Mellin-Fourier Transform (MFT), modified for the application. The resulting image representation is invariant to 2D rotation, scaling, and translation (RST) changes, and is additionally made invariant to significant contrast and illumination changes. The method is aimed at content-based object retrieval in large image databases, and a new algorithm for fast closest-vector search in the database is proposed as well. Experimental results obtained with the software implementation of the method demonstrate its efficiency. The method is suitable for various applications, such as detection of child sexual abuse material in multimedia files, search of handwritten and printed documents, etc.
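The classical Fourier-Mellin construction behind such RST invariance can be sketched as follows; the paper's modifications (contrast/illumination normalisation and the fast search algorithm) are not reproduced, and the log-polar radius is an arbitrary choice:

```python
import numpy as np
from skimage.transform import warp_polar

def rst_signature(img):
    """Sketch of a Fourier-Mellin style RST-invariant signature:
    |FFT| removes translation, log-polar remapping turns rotation and
    scaling into shifts, and a second |FFT| removes those shifts."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    logpolar = warp_polar(spectrum, scaling='log',
                          radius=min(img.shape) // 2)
    return np.abs(np.fft.fft2(logpolar))
```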

5 citations


Journal Article
TL;DR: In this paper, the authors investigated the correlation between the strain signals resulting from displacement and vibration responses when subjected to variable amplitude loading and found that the strain signal was linearly proportional to the vibration responses.
Abstract: This study focuses on the coil spring, one of the parts of the automotive suspension system. Over its service life, this component is driven over different surface profiles that produce different displacements and vibration responses. This paper explores the correlation between the strain signals resulting from displacement and the vibration responses under variable amplitude loading. The comparative study was implemented using the strain-life (Ɛ-N) approach and the Hybrid Integrated Kurtosis-based Algorithm for Z-notch filter Technique (Hybrid I-kaz). The Hybrid I-kaz method provides a two-dimensional graphical representation of the measured strain and vibration signals and the Hybrid I-kaz coefficient, Zh∞, which measures the degree of data scattering. An experiment was performed on an automotive suspension system machine, with test signals excited at ten different frequencies varied from 1 Hz to 10 Hz. A strain gauge of 5 mm in size and an accelerometer were mounted on the inner surface of the coil spring to measure the strain signals and vibration responses due to the loading. The time domain strain and vibration signals were then analysed using the Coffin-Manson model for fatigue damage prediction and the Hybrid I-kaz method. The total fatigue damage and the Hybrid I-kaz coefficients, Zh∞, for each signal at the different frequencies were compared in order to perform the correlation study. From the analysis, it was found that the strain signal was linearly proportional to the vibration responses.
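For the strain-life side of the analysis, the Coffin-Manson relation ea = (sf'/E)(2Nf)^b + ef'(2Nf)^c can be inverted numerically for fatigue life; the material constants below are illustrative placeholders, not the paper's data:

```python
from scipy.optimize import brentq

# Illustrative strain-life (Coffin-Manson) parameters for a steel spring;
# values are placeholders: E, sf' [MPa], b, ef', c.
E, sf, b, ef, c = 207e3, 900.0, -0.095, 0.35, -0.5

def strain_amplitude(two_Nf):
    # Coffin-Manson: ea = (sf'/E)(2Nf)^b + ef'(2Nf)^c
    return (sf / E) * two_Nf**b + ef * two_Nf**c

# Reversals to failure for a measured strain amplitude of 0.002.
two_Nf = brentq(lambda x: strain_amplitude(x) - 0.002, 1e2, 1e9)
print(f"about {two_Nf / 2:.0f} cycles to failure")
```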

5 citations


Journal Article
TL;DR: This approach affords the opportunity for the successful development of various applications - image content protection; historical heritage protection; telemedicine and healthcare; transfer of confidential information with restricted access; and many others.
Abstract: This paper presents an approach for image content protection based on layered still image decomposition with the Inverse Difference Pyramid (IDP) and digital watermark insertion. Unlike the well-known pyramid decompositions (Laplacian, Gaussian, etc.), the IDP starts from the pyramid top, which comprises the smallest number of coefficients, and continues with the next decomposition layers; that is, it represents the processed image with consecutive approximations of increasing quality. The new approach permits the insertion of multiple watermarks (resistant and fragile) in the consecutive decomposition layers and their reliable extraction by authorized users only. The fragile watermark is added as additional information in the corresponding decomposition layers; any change in the extracted fragile watermark indicates unauthorized image editing. The resistant watermark is embedded in the image spectrum phase domain: the decomposition is accomplished with the Complex Hadamard Transform (CHT), and the resistant watermark is inserted in the imaginary part of the transform coefficients, which permits the insertion of relatively large amounts of watermark data in the protected image. The images are transformed into a new format based on the IDP decomposition. This approach affords the opportunity to develop various applications: image content protection, historical heritage protection, telemedicine and healthcare, transfer of confidential information with restricted access, and many others.
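The idea of hiding bits in the imaginary part of spectral coefficients can be sketched as follows, with a 2-D FFT standing in for the paper's Complex Hadamard Transform and IDP layering, neither of which is a standard library routine; slot positions and strength are arbitrary:

```python
import numpy as np

def embed_bits_imag(block, bits, strength=4.0):
    """Illustrative stand-in only: each bit sets the sign of the imaginary
    part of one low-frequency FFT coefficient; the conjugate-mirror
    coefficient is set too, so the inverse transform stays real. Assumes
    the block is comfortably larger than the bit payload."""
    F = np.fft.fft2(block.astype(float))
    for k, bit in enumerate(bits):
        u, v = 1 + k // 8, 1 + k % 8           # low-frequency slots
        F[u, v] = complex(F[u, v].real, strength if bit else -strength)
        F[-u, -v] = np.conj(F[u, v])           # keep the image real
    return np.fft.ifft2(F).real
```

A matching extractor would re-transform the block and read back the sign of the imaginary part at the same slots.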

4 citations


Journal Article
TL;DR: Through experiments, it is found that as vehicle speed increases, the measured weight of the lighter axle of a two-axle vehicle becomes heavier than its static weight, while the heavier axle becomes lighter than its static weight.
Abstract: Through experiments we have found that as vehicle speed increases, the weight of the lighter axle of a two-axle vehicle becomes heavier than its static weight, while the heavier axle becomes lighter than its static weight. Based on this conclusion, a novel weighing method for vehicles at medium-to-high speeds is set up, which also reduces the error caused by vehicle vibration. The system hardware, software and design ideas for vehicle Weigh-In-Motion (WIM) are introduced in this article. The system also has the advantages of easy installation and portability.

Journal Article
TL;DR: The simulation results showed that the Morlet wavelet is the better approach for fatigue feature extraction, and the resulting fatigue data summarising algorithm can be used to accelerate simulation work related to fatigue durability testing.
Abstract: Fatigue feature extraction using the Short-Time Fourier Transform (STFT) and wavelet transform approaches is presented in this paper. The transformation of the time domain signal into the time-frequency domain, computationally implemented using the STFT and Morlet wavelet methods, displays the signal energy distribution with respect to particular time and frequency information. In this study, cycles with lower energy content were eliminated, with the selections based on the signal energy distribution in the time representation. The simulation results showed that the Morlet wavelet is the better approach for fatigue feature extraction. The wavelet-based analysis produced a 59 second edited signal retaining at least 94% of the original fatigue damage; this edited signal was 65 seconds (52%) shorter than the edited signal found using the STFT approach. Hence, this fatigue data summarising algorithm can be used to accelerate simulation work related to fatigue durability testing.
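The two time-frequency views being compared can be sketched as follows, using SciPy's STFT and PyWavelets' Morlet CWT on a stand-in strain record; the sampling rate, window, and scales are arbitrary assumptions:

```python
import numpy as np
import pywt
from scipy import signal

fs = 500.0
t = np.arange(0, 60, 1 / fs)
strain = np.random.randn(t.size)       # stand-in for a measured strain record

# Energy distribution via the STFT...
f, tt, Zxx = signal.stft(strain, fs=fs, nperseg=256)
stft_energy = np.abs(Zxx) ** 2

# ...and via a Morlet continuous wavelet transform.
coef, freqs = pywt.cwt(strain, np.arange(1, 64), 'morl', sampling_period=1/fs)
cwt_energy = np.abs(coef) ** 2

# Time segments whose energy falls below 10% of the peak column energy are
# candidates for removal when editing the fatigue record.
col = stft_energy.sum(axis=0)
low = col < 0.10 * col.max()
```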

Journal Article
TL;DR: The proposed algorithm uses the temporal information of video and a logical AND operation to remove most of the irrelevant background, then applies a window-based method that counts black-and-white transitions to the resulting edge map to obtain rough text blobs.
Abstract: This paper presents a robust and efficient text detection algorithm for news video. The proposed algorithm uses the temporal information of the video and a logical AND operation to remove most of the irrelevant background. A window-based method that counts black-and-white transitions is then applied to the resulting edge map to obtain rough text blobs, and a line deletion technique is applied twice to refine the text blocks. The proposed algorithm is applicable to multiple languages (English, Japanese and Chinese) and is robust to text polarity (positive or negative), various character sizes (from 4×7 to 30×30), and text alignments (horizontal or vertical). Three metrics, recall (R), precision (P), and quality of bounding preciseness (Q), are adopted to measure the efficacy of text detection algorithms. According to the experimental results on various multilingual video sequences, the proposed algorithm achieves 96% or better in all three metrics. Compared to existing methods, our method performs better, especially in the quality of bounding preciseness, which is crucial to the later binarization process.
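The temporal AND step can be sketched with OpenCV as follows; the Canny thresholds are illustrative, and the window-based transition counting and line deletion stages are omitted:

```python
import cv2

def text_candidates(frames, canny_lo=100, canny_hi=200):
    """Sketch of the temporal step: captions persist across frames, so
    AND-ing successive edge maps suppresses moving background edges."""
    acc = None
    for frame in frames:                      # frames: iterable of BGR images
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, canny_lo, canny_hi)
        acc = edges if acc is None else cv2.bitwise_and(acc, edges)
    return acc       # surviving edges are stationary: likely text strokes
```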

Journal Article
TL;DR: A novel transcoding algorithm inserts random access points into pre-encoded scalable video streams while significantly reducing computational complexity compared with the FDR and CPDT methods.
Abstract: In video applications including video broadcasting, interactive TV and IPTV, video streams are transmitted over various networks. In order to let video devices cut in to a broadcast, start playback at a random location, or jump to another location, the video service provider has to implement random access functionality. Random access points also enable user devices to refresh the decoding process in error-resilient transmission. In this paper, we propose a novel transcoding algorithm to insert random access points into pre-encoded scalable video streams. Experiments show that the proposed algorithm gains 0.5-2.1 dB PSNR over a full decode and recode (FDR) transcoder and 0.8-4 dB PSNR over a cascaded pixel domain transcoder (CPDT). Simulation results also show that the proposed transcoding algorithm significantly reduces computational complexity compared with the FDR and CPDT methods.
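The comparisons above are stated in PSNR; as a reminder of the metric (not code from the paper), PSNR = 10 log10(peak^2 / MSE) between a reference frame and a transcoded frame:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; identical frames give infinity."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```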

Journal Article
TL;DR: This application is designed for automatic measurement of orthopedic parameters, allowing human intervention in case the parameters have not been detected properly; its target segment is Hip Arthroplasty.
Abstract: Computers have become indispensable in all domains, and the medical field is no exception. The need for accuracy and speed has led to a tight collaboration between machines and human beings. Perhaps the future will bring a world where human intervention is unnecessary, but for now the best approach in the medical field is to create semiautomatic applications that help doctors with diagnoses, with following and managing the patients' evolution, and with other medical activities. Our application is designed for automatic measurement of orthopedic parameters, and allows human intervention in case the parameters have not been detected properly. The segment of the application is Hip Arthroplasty.

Journal Article
Tai-hoon Kim
TL;DR: This paper presents an existing anti-collision protocol applied to the RFID collision dilemma, cites its vulnerabilities, and suggests general security solutions; security is the paper's main focus.
Abstract: Radio Frequency Identification (RFID) systems are used to uniquely identify physical objects, with limitless possibilities and low cost. RFID is a method of remotely storing and retrieving data using devices called RFID tags. An RFID tag is a small object, such as an adhesive sticker, that can be attached to or incorporated into a product. A common scenario, however, involves numerous tags present in the interrogation zone of a single reader at the same time, which leads to collisions. RFID is prone to security threats as well, which are the main focus of this paper. We present an existing anti-collision protocol applied to the RFID collision dilemma, cite its vulnerabilities, and suggest general security solutions.
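Anti-collision protocols of this kind can be illustrated generically with a framed-slotted-ALOHA round; this is a toy model, not the specific protocol analysed in the paper, and the frame size is arbitrary:

```python
import random

def framed_slotted_aloha(n_tags, frame_size=16, max_rounds=50, seed=1):
    """Toy framed-slotted-ALOHA: each unread tag picks a random slot;
    slots with exactly one tag are read, collided tags retry next frame."""
    random.seed(seed)
    unread, rounds = n_tags, 0
    while unread and rounds < max_rounds:
        slots = [0] * frame_size
        for _ in range(unread):
            slots[random.randrange(frame_size)] += 1
        unread -= sum(1 for s in slots if s == 1)   # singleton slots succeed
        rounds += 1
    return rounds

print(framed_slotted_aloha(40))   # frames needed to read 40 tags
```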

Journal Article
TL;DR: A new blind equalization technique is proposed, the Exponential Weighted Step-size Recursive Cross Correlation CMA (EXP-RCC-CMA), based on the Exponentially Weighted Step-size Recursive Least Squares (EXP-RLS) and Recursive Cross Correlation CMA (RCC-CMA) methods; under several assumptions it achieves a higher convergence rate and minimum Mean Squared Error (MSE), and hence better receiver performance in digital systems.
Abstract: Equalization plays an important role in enabling a communication system receiver to correctly recover the symbols sent by the transmitter, since the received signals may contain additive noise and intersymbol interference (ISI). Blind equalization recovers the symbols transmitted over a communication channel without the aid of training sequences. Blind equalizers have recently attracted wide research interest since they require neither training sequences nor extra bandwidth, but their main weaknesses are high computational complexity and slow adaptation, so various algorithms have been proposed to address these drawbacks. The conventional Cross Correlation Constant Modulus Algorithm (CC-CMA) suffers from a slow convergence rate under various transmission delays, especially in wireless communication systems, which require higher speed and lower bandwidth. To overcome this, several adaptive algorithms with a rapid convergence property have been proposed based on the cross-correlation and constant modulus (CC-CM) criterion, namely the recursive least squares (RLS) version of CC-CMA (RLS-CC-CMA). This paper proposes a new blind equalization technique, the Exponential Weighted Step-size Recursive Cross Correlation CMA (EXP-RCC-CMA), based on the Exponentially Weighted Step-size Recursive Least Squares (EXP-RLS) and Recursive Cross Correlation CMA (RCC-CMA) methods, introducing several assumptions to obtain a higher convergence rate and minimum Mean Squared Error (MSE), and hence better receiver performance in digital systems. Simulation studies compare the rate of convergence, the mean square error (MSE), and the average error at different signal-to-noise ratios (SNRs) against other related blind algorithms.
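For orientation, the baseline constant-modulus update that these CC-CMA variants build on is the stochastic-gradient rule w <- w - mu * e* x with e = y(|y|^2 - R2). A sketch follows; the tap count, step size, channel, and initialisation are arbitrary, and this is the plain CMA, not the proposed EXP-RCC-CMA:

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Baseline CMA: drive |y|^2 toward the constant modulus R2."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # centre-spike initialisation
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]             # regressor, most recent first
        y[n] = np.vdot(w, xn)                  # y = w^H x
        e = y[n] * (np.abs(y[n])**2 - R2)      # CMA error term
        w -= mu * np.conj(e) * xn              # stochastic-gradient update
    return y, w

# Toy usage: QPSK symbols through a short FIR channel plus noise.
rng = np.random.default_rng(0)
sym = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))
x = np.convolve(sym, [1.0, 0.35 + 0.2j], mode='same') + noise
y, w = cma_equalize(x)
```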

Journal Article
TL;DR: The aim of the paper is to validate architectures that allow an image processing researcher to develop parallel applications and to develop algorithms that perform real-time low level operations on digital images able to be executed on a cluster of desktop PCs.
Abstract: The aim of the paper is to validate architectures that allow an image processing researcher to develop parallel applications. A comparative analysis of the possible software and hardware solutions for real-time image and video processing is presented, with emphasis on distributed computing. The challenge was to develop algorithms that perform real-time low level operations on digital images and that can be executed on a cluster of desktop PCs. The experiments on a case study show how to use parallelizable patterns and how to optimize the load balancing between the workstations.
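A minimal sketch of the strip-parallel pattern, using Python's multiprocessing on one node as a stand-in for the paper's PC-cluster setup; the 3x3 mean filter and strip count are arbitrary, and halo exchange at strip borders is omitted:

```python
import numpy as np
from multiprocessing import Pool

def mean_filter_strip(strip):
    """Low-level operation on one horizontal strip (3x3 mean as a stand-in).
    A real pipeline would pass halo rows between neighbouring strips."""
    h, w = strip.shape
    out = strip.astype(float).copy()
    out[1:-1, 1:-1] = sum(strip[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (1024, 1024)).astype(float)
    strips = np.array_split(img, 4, axis=0)      # static load balancing
    with Pool(4) as pool:                        # one worker per strip
        result = np.vstack(pool.map(mean_filter_strip, strips))
```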

Journal Article
TL;DR: The Pisarenko Harmonic Decomposition is used for the design of 2-D (two-dimensional) notch filters, using an appropriate transformation recently proposed by the author.
Abstract: In this paper, the Pisarenko Harmonic Decomposition is used for the design of 2-D (two-dimensional) notch filters. An appropriate transformation, recently proposed by the author, is used.
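The 1-D core of Pisarenko's method, which the paper carries to 2-D via the author's transformation, estimates a sinusoid's frequency from the noise eigenvector of a small autocorrelation matrix; a sketch for a single real sinusoid (the 2-D notch design itself is not reproduced):

```python
import numpy as np

def pisarenko_freq(x):
    """Pisarenko for one real sinusoid in noise: the eigenvector of the
    3x3 autocorrelation matrix belonging to the smallest eigenvalue gives
    a polynomial whose roots sit at exp(+-j*omega)."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(3)]) / len(x)
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    vals, vecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    a = vecs[:, 0]                         # noise-subspace eigenvector
    roots = np.roots(a)
    return np.abs(np.angle(roots[0]))      # radians per sample

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(pisarenko_freq(x) * fs / (2 * np.pi))   # ~50 Hz
```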

Journal Article
TL;DR: Three principles of purely non-invasive (cuffless) blood pressure measurement are described together with experimental results; the third is based on pulse waveform analysis, starting from the hypothesis that the shape of the wave depends on the blood pressure value.
Abstract: The purpose of this article is to describe possible ways of non-invasively measuring and analysing blood circulation parameters, which are important indicators of various cardiovascular diseases in clinical practice. For this purpose, methods of purely non-invasive analysis are sought. A standard approach is to use an inflatable cuff for blood pressure monitoring and analysis, with different measurement methods yielding typically systolic, diastolic, mean and, less commonly, continuous blood pressure values. An inflatable cuff, however, reduces the patient's comfort, particularly during long-term (24 hours or more) monitoring. In this article, three principles of purely non-invasive (cuffless) measurement are described together with some experimental results. The first method is based on reliable detection of the arterial cross-sectional area in a video sequence of B-mode ultrasound images using the Lucas-Kanade optical flow technique; its output is a cardiac cycle curve produced by artery diameter changes. The second method for indirect representation of blood pressure parameters is based on measuring the pulse wave velocity, using the R-wave of the electrocardiogram (ECG) as the reference signal and a photoplethysmographic sensor to acquire the pulse wave at some distance from the heart (e.g. at the forefinger). The third approach is based on pulse waveform analysis, starting from the hypothesis that the shape of the wave depends on the blood pressure value.
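The second principle amounts to measuring the delay between the ECG R-wave and the PPG upstroke. A crude sketch, assuming synchronously sampled ECG and PPG arrays (the file names, peak-detection thresholds, and upstroke criterion are all illustrative assumptions, not the paper's processing chain):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0   # Hz; hypothetical, synchronously sampled signals
ecg = np.load("ecg.npy")
ppg = np.load("ppg_forefinger.npy")

# R-peaks of the ECG serve as the cardiac reference instants.
r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)

# Per beat, the pulse transit time is the delay until the PPG upstroke
# (taken here, crudely, as the steepest rise after the R-peak).
dppg = np.gradient(ppg)
ptt = []
for r in r_peaks[:-1]:
    win = dppg[r:r + int(0.5 * fs)]
    ptt.append(np.argmax(win) / fs)
print(f"mean pulse transit time: {np.mean(ptt) * 1000:.0f} ms")
```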

Journal Article
TL;DR: A wavelet-based performance analysis of a safety barrier in a full-scale crash test is presented, in which a vehicle, a Ford Fiesta, strikes the barrier at a prescribed angle and speed.
Abstract: Nowadays, each newly produced car must conform to the appropriate safety standards and norms. The most direct way to observe how a car behaves during a collision and to assess its crashworthiness is to perform a crash test. This paper deals with the wavelet-based performance analysis of a safety barrier in a full-scale test. The test involves a vehicle, a Ford Fiesta, which strikes the safety barrier at a prescribed angle and speed. The vehicle speed before the collision was measured, vehicle accelerations in three directions at the centre of gravity were measured during the collision, and the yaw rate was measured with a gyrometer. Using normal-speed and high-speed video cameras, the behavior of the safety barrier and the test vehicle during the collision was recorded. Based on the results obtained, the tested safety barrier proved to satisfy the requirements for its impact severity level. Using Haar wavelets, the integral operational matrix property is utilized to find an algebraic form for calculating the wavelet coefficients of the acceleration signals. It is shown that Haar wavelets can reconstruct the acceleration signals well.
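As a small illustration of Haar wavelet processing of such acceleration records (this sketch uses a standard multilevel DWT, not the paper's operational-matrix formulation; the file name and decomposition level are assumptions):

```python
import numpy as np
import pywt

# Hypothetical accelerometer record from the crash test (one axis).
accel = np.load("crash_accel_x.npy")

# Multi-level Haar decomposition of the acceleration signal.
coeffs = pywt.wavedec(accel, 'haar', level=6)

# A smoothed reconstruction from the approximation coefficients alone,
# i.e. rebuilding the signal from its low-frequency content.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
smooth = pywt.waverec(approx_only, 'haar')
```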