
Showing papers in "Journal of Multimedia in 2013"


Journal Article
TL;DR: This work compares Simulated Annealing and Genetic Algorithm by simulation for the node placement problem, seeking the distribution of router nodes that provides the best network connectivity and user coverage for a set of randomly distributed clients.
Abstract: One of the key advantages of Wireless Mesh Networks (WMNs) is that they provide cost-efficient broadband connectivity. Achieving network connectivity and user coverage raises issues related to the node placement problem. In this work, we compare Simulated Annealing (SA) and Genetic Algorithm (GA) by simulation for the node placement problem. We want to find the optimal distribution of router nodes in order to provide the best network connectivity and user coverage for a set of randomly distributed clients. The simulation results show that both algorithms converge to the maximum size of the giant component (GC). However, in terms of the number of covered mesh clients, SA converges faster.
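As a point of reference only, the following is a minimal simulated-annealing sketch of the placement side of such a comparison, not the paper's implementation; the area size, coverage radius, perturbation step, and cooling schedule are illustrative assumptions, and the objective counts covered clients.

```python
# Minimal simulated-annealing sketch for mesh-router placement (illustrative only).
# Objective used here: number of clients within radio range of at least one router.
import numpy as np

rng = np.random.default_rng(0)
AREA, N_CLIENTS, N_ROUTERS, RADIUS = 100.0, 48, 16, 15.0   # assumed problem sizes
clients = rng.uniform(0, AREA, size=(N_CLIENTS, 2))

def covered_clients(routers):
    # A client is covered if some router lies within RADIUS of it.
    d = np.linalg.norm(clients[:, None, :] - routers[None, :, :], axis=2)
    return int((d.min(axis=1) <= RADIUS).sum())

routers = rng.uniform(0, AREA, size=(N_ROUTERS, 2))
best_score = covered_clients(routers)
T = 10.0
for step in range(5000):
    cand = routers.copy()
    i = rng.integers(N_ROUTERS)
    cand[i] = np.clip(cand[i] + rng.normal(0, 5.0, size=2), 0, AREA)  # perturb one router
    delta = covered_clients(cand) - covered_clients(routers)
    if delta >= 0 or rng.random() < np.exp(delta / T):                # Metropolis acceptance
        routers = cand
        best_score = max(best_score, covered_clients(routers))
    T *= 0.999                                                        # geometric cooling

print("covered clients:", best_score, "of", N_CLIENTS)
```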

59 citations


Journal ArticleDOI
TL;DR: The SVM classification algorithm is applied to remote-sensing coastline extraction, using a Landsat7 ETM+ image of Fujian Province as the test region to classify the image and extract the shoreline.
Abstract: In recent years, the support vector machine (SVM) has been widely applied to remote sensing image classification, since it can minimize the empirical error while maximizing the geometric margin. In this article, the SVM classification algorithm is applied to remote-sensing coastline extraction. A Landsat7 ETM+ image of Fujian Province serves as the test region: the image is classified and the shoreline is extracted. The shoreline is then corrected in ArcGIS based on the coastline formula, completing the coastline extraction.
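A hedged sketch of the classification step follows, using synthetic band values standing in for per-pixel ETM+ features; it is not the paper's data or parameter choices.

```python
# Illustrative SVM pixel classification (water vs. land) on synthetic band values;
# a real workflow would use the Landsat7 ETM+ band values of each pixel as features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
water = rng.normal(loc=[0.05, 0.04, 0.02], scale=0.02, size=(500, 3))  # low reflectance
land = rng.normal(loc=[0.15, 0.20, 0.35], scale=0.05, size=(500, 3))
X = np.vstack([water, land])
y = np.array([0] * 500 + [1] * 500)                  # 0 = water, 1 = land

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# The land/water boundary of the classified image would then be traced as the coastline.
```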

40 citations


Journal ArticleDOI
TL;DR: A new medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the spiking cortical model (SCM) is presented, and its effectiveness is demonstrated by comparison with existing fusion methods.
Abstract: In this paper, we present a new medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the spiking cortical model (SCM). The flexible multi-resolution, anisotropy, and directional expansion characteristics of NSCT are combined with the global coupling and pulse synchronization features of SCM. Considering the characteristics of the human visual system, two different fusion rules are used to fuse the low- and high-frequency sub-bands, respectively. Firstly, the maximum selection rule (MSR) is used to fuse the low-frequency coefficients. Secondly, the spatial frequency (SF) is applied to motivate the SCM network rather than using the coefficient values directly, and the time matrix of SCM is then used as the criterion to select the coefficients of the high-frequency sub-bands. The effectiveness of the proposed algorithm is demonstrated by comparison with existing fusion methods.

39 citations


Journal Article
TL;DR: The energy-aware passive replication scheme is evaluated in terms of the total power consumption and the average execution time and response time of each process in the presence of server faults.
Abstract: In information systems, processes requested by clients have to be performed on servers so that not only QoS (quality of service) requirements like response time are satisfied but also the total electric power consumed by the servers to perform the processes is reduced. Furthermore, each process has to be reliably performed in the presence of server faults. In our approach to reliably performing processes, each process is redundantly performed on multiple servers. The more servers a process is performed on, the more reliably it can be performed, but the more electric power the servers consume. Hence, it is critical to discuss how to reliably and energy-efficiently perform processes on multiple servers. In this paper, we discuss how to reduce the total electric power consumed by servers in a cluster where each requested process is passively replicated on multiple servers. Here, a process is performed on only one primary server while checkpoints are taken and sent to secondary servers. If the primary server is faulty, one of the secondary servers takes over from the faulty primary server and the process is performed from the checkpoint on the new primary server. We evaluate the energy-aware passive replication scheme in terms of the total power consumption and the average execution time and response time of each process in the presence of server faults.

36 citations


Journal ArticleDOI
TL;DR: Results show that the proposed scheme in this paper is robust against many image attacks, and improvement can be observed when compared to other existing schemes.
Abstract: To protect the copyright of digital images, this paper proposes a combined Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) based watermarking scheme. To embed the watermark, the cover image is decomposed by a 2-level DWT, the HL2 sub-band coefficients are divided into 4x4 blocks, and the DCT is performed on each of these blocks. Each watermark bit is embedded by a predefined pattern_0 or pattern_1 on the middle-band coefficients of the DCT. After watermark insertion, the inverse DCT is applied to each of the 4x4 blocks of the HL2 sub-band, and the inverse DWT is applied to obtain the watermarked image. For watermark extraction, the watermarked image, which may have been subjected to various image attacks, is decomposed with a 2-level DWT and block DCT in the same way as in the embedding process; the correlation between the middle-band coefficients of each DCT block and the predefined patterns (pattern_0 and pattern_1) is then calculated to decide whether a bit 0 or a bit 1 was embedded. A genetic algorithm is used to optimize the embedding and extraction parameters. The optimization maximizes the PSNR of the watermarked image and the NCC of the extracted watermark. Experimental results show that the proposed scheme is robust against many image attacks, and improvement can be observed when compared to other existing schemes.
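The embedding side of such a DWT + block-DCT scheme can be sketched as below. This is a hedged illustration, not the paper's method: the chosen detail sub-band stands in for HL2, and the middle-band pattern and gain k are assumptions (the paper optimizes its parameters with a genetic algorithm, which is omitted here).

```python
# Hedged sketch: 2-level DWT, 4x4 block DCT on a detail sub-band, and a +/- pattern
# added to middle-band coefficients for each watermark bit.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
cover = rng.uniform(0, 255, size=(256, 256))        # stand-in for the cover image
bits = rng.integers(0, 2, size=64)                  # watermark bits

coeffs = pywt.wavedec2(cover, "haar", level=2)      # [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
cH2, cV2, cD2 = coeffs[1]
band = cH2.copy()                                   # stands in for the HL2 sub-band

pattern = np.zeros((4, 4))                          # assumed "middle band" positions
for r, c in [(0, 3), (1, 2), (2, 1), (3, 0)]:
    pattern[r, c] = 1.0
k = 4.0                                             # assumed embedding strength

blocks_per_row = band.shape[1] // 4
for idx, bit in enumerate(bits):
    r, c = 4 * (idx // blocks_per_row), 4 * (idx % blocks_per_row)
    block = dctn(band[r:r + 4, c:c + 4], norm="ortho")
    block += k * pattern if bit else -k * pattern   # pattern_1 vs. pattern_0
    band[r:r + 4, c:c + 4] = idctn(block, norm="ortho")

coeffs[1] = (band, cV2, cD2)
watermarked = pywt.waverec2(coeffs, "haar")
print("max pixel change:", np.abs(watermarked - cover).max())
```

Extraction would repeat the decomposition and correlate each block's middle-band coefficients with the two patterns, taking the larger correlation as the decoded bit.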

28 citations


Journal ArticleDOI
TL;DR: Simulation results show that the denoising effect of this method is greatly improved over the traditional hard- and soft-threshold methods, and that it can be widely used in practical denoising of transformer partial discharge signals.
Abstract: To overcome the discontinuity of the hard thresholding function and the tendency of the soft thresholding function to severely attenuate signal singularities, and thus to improve the denoising effect and detect transformer partial discharge signals more accurately, this paper puts forward an improved wavelet threshold denoising method. The method is derived by analyzing the interference noise in transformer partial discharge signals and studying various wavelet threshold denoising methods, in particular those that overcome the shortcomings of the hard and soft thresholds. Simulation results show that the denoising effect of this method is greatly improved over the traditional hard- and soft-threshold methods. The method can be widely used in practical denoising of transformer partial discharge signals.
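For orientation, a baseline wavelet-threshold denoising pipeline is sketched below; the soft threshold and the universal threshold value are the standard choices the paper improves upon, and the test signal, wavelet, and level are assumptions. The paper's improved threshold function would replace the pywt.threshold call.

```python
# Baseline wavelet-threshold denoising pipeline (soft threshold shown).
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 3 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)     # stand-in for a PD signal

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
lam = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den, "db4")

def snr(ref, x):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - x) ** 2))

print("SNR noisy   :", round(snr(clean, noisy), 2), "dB")
print("SNR denoised:", round(snr(clean, denoised[:clean.size]), 2), "dB")
```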

24 citations


Journal ArticleDOI
TL;DR: An algorithm combining corner detection with a convex hull algorithm is put forward; it correctly finds the four apexes of the QR code, achieves good geometric correction, and significantly increases the recognition rate of seriously distorted QR code images.
Abstract: The angular deviation produced when a QR code image is shot by a camera causes geometric distortion of the QR code image, and the traditional QR code correction algorithm leaves residual distortion. Therefore this paper puts forward an algorithm that combines corner detection with a convex hull algorithm. Firstly, the collected QR code image with uneven illumination is binarized using local thresholding and mathematical morphology. Next, the outline of the QR code and the dots on it are found, and the distorted image is recovered by perspective collineation according to the proposed algorithm. Finally, experiments verify that the proposed algorithm correctly finds the four apexes of the QR code and achieves good geometric correction. It also significantly increases the recognition rate of seriously distorted QR code images.
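The perspective-collineation step can be illustrated as below; this is a hedged sketch, not the paper's algorithm. The `find_apexes` helper is a hypothetical stand-in (largest contour, convex hull, polygon approximation) for the paper's corner-detection plus convex-hull stage, and the corner ordering and output size are assumptions.

```python
# Illustrative perspective correction once the four QR-code apexes are known.
import cv2
import numpy as np

def rectify_qr(image, apexes, out_size=290):
    """apexes: 4x2 array ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.asarray(apexes, dtype=np.float32)
    dst = np.float32([[0, 0], [out_size - 1, 0],
                      [out_size - 1, out_size - 1], [0, out_size - 1]])
    H = cv2.getPerspectiveTransform(src, dst)          # perspective collineation
    return cv2.warpPerspective(image, H, (out_size, out_size))

def find_apexes(binary):
    # Simple stand-in for the corner-detection + convex-hull step:
    # convex hull of the largest contour, approximated to a quadrilateral.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hull = cv2.convexHull(max(contours, key=cv2.contourArea))
    approx = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
    return approx.reshape(-1, 2).astype(np.float32)    # expected to yield 4 points
```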

21 citations


Journal Article
TL;DR: A methodology for the automated generation of 3D building models from laser data and their integration with thermal images is presented; it also allows fusion of thermal data acquired from different cameras and platforms.
Abstract: Thermal efficiency of buildings is a fundamental aspect in different countries for reaching energy consumption reduction. However, even if great attention is paid to building new "zero-energy" buildings, little attention is paid to retrofitting existing ones. A fast analysis of existing buildings with Infrared Thermography (IRT) has proved to be an adequate and efficient technique. Indeed, IRT can be used to determine energy efficiency and to detect defects like thermal bridges and heat losses. However, both surface temperature and geometry are needed for a reliable evaluation of thermal efficiency, where spatial relationships are important to localize thermal defects and quantify the affected surfaces. For this reason, integration between Building Information Models (BIMs) and Infrared Thermography (IRT) can be a powerful tool to combine geometric information with thermal data in the same model. In this paper a methodology for the automated generation of 3D building models from laser data and their integration with thermal images is presented. The developed methodology also allows fusion of thermal data acquired from different cameras and platforms. In particular, this paper focuses on thermal images acquired by an Unmanned Aerial Vehicle (UAV). The proposed methodology is suitable for fast building inspections aimed at detecting thermal anomalies in a construction. Its applicability was tested on different buildings, demonstrating the performance of the procedure and its valid support in thermal surveys.

21 citations


Journal ArticleDOI
TL;DR: This paper introduces a new texture feature called Tamura, which is usually used in image retrieval algorithms, for crowd density estimation; Principal Component Analysis (PCA) retains the main information of the features in fewer dimensions, and an SVM estimates the crowd density.
Abstract: As we know, feature extraction plays an important role in crowd density estimation. In this paper, we introduce a new texture feature called Tamura, which is usually used in image retrieval algorithms. On the other hand, computation time is another issue that must be considered, especially for real-time crowd density estimation. In most methods, multiple high-dimensional features such as the gray level co-occurrence matrix (GLCM) are used to construct the input feature vector, which degrades the performance of the whole method. To solve this problem, we use Principal Component Analysis (PCA), which retains the main information of the features using fewer dimensions. Finally, we use a Support Vector Machine (SVM) to estimate the crowd density. Experiments demonstrate that our method achieves high accuracy at low computational cost compared with other existing methods.
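A minimal sketch of the described pipeline (texture features, PCA dimensionality reduction, SVM density classes) follows; the random feature vectors stand in for the Tamura/GLCM descriptors, and the number of density levels and components are assumptions.

```python
# Hedged sketch: texture feature vector -> PCA -> SVM density class.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 60))        # 60-d texture features per frame (stand-in)
y = rng.integers(0, 4, size=300)      # density levels: 0 = low ... 3 = very high (assumed)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X[:240], y[:240])
print("toy accuracy:", model.score(X[240:], y[240:]))
```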

18 citations


Journal Article
TL;DR: This paper investigates the behaviour of the OLSR routing protocol for different values of the HELLO sending interval and validity time, designing and implementing two experimental scenarios and investigating their performance for different numbers of hops.
Abstract: In Mobile Ad-hoc Networks (MANETs) the mobile terminals can be used in cooperation with each other, without having to depend on the network infrastructure. Recently, these terminals have become low-cost, high-performance, and mobile. Because the terminals are mobile, the routes change dynamically, so routing algorithms are important for the operation of MANETs. In this paper, we investigate the behaviour of the OLSR routing protocol for different values of the HELLO sending interval and validity time. We conduct real experiments in a MANET testbed. We design and implement two experimental scenarios in our academic environment and investigate their performance behaviour for different numbers of hops.

17 citations


Journal Article
TL;DR: The results demonstrated that the development of real-time, easy-to-deploy fall detection and activity recognition systems using low-cost sensors is feasible.
Abstract: We present a real-time fall detection and activity recognition system that is inexpensive and can be easily deployed using two Wii Remotes worn on the human body. Continuous 3-dimensional data streams are segmented into sliding windows and then pre-processed to remove signal noise and fill missing samples. Features including mean, standard deviation, energy, entropy, and correlation between acceleration axes, extracted from the sliding windows, are used to train the activity models. The trained models are then used to detect falls and recognize 13 fine-grained activities, including unknown activities, in real time. An experiment on 12 subjects was conducted to rigorously evaluate the system performance. With recognition rates as high as 95% precision and recall for user-dependent isolation training, 91% precision and recall for 10-fold cross validation, and as high as 82% precision and recall for leave-one-subject-out evaluation, the results demonstrate that the development of real-time, easy-to-deploy fall detection and activity recognition systems using low-cost sensors is feasible.
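The per-window features named in the abstract can be computed as sketched below; the window length and histogram-based entropy estimate are assumptions, not the paper's exact settings.

```python
# Sketch of per-window features: mean, std, energy, entropy, inter-axis correlation.
import numpy as np

def window_features(acc, bins=16):
    """acc: (n_samples, 3) acceleration window -> 1-D feature vector."""
    feats = [acc.mean(axis=0), acc.std(axis=0),
             (acc ** 2).sum(axis=0) / len(acc)]            # mean, std, energy per axis
    ent = []
    for axis in acc.T:                                      # histogram-based entropy
        p, _ = np.histogram(axis, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        ent.append(-(p * np.log2(p)).sum())
    feats.append(np.array(ent))
    corr = np.corrcoef(acc.T)                               # pairwise axis correlation
    feats.append(corr[np.triu_indices(3, k=1)])
    return np.concatenate(feats)

window = np.random.default_rng(5).normal(size=(64, 3))      # one sliding window (assumed size)
print(window_features(window).shape)                        # (15,) feature vector
```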

Journal ArticleDOI
TL;DR: Experimental results show that this CBIR system based on color and texture features, which employs the Euclidean distance as the similarity measure, is efficient in image retrieval.
Abstract: Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of the visual content (features) of the image, CBIR aims to find images that contain specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of the image as well as the similarity measures, are investigated. On the basis of the theoretical research, an image retrieval system based on color and texture features is designed. In this system, the Weighted Color Feature based on HSV space is adopted as the color feature vector; four features of the co-occurrence matrix, namely energy, entropy, inertia quadrature, and correlation, are used to construct the texture vectors; and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.

Journal ArticleDOI
TL;DR: The improved Gaussian mixture model converges faster in video motion detection, can quickly adapt to changes of the background, and greatly decreases the fall-out ratio.
Abstract: Because the classical Gaussian mixture model does not adequately consider the matching degree of its Gaussian density functions, model updating, and the background in real video motion detection, improvements are made on these three aspects. The overall architecture of the Gaussian mixture model is optimized and an improved algorithm is proposed, based on an analysis of the definition and disadvantages of the classical Gaussian mixture model. Finally, detailed experiments show that the improved Gaussian mixture model converges faster in video motion detection, can quickly adapt to changes of the background, and greatly decreases the fall-out ratio.
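For context, OpenCV's stock Gaussian-mixture background subtractor is shown below purely as the baseline this kind of work improves upon; the improved matching and updating logic of the paper is not reproduced, and the video filename and parameter values are hypothetical.

```python
# Stock Gaussian-mixture background subtraction (baseline, not the paper's model).
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame, learningRate=-1)           # -1 lets OpenCV pick the update rate
    fg = cv2.medianBlur(fg, 5)                        # simple post-filter against noise
    cv2.imshow("foreground", fg)
    if cv2.waitKey(30) & 0xFF == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```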

Journal ArticleDOI
TL;DR: A high-frequency feature expansion method based on a latent Dirichlet allocation (LDA) topic model is proposed, in which high-frequency features are extracted from each category as the feature space; the proposed method outperforms conventional methods for classifying Chinese short texts.
Abstract: Short text differs from traditional documents in its shortness and sparseness. Feature extension can ease the problem of high sparseness in the vector space model, but it inevitably introduces noise. To resolve this problem, this paper proposes a high-frequency feature expansion method based on a latent Dirichlet allocation (LDA) topic model. High-frequency features are extracted from each category as the feature space, using LDA to derive latent topics from the corpus, and topic words are extended to the short text. Extensive experiments are conducted on Chinese short messages and news titles. The proposed method for classifying Chinese short texts outperforms conventional classification methods.
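A hedged sketch of LDA-based expansion follows: fit LDA on a corpus, then append the top words of a short text's dominant topic to its features. Toy English documents stand in for the Chinese messages and news titles, and the expansion rule is an illustrative simplification of the paper's method.

```python
# Sketch of topic-word expansion for a sparse short text using LDA.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["stock market shares price rise", "match team goal league win",
          "market trading investor profit", "coach player season goal team"]
vec = CountVectorizer()
X = vec.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def expand(short_text, top_k=3):
    theta = lda.transform(vec.transform([short_text]))[0]   # topic distribution of the text
    topic = lda.components_[theta.argmax()]                  # word weights of dominant topic
    words = np.array(vec.get_feature_names_out())
    extra = words[topic.argsort()[::-1][:top_k]]
    return short_text + " " + " ".join(extra)                # expanded short text

print(expand("shares price"))
```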

Journal ArticleDOI
TL;DR: An improved single-image defogging algorithm is proposed that defogs foggy images rapidly and with high quality; it avoids the traditional algorithm's lengthy transmission optimization and reduces the complexity of the algorithm.
Abstract: Based on the dark channel prior, this paper proposes an improved single-image defogging algorithm that defogs foggy images rapidly. The algorithm applies a combination of adaptive median filtering and bilateral filtering to obtain a clear dark channel at the edges, and estimates the transmission based on the physical model of foggy images. Compared with the traditional algorithm, the estimated transmission is detailed and clear and needs no further optimization, which not only avoids the plentiful time the traditional algorithm spends optimizing the transmission, but also reduces the complexity of the algorithm. The experimental results indicate that the algorithm achieves rapid and high-quality defogging of single images.
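The dark-channel-prior building blocks referred to above can be sketched as follows; the patch size and omega follow common choices rather than the paper's, the random array stands in for a foggy image, and the paper's adaptive-median plus bilateral edge refinement is not reproduced.

```python
# Sketch of dark channel, atmospheric light, and transmission estimation.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum over a patch x patch window.
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, patch=15, omega=0.95):
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A, patch)            # transmission estimate
    return np.clip(t, 0.1, 1.0), A

foggy = np.random.default_rng(6).uniform(0.3, 1.0, size=(120, 160, 3))  # stand-in image
t, A = estimate_transmission(foggy)
recovered = (foggy - A) / t[..., None] + A                    # scene radiance recovery
print(t.shape, A)
```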

Journal ArticleDOI
TL;DR: An improved SURF (Speeded-Up Robust Features) algorithm resolves the inefficiency of the standard SURF algorithm in the autonomous landing system of a UAV; experiments demonstrate that the UAV can land autonomously without using GPS and without specific assumptions about the environment.
Abstract: In a vision-based autonomous landing system for a UAV (Unmanned Aerial Vehicle), the efficiency of object detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the UAV autonomous landing system. The improved algorithm consists of three steps: first, detect the region of the target using the Camshift (Continuously Adaptive Mean-SHIFT) algorithm; second, detect feature points in the region acquired above using the SURF algorithm; third, match the template target against the target region in each frame. Results of experiments and theoretical analysis testify to the efficiency of the algorithm. We also present experiments demonstrating that the UAV can land autonomously without using GPS and without specific assumptions about the environment.

Journal ArticleDOI
TL;DR: This paper proposes a novel approach for automatically detecting near-duplicate images based on a visual word model, using SIFT descriptors to represent image visual content, and presents a local-feature-based image similarity estimation method that computes histogram distance.
Abstract: In recent years, near-duplicate image detection has become one of the most important problems in image retrieval, and it is widely used in many application fields, such as detecting copyright violations and forged images. Therefore, in this paper, we propose a novel approach to automatically detect near-duplicate images based on the visual word model. SIFT descriptors, an effective method in the computer vision field for detecting local image features, are used to represent image visual content. Afterwards, we cluster the SIFT features of a given image into several clusters with the K-means algorithm. The centroid of each cluster is regarded as a visual word, and all the centroids are used to construct the visual word vocabulary. To reduce the time cost of near-duplicate image detection, locality sensitive hashing is used to map high-dimensional visual features into a low-dimensional hash bucket space, and the image visual features are then converted to a histogram. Next, for a pair of images, we present a local-feature-based image similarity estimation method that computes the histogram distance, from which near-duplicate images can be detected. Finally, a series of experiments is conducted to evaluate performance, and related analyses of the experimental results are also given.
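A common variant of the visual-word pipeline is sketched below for orientation: SIFT descriptors, a K-means vocabulary shared across images, per-image word histograms, and Euclidean histogram distance. It is not the paper's exact design (the LSH step is omitted, the vocabulary size is assumed, and it requires a recent opencv-python build in which SIFT is available).

```python
# Hedged sketch of a SIFT + K-means visual-word histogram comparison.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(gray_images, n_words=64):
    sift = cv2.SIFT_create()
    all_desc = [d for img in gray_images
                for _, d in [sift.detectAndCompute(img, None)] if d is not None]
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(
        np.vstack(all_desc).astype(np.float64))

def word_histogram(gray_image, vocabulary):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_image, None)
    if desc is None:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

def histogram_distance(img_a, img_b, vocabulary):
    # Smaller distance -> more likely a near-duplicate pair.
    return np.linalg.norm(word_histogram(img_a, vocabulary) - word_histogram(img_b, vocabulary))
```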

Journal ArticleDOI
TL;DR: Experiments show that using the GLCM texture extraction method based on the NSCT domain and multi-feature fusion in SAR image segmentation improves the segmentation accuracy rate and yields good edge retention.
Abstract: The gray level co-occurrence matrix (GLCM) is an important method for extracting the image texture features of synthetic aperture radar (SAR). However, GLCM can only extract textures at a single scale and in a single direction. A texture feature extraction method combining the nonsubsampled contourlet transform (NSCT) and GLCM is therefore proposed, so as to extract texture features at multiple scales and in multiple directions. We first conduct multi-scale and multi-direction decomposition of the SAR images with NSCT, then extract the co-occurrence quantities with GLCM from the obtained sub-band images, conduct correlation analysis on the extracted quantities to remove redundant feature quantities, and combine them with the gray-level features to constitute the multi-feature vector. Finally, we make full use of the advantages of the support vector machine for small sample sets and its generalization ability, and complete the partition of the multi-feature vector space with SVM so as to achieve SAR image segmentation. The experimental results show that the segmentation accuracy rate is improved and good edge retention is obtained by using the GLCM texture extraction method based on the NSCT domain and multi-feature fusion in SAR image segmentation.
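The multi-direction GLCM step can be illustrated as follows, using scikit-image's graycomatrix (the spelling used in recent versions); here it runs on a raw stand-in patch rather than NSCT sub-band images, since no standard Python NSCT implementation is assumed, and the gray-level quantization is an assumption.

```python
# Multi-direction GLCM texture features from one image patch (illustrative only).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(11)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # stand-in patch, 32 gray levels

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]            # four directions
glcm = graycomatrix(patch, distances=[1], angles=angles, levels=32,
                    symmetric=True, normed=True)
features = np.concatenate([graycoprops(glcm, prop).ravel()
                           for prop in ("contrast", "correlation", "energy", "homogeneity")])
print(features.shape)    # 4 properties x 4 directions = 16 texture features
```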

Journal ArticleDOI
TL;DR: Wavelet multi-level decomposition and wavelet fusion are used to determine the number of layers containing rain (snow) noise, a fusion rule based on rain (snow) noise pollution is formulated, and wavelet fusion is performed on specific layers of multiple continuous degraded images to achieve rain (snow) removal.
Abstract: Images acquired by an outdoor vision system in rain or snow have low contrast and are blurred, causing serious degradation. Traditional rain (snow) removal methods are restricted by the intensity, so their effect is not ideal. Exploiting the fact that the vision system acquires multiple different degraded images in a short time, this paper processes multiple images to realize restoration. Snow and rain have dynamic characteristics: their direction, intensity, and shape are unfixed, which makes it difficult to establish a unified physical model in the spatial domain. Analyzing them in the frequency domain, however, is not affected by these dynamic characteristics. From the perspective of the frequency domain, this paper uses wavelet multi-level decomposition and wavelet fusion to determine the number of layers containing rain (snow) noise, formulates a fusion rule based on rain (snow) noise pollution, and performs wavelet fusion on specific layers of multiple continuous degraded images to achieve rain (snow) removal. Simulation results indicate that the method not only produces ideal restoration results, but also is not restricted by noise intensity.

Journal ArticleDOI
TL;DR: A new objective evaluation algorithm is proposed that uses the ERB scale in place of the Bark-scale frequency, since the ERB scale better describes the frequency selectivity of the human ear at lower frequencies, which improves the accuracy of the measurement.
Abstract: In the Quality of Experience (QoE) evaluation of a communication system, speech quality is an important factor. Perceptual evaluation of speech quality (PESQ) is a well-known objective speech quality assessment method for voice QoE evaluation. It was proposed by the International Telecommunication Union (ITU) and is specified in the ITU-T P.862 Recommendation. PESQ applies the Bark-scale frequency to evaluate the Mean Opinion Score (MOS) for the speech quality of a voice communication system. Through our research, however, we find that the PESQ algorithm has limitations in evaluating speech quality. In order to overcome these limitations, this paper proposes a new objective evaluation algorithm that uses the ERB scale in place of the Bark-scale frequency. The ERB scale is more accurate than the Bark scale in describing the frequency selectivity of the human ear at lower frequencies. We call the new algorithm NPESQ (New Perceptual Evaluation of Speech Quality), and its accuracy is tested in our experiments. The experimental comparison between PESQ and NPESQ demonstrates the improvement of NPESQ. Therefore, it can be concluded that the new algorithm improves the accuracy of the measurement.
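For reference, commonly cited forms of the two auditory scales are given below (Glasberg-Moore ERB and the Zwicker-Terhardt Bark rate); the exact variants used in the paper may differ.

```latex
% Commonly cited forms of the ERB bandwidth / ERB-rate and the Bark rate, f in Hz.
\[
  \mathrm{ERB}(f) = 24.7\left(4.37\,\frac{f}{1000} + 1\right), \qquad
  \mathrm{ERBS}(f) = 21.4\,\log_{10}\!\left(4.37\,\frac{f}{1000} + 1\right),
\]
\[
  z_{\mathrm{Bark}}(f) = 13\,\arctan(0.00076\,f)
    + 3.5\,\arctan\!\left(\left(\frac{f}{7500}\right)^{2}\right).
\]
```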

Journal ArticleDOI
TL;DR: Simulation experiments show that the proposed improved denoising method, based on a tradeoff between soft and hard thresholds, is highly effective on practical signals, and the SNR of the reconstructed signal is greatly improved over traditional denoising methods.
Abstract: Wavelet analysis is a rapidly developing discipline that is now widely used in practice, and studying new wavelet theory, methods, and applications has important theoretical significance and practical value. Addressing the poor denoising performance of traditional wavelet transform algorithms, this paper puts forward an improved scheme based on a tradeoff between the soft and hard thresholds. The scheme builds on the traditional soft and hard thresholds, and the estimated wavelet coefficients it produces lie between those of the soft and hard threshold methods, so we call it the soft/hard threshold tradeoff method. The improved scheme is implemented by first establishing the wavelet coefficient estimator of the soft/hard threshold tradeoff method and then adding an adjustable factor to the threshold estimator, so as to adjust the size of the estimated wavelet coefficients. Simulation experiments show that the proposed improved denoising method has a strong effect on practical signal denoising, and the SNR of the reconstructed signal is greatly improved over traditional denoising methods.
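One widely used form of such a soft/hard compromise estimator is shown below, with an adjustable factor $\alpha \in [0,1]$ interpolating between hard ($\alpha = 0$) and soft ($\alpha = 1$) thresholding; the factor added in the paper's estimator may enter differently.

```latex
\[
  \hat{w}_{j,k} =
  \begin{cases}
    \operatorname{sgn}(w_{j,k})\,\bigl(|w_{j,k}| - \alpha\,\lambda\bigr), & |w_{j,k}| \ge \lambda,\\[4pt]
    0, & |w_{j,k}| < \lambda,
  \end{cases}
  \qquad 0 \le \alpha \le 1 .
\]
```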

Journal ArticleDOI
TL;DR: An optimized motion detection algorithm is introduced to overcome the flaws of the conventional background subtraction algorithm by combining an adaptive background model in HSV color space with moving-object segmentation based on fuzzy clustering to extract moving objects from each frame.
Abstract: In this paper, for modern intelligent video surveillance, we introduce an optimized motion detection algorithm aimed at overcoming the flaws of the conventional background subtraction algorithm. We combine an adaptive background model in HSV color space with moving-object segmentation based on fuzzy clustering to extract moving objects from each frame. The adaptive background model is able to restore the background thanks to the accurate description provided by the HSV color space, and the moving-object segmentation based on fuzzy clustering then distinguishes moving areas from noise areas by adaptive threshold selection. We also use the SIFT feature to improve the performance of the motion detection algorithm. The experiments show that the algorithm alleviates the impairment caused by noise and that its time complexity is low.

Journal ArticleDOI
TL;DR: Experimental results show that ICA is an effective facial expression recognition method and that the recognition rate based on ICA is higher than that based on PCA and 2DPCA.
Abstract: As an important part of artificial intelligence and pattern recognition, facial expression recognition has drawn much attention recently and numerous methods have been proposed. Feature extraction is the most important part, directly affecting the final recognition results. Independent component analysis (ICA) is a subspace analysis method and a novel statistical technique in signal processing and machine learning that aims at finding linear projections of the data that maximize their mutual independence. In this paper, we introduce the basic theory of the ICA algorithm in detail and then present the process of facial expression recognition based on the ICA model. Finally, we use the PCA and ICA algorithms to extract facial features, and an SVM classifier is then used for facial expression recognition. Experimental results show that ICA is a truly effective facial expression recognition method and that the recognition rate based on ICA is higher than that based on PCA and 2DPCA.
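A minimal sketch of the ICA-features-plus-SVM pipeline follows; the random vectors stand in for vectorized face images, and the number of components and expression classes are illustrative assumptions.

```python
# Hedged sketch: ICA feature extraction followed by SVM classification.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 1024))          # 200 face images flattened to 32x32 (stand-in)
y = rng.integers(0, 7, size=200)          # 7 basic expression classes (assumed)

model = make_pipeline(FastICA(n_components=40, random_state=0, max_iter=1000),
                      SVC(kernel="rbf"))
model.fit(X[:160], y[:160])
print("toy accuracy:", model.score(X[160:], y[160:]))
```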

Journal ArticleDOI
TL;DR: The Soldier Lower Extremity Exoskeleton is a wearable intelligent device that integrates the human body with an exoskeleton in order to improve the soldier's load-carrying ability.
Abstract: The Soldier Lower Extremity Exoskeleton (SLEE) is designed based on the biological characteristics of soldier movement. It is a wearable intelligent device that integrates the human body with an exoskeleton in order to improve the soldier's load-carrying ability. The virtual prototype model was built in UG, the FEA model was built, and the strength of the SLEE's key parts was studied by finite element simulation with finite element analysis software. The control system is analyzed, and gait stability control is realized using ZMP technology. The main controller of the control system comprises an embedded processor and a high-speed digital signal processor (DSP), and cooperative motion control of the actuators is realized through CAN technology. The current research work will help designers of this kind of device to select materials and optimize the frame design.

Journal ArticleDOI
TL;DR: An image classification optimization algorithm based on a support vector machine (SVM) is proposed, and the experimental results show that the proposed method achieves very high accuracy.
Abstract: With the rapid development of computer technology and multimedia technology, a large amount of image data appears in our daily life. If we cannot manage the images effectively, a lot of image information will be lost, and people cannot quickly retrieve the needed image data in time. At present, image classification optimization algorithms mainly include neural networks, Bayesian methods, fuzzy sets, etc. But these algorithms suffer from high training complexity, low convergence speed, and other drawbacks. In view of this, this paper proposes an image classification optimization algorithm based on the support vector machine (SVM). For image classification, this study follows these steps: (1) select the proper kernel function; (2) determine the parameters of the kernel function through the grid search method; (3) extract color and texture features from the image, which serve as the input to achieve the image classification. The experimental results show that the proposed image classification optimization method achieves very high accuracy. Keywords: Support Vector Machine (SVM); image classification; feature extraction; grid search
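Step (2) above can be illustrated with a standard grid search over the RBF-kernel SVM parameters; the parameter grid, class count, and random stand-in features replacing the color/texture extraction of step (3) are assumptions.

```python
# Sketch of grid-search parameter selection for an RBF-kernel SVM classifier.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(240, 48))            # color + texture feature vectors (stand-in)
y = rng.integers(0, 3, size=240)          # 3 image categories (assumed)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
```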

Journal ArticleDOI
TL;DR: A novel Gradient Vector Field (GVF) Snake is proposed by improving the edge map of the GVF Snake (PGVF) to enhance the capacity of the gradient-vector-field active contour model to locate the boundaries of nuclei and cytoplasm.
Abstract: In order to accurately segment the nucleus and cytoplasm of a single cervical cell in smear images, a novel Gradient Vector Field (GVF) Snake is proposed by improving the edge map of the GVF Snake (PGVF) to enhance the capacity of the gradient-vector-field active contour model to locate the boundaries of nuclei and cytoplasm. In our method, the image is converted into polar coordinates and the Sobel operator is used to calculate the horizontal boundary; a Sand Inhibition Method is then designed to suppress the influence of interfering elements on boundary location, and finally the cervical cells are segmented with GVF. Experiments performed on the Herlev dataset, which contains 917 images, show that the proposed algorithm is more efficient than RGVF with nearly the same accuracy.

Journal ArticleDOI
TL;DR: The research results show that the overall goodness-of-fit indices indicate a reasonable fit of the model and data, and can serve as a reference for the Air Force logistics sectors when making policies, as well as a basis for evaluating the success and acceptance of military-related information systems.
Abstract: The Depot-Logistic Information Management System (DLIMS) aims to provide the Air Force logistics staff of all sectors with an information technology platform for convenient management, supervision, aircraft maintenance, and supply operations. The DLIMS in Taiwan has been established for more than 10 years. However, it has not been evaluated for its effectiveness. This study integrates service quality into the Wixom and Todd [1] model to evaluate the effectiveness of the DLIMS. A survey of 273 users of DLIMS was conducted to validate the proposed model. The research results show that the overall goodness-of-fit indices indicate a reasonable fit of the model and data. This study emphasizes the importance of integrating satisfaction theory and technology acceptance theory to comprehensively illustrate the effectiveness of DLIMS. The integrative viewpoint implies that DLIMS is not only a logistics information system but also a service provider to the Air Force. Accordingly, the professional staff officers of the IS department, logistics sectors, and headquarters should simultaneously value object-based functions and service quality, in order to improve users' satisfaction, which in turn can promote users' behavioral beliefs and usage intention. The findings can serve as a reference for the Air Force logistics sectors when making policies, as well as a basis for evaluating the success and acceptance of military-related information systems.

Journal ArticleDOI
TL;DR: The main innovation of this paper is that, in the improved PCA, a radial basis function is used to construct a kernel matrix by computing the distance between two different vectors as an exponential of their 2-norm.
Abstract: This paper aims to effectively recognize human faces in images, an important problem in multimedia information processing. After analyzing the related research, the framework of the face recognition system is first illustrated, comprising the training process and the testing process. In particular, the improved PCA algorithm is used in the feature extraction module. The main innovation of this paper is that, in the improved PCA, we utilize a radial basis function to construct a kernel matrix by computing the distance between two different vectors as an exponential of their 2-norm. Afterwards, human faces are recognized by computing the distance between the test image and the training images with a nearest neighbor classifier that uses the cosine distance. Finally, experiments are conducted for performance evaluation. Compared with existing face recognition methods, the proposed scheme is more effective in recognizing human faces with high efficiency.
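A hedged sketch of that pipeline follows, using off-the-shelf RBF-kernel PCA followed by a cosine-distance nearest-neighbor decision; the random vectors stand in for face images, and the component count, gamma, and subject count are assumptions rather than the paper's settings.

```python
# Sketch: RBF-kernel PCA features, then 1-nearest-neighbor under cosine distance.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
X_train = rng.normal(size=(100, 1024))         # training face images, flattened (stand-in)
y_train = rng.integers(0, 10, size=100)        # 10 subjects (assumed)
X_test = rng.normal(size=(20, 1024))

kpca = KernelPCA(n_components=30, kernel="rbf", gamma=1e-3)   # k(x,y) = exp(-gamma*||x-y||^2)
F_train = kpca.fit_transform(X_train)
F_test = kpca.transform(X_test)

nn = KNeighborsClassifier(n_neighbors=1, metric="cosine")     # cosine-distance 1-NN
nn.fit(F_train, y_train)
print(nn.predict(F_test)[:5])
```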

Journal ArticleDOI
TL;DR: This paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and DCT, which solves security and speed problems in watermark embedding and extraction.
Abstract: Targeting the persistent security problems of digital information management systems in modern medical systems, this paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and DCT. The algorithm first deploys scrambling technology to encrypt the watermark information and then combines it with the visual feature vector of the image to generate a binary logic sequence through a hash function. The sequence is taken as the key and stored with a third party to establish ownership of the original image. Requiring no manual selection of a region of interest, no capacity constraint, and no participation of the original medical image, this kind of watermark extraction solves security and speed problems in watermark embedding and extraction. The simulation results also show that the algorithm is simple in operation and excellent in robustness and invisibility. In a word, it is more practical compared with other algorithms.
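The Arnold-transform scrambling step can be sketched as below for a square binary watermark, with the iteration count acting as a key; the DCT feature-vector and hash steps of the scheme are not shown, and the watermark size is an assumption.

```python
# Sketch of Arnold-transform (cat map) scrambling of a square binary watermark.
import numpy as np

def arnold(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

wm = (np.random.default_rng(10).random((32, 32)) > 0.5).astype(np.uint8)  # binary watermark
scrambled = arnold(wm, iterations=5)
# The map is periodic for a given N, so iterating further eventually restores wm.
print("changed pixels after scrambling:", int((scrambled != wm).sum()))
```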

Journal ArticleDOI
TL;DR: Experimental results show that the algorithm can extract facial features rapidly and with higher accuracy than conventional image searching methods.
Abstract: The active shape model (ASM) is an image searching method based on a statistical model, which is widely used to extract features. The classical ASM includes the shape model and the local profile model. The main merit of the ASM is that it uses statistics to build a model of the specific target image, introducing prior knowledge about the target object to be extracted and constraining the search result to a permissible range of variation. These characteristics make it suitable for feature extraction of any similar object. Experimental results show that the algorithm can extract facial features rapidly and with higher accuracy.