
Showing papers presented at "Intelligent Systems Design and Applications in 2008"


Proceedings ArticleDOI
26 Nov 2008
TL;DR: Cooperative Collision Warning (CCW), which provides an active safety mechanism for vehicles on highways, is implemented by exchanging static and dynamic vehicle parameters with neighboring vehicles through inter-vehicle wireless communications.
Abstract: Highway safety has always been an important issue for the automobile industry, so much research has been conducted to prevent or reduce accidents. Cooperative Collision Warning (CCW), which provides an active safety mechanism for vehicles on highways, is implemented by exchanging static and dynamic vehicle parameters with neighboring vehicles through inter-vehicle wireless communication. The received information is not only used to calculate the relative safety distance between neighboring vehicles but is also stored in a Motor Vehicle Event Data Recorder (MVEDR) for future accident investigation. A CCW-based MVEDR can easily reconstruct the trajectories of, and interactions between, the host and neighboring vehicles after an accident. This device saves the time, labor, and cost required for accident analysis and reconstruction.
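
As a rough illustration of the kind of safety-gap check such a system could run on the exchanged parameters, the sketch below uses standard stopping-distance kinematics; the function names, reaction time, and deceleration values are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a CCW-style safety-gap check on parameters received
# from a neighboring vehicle, using standard stopping-distance kinematics.
# Names and constants are illustrative assumptions, not the paper's model.

def stopping_distance(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    """Reaction distance plus braking distance: v*t + v^2 / (2*a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def gap_is_safe(host_speed, lead_speed, gap_m,
                reaction_s=1.0, host_decel=6.0, lead_decel=6.0):
    """Safe if the host can stop within the current gap plus the distance
    the lead vehicle still covers while braking."""
    host_stop = stopping_distance(host_speed, reaction_s, host_decel)
    lead_stop = lead_speed ** 2 / (2.0 * lead_decel)
    return gap_m + lead_stop > host_stop

# Example: host at 30 m/s closing on a neighbor at 25 m/s, 40 m ahead.
print(gap_is_safe(30.0, 25.0, 40.0))   # False: too close, a warning would fire
```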

90 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP, particularly for TSP with 51 cities, where the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB.
Abstract: The shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic with an efficient mathematical structure and global search capability. The traveling salesman problem (TSP) is a complex combinatorial optimization problem that is typically used as a benchmark for testing the effectiveness and efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm to the TSP, memeplexes and submemeplexes are built, and the evolution of the algorithm, especially the local exploration within each submemeplex, is carefully adapted from the prototype SFLA. Experimental results show that the shuffled frog-leaping algorithm is efficient for small-scale TSP. In particular, for a TSP with 51 cities, the algorithm manages to find six tours that are shorter than the optimal tour provided by TSPLIB. The shortest tour length is 428.87, rather than the 429.98 cited elsewhere.
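
The memeplex/submemeplex machinery is easiest to see in the classic real-valued SFLA, sketched below on a toy function; the paper's TSP adaptation replaces this vector update with permutation moves on tours, which is not reproduced here, and the population sizes and bounds are illustrative.

```python
import numpy as np

# Classic real-valued SFLA sketch showing the memeplex/submemeplex structure.

def sfla(f, dim=2, frogs=20, memeplexes=4, iters=50, local_steps=5, bound=5.0):
    rng = np.random.default_rng(0)
    pop = rng.uniform(-bound, bound, (frogs, dim))
    for _ in range(iters):
        pop = pop[np.argsort([f(x) for x in pop])]       # shuffle: best frogs first
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)        # deal frogs round-robin
            for _ in range(local_steps):
                sub = idx[np.argsort([f(pop[i]) for i in idx])]
                best, worst = pop[sub[0]], pop[sub[-1]]
                cand = np.clip(worst + rng.random(dim) * (best - worst),
                               -bound, bound)            # leap toward memeplex best
                if f(cand) >= f(pop[sub[-1]]):           # else toward global best
                    cand = np.clip(worst + rng.random(dim) * (pop[0] - worst),
                                   -bound, bound)
                if f(cand) >= f(pop[sub[-1]]):           # else a random restart
                    cand = rng.uniform(-bound, bound, dim)
                pop[sub[-1]] = cand
    return min(pop, key=f)

print(sfla(lambda x: float(np.sum(x ** 2))))             # sphere-function demo
```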

75 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper gives an overview of research achievements in DNA computing, touches on the improved methods employed in DNA computing as well as on solving application problems, and addresses several challenges that DNA computing faces in society.
Abstract: The aim of this manuscript is to illustrate the current state of the art of DNA computing achievements, especially new approaches or methods contributing to solving either theoretical or application problems. Starting with the NP-complete problem that Adleman solved by means of a wet DNA experiment in 1994, DNA has become an appropriate alternative for overcoming the limitations of silicon computers. Today, many researchers all over the world concentrate either on improving the methods available in DNA computing or on suggesting new ways to solve engineering or application problems with a DNA computing approach. This paper gives an overview of research achievements in DNA computing and touches on the improved methods employed in DNA computing as well as on solving application problems. At the end of the discussion we address several challenges that DNA computing faces in society.

64 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: Experimental results show that the proposed bus-passenger counting algorithm can provide a high count accuracy of 92% on average.
Abstract: This paper presents an automatic people-counting system, based on video processing, for passengers getting on and off a bus. The basic scheme is to mount a zenithal camera in the bus to capture the passenger flow in both directions. Each captured frame is first divided into many blocks, and each block is classified according to its motion vector. If the number of blocks with similar motion vectors exceeds a threshold, those blocks are regarded as belonging to the same moving object, so moving objects can be segmented for counting. The number of such moving objects is then counted as the number of passengers getting on or off the bus. Experimental results show that the proposed bus-passenger counting algorithm can provide a high count accuracy of 92% on average.
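
A minimal sketch of the grouping step is shown below; it assumes one motion vector per block has already been estimated by an earlier block-matching stage, and the quantization tolerance and block-count threshold are made-up values.

```python
import numpy as np

# Group blocks with similar motion vectors and count groups above a threshold.

def count_moving_objects(motion_vectors, min_blocks=6, tol=1.0):
    """motion_vectors: (N, 2) array of per-block (dx, dy) displacements."""
    mv = np.asarray(motion_vectors, dtype=float)
    moving = mv[np.linalg.norm(mv, axis=1) > 0.5]        # drop near-static blocks
    # Quantize vectors so that 'similar' vectors fall into the same bin.
    _, counts = np.unique(np.round(moving / tol).astype(int),
                          axis=0, return_counts=True)
    return int((counts >= min_blocks).sum())             # one object per large group

# Example: 8 blocks moving downward (one boarding passenger) plus noise blocks.
vecs = [(0, 4)] * 8 + [(3, -2), (0.1, 0.2), (-5, 1)]
print(count_moving_objects(vecs))                        # -> 1
```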

56 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: The authors present a new scheme which encrypts two secret images into two random grids without any pixel expansion and decrypts the original secrets by directly stacking the two random grids, additionally rotating one random grid by 90, 180 or 270 degrees.
Abstract: Visual secret sharing (VSS) is the well-known technique that encrypts a secret image into several share images and later decrypts the secret by stacking the share images and recognizing the result with the human visual system. Owing to its perfect secrecy, VSS is a good candidate for achieving secure e-commerce. Another visual secret sharing technique is constructed from random grids. The main advantages of VSS by random grids over conventional VC are that it requires no pixel expansion and no sophisticated codebook design. In this paper, the authors present a new scheme that encrypts two secret images into two random grids without any pixel expansion and later decrypts the original secrets by directly stacking the two random grids, and additionally by rotating one random grid by 90, 180 or 270 degrees. The proposed scheme not only avoids pixel expansion, so that the overhead of storage and communication is reduced, but also raises the capacity of secret communication.
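
For orientation, the sketch below shows the basic single-secret random-grid construction (the classic Kafri-Keren scheme) that this line of work builds on; the paper's actual contribution, encoding a second secret that is recovered after rotating one grid, is not reproduced here.

```python
import numpy as np

# Basic single-secret random-grid VSS: no pixel expansion, no codebook.

def encrypt(secret):                          # secret: 2-D array, 0=white, 1=black
    rng = np.random.default_rng(1)
    g1 = rng.integers(0, 2, secret.shape)     # first grid: pure random bits
    g2 = np.where(secret == 0, g1, 1 - g1)    # equal where white, inverted where black
    return g1, g2

def decrypt(g1, g2):
    return g1 | g2                            # stacking = pixel-wise OR of black pixels

secret = np.zeros((4, 8), dtype=int)
secret[1:3, 2:6] = 1                          # a small black rectangle
g1, g2 = encrypt(secret)
# Black secret pixels are always black in the stack; white ones are black only
# about half the time, so the secret emerges by contrast, with no pixel expansion.
print(decrypt(g1, g2))
```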

54 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: An efficient algorithm, FHSAR, for fast hiding of sensitive association rules (SAR), which can completely hide any given SAR by scanning the database only once, significantly reducing the execution time.
Abstract: With the rapid advance of network and data mining techniques, protecting the confidentiality of sensitive information in a database becomes a critical issue when releasing data to outside parties. Association analysis is a powerful and popular tool for discovering relationships hidden in large data sets. The relationships can be represented in the form of frequent itemsets or association rules. A rule is categorized as sensitive if its disclosure risk is above some given threshold. Privacy-preserving data mining is an important issue that can be applied to various domains, such as Web commerce, crime reconnoitering, health care, and customer consumption analysis. The main approach to hiding sensitive association rules is to reduce the support or the confidence of the rules. This is done by modifying transactions or items in the database. However, the modifications generate side effects: nonsensitive rules falsely hidden (lost rules) and spurious rules falsely generated (new rules). There is a trade-off between hiding sensitive rules and generating side effects. In this study, we propose an efficient algorithm, FHSAR, for fast hiding of sensitive association rules (SAR). The algorithm can completely hide any given SAR by scanning the database only once, which significantly reduces the execution time. Experimental results show that FHSAR outperforms previous works in terms of execution time required and side effects generated in most cases.
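
The support/confidence bookkeeping behind rule hiding can be sketched as follows; the greedy victim selection below is an illustrative assumption, since FHSAR's own heuristic and single-scan design are more elaborate.

```python
# Deleting an item of a sensitive rule from supporting transactions drives
# the rule's confidence below the threshold; side effects are ignored here.

def confidence(db, lhs, rhs):
    covers = [t for t in db if lhs <= t]          # transactions containing the LHS
    if not covers:
        return 0.0
    return sum(1 for t in covers if rhs <= t) / len(covers)

def hide_rule(db, lhs, rhs, min_conf=0.6):
    """Delete one RHS item from supporting transactions until the rule's
    confidence drops below min_conf."""
    item = next(iter(rhs))
    for t in db:
        if confidence(db, lhs, rhs) < min_conf:
            break
        if lhs <= t and rhs <= t:
            t.discard(item)                       # modify the transaction in place
    return db

db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'b', 'c'}, {'a', 'c'}]
print(confidence(db, {'a'}, {'b'}))               # 0.75 before hiding
hide_rule(db, {'a'}, {'b'})
print(confidence(db, {'a'}, {'b'}))               # 0.5, below the 0.6 threshold
```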

54 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: A method for improving the final accuracy and the convergence speed of Particle Swarm Optimization by adapting its inertia factor in the velocity updating equation and also by adding a new coefficient to the position updating equation is described.
Abstract: This paper describes a method for improving the final accuracy and the convergence speed of Particle Swarm Optimization (PSO) by adapting its inertia factor in the velocity-updating equation and also by adding a new coefficient to the position-updating equation. These modifications do not impose any serious requirements on the basic algorithm in terms of the number of Function Evaluations (FEs). The new algorithm is shown to be statistically significantly better than four recent variants of PSO on an eight-function test suite for the following performance metrics: quality of the final solution, time to find the solution, frequency of hitting the optima, and scalability.
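
For reference, the sketch below shows the baseline PSO updates that the paper modifies, here with a plain linearly decreasing inertia weight; the paper's adaptive inertia rule and its extra position-update coefficient are not reproduced.

```python
import numpy as np

# Baseline PSO with linearly decreasing inertia weight w:
#   v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v.

def pso(f, dim=2, n=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pval)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters          # inertia decays over time
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, float(pval.min())

print(pso(lambda p: float(np.sum(p ** 2))))              # sphere-function demo
```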

49 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: An improved background subtraction method for moving object detection under fixed-camera conditions that can quickly establish the background model, rapidly detect complete moving targets, and effectively distinguish moving shadows from moving targets.
Abstract: This paper puts forward an improved background subtraction method for moving object detection under fixed-camera conditions. First of all, considering the correlation between neighboring pixels, the algorithm establishes the Gaussian mixture model from interval-sampled pixels instead of the traditional point-by-point construction used in background subtraction. Then, combining adaptive background subtraction with symmetrical frame differencing yields a complete foreground image. Finally, chromaticity differences are used to eliminate the shadow of the moving target, effectively distinguishing moving shadows from moving targets. The results show that the algorithm can quickly establish the background model and rapidly detect complete moving targets.

48 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper demonstrates how GRA has been successfully used to identify the significant factors that affected the grain crop yield in China from 1990 to 2003, and how China's crop yield can be further improved by properly adjusting those significant factors.
Abstract: Grey relational analysis (GRA) has been widely applied in analysing multivariate time series (MTS) data. It offers an alternative that avoids the limitations of traditional statistical methods. GRA is employed to compute the grey relational grade (GRG), which can be used to describe the relationships between data attributes and to determine the important factors that significantly influence some defined objectives. This paper demonstrates how GRA has been successfully used to identify the significant factors that affected the grain crop yield in China from 1990 to 2003. The results from analysing the sample data revealed that the main factor affecting the trend of crop yield is the consumption of pesticide and chemical fertilizer, and the least important factor is agricultural labour. Thus, by properly adjusting the significant factors, China's crop yield can be further improved. Furthermore, GRA can provide a ranking scheme that gives the order of the grey relationships among the dependent and independent factors, which yields essential information such as which input factors should be considered to forecast grain crop yield more precisely when using an artificial neural network (ANN). To evaluate the performance of GRA in the ANN model, a comparison is made with the multiple linear regression (MR) statistical method. The results of the experiment show that the ANN using GRA outperforms the MR model, with 99.0% forecasting accuracy.
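
A sketch of the grey relational grade computation that GRA rests on is given below, using Deng's usual formulation with distinguishing coefficient zeta = 0.5; the series values are invented for illustration and are not the paper's data.

```python
import numpy as np

# GRG = mean of the grey relational coefficients between a reference series
# and a candidate factor series, after min-max normalization.

def grey_relational_grade(reference, series, zeta=0.5):
    x0 = np.asarray(reference, float)
    xi = np.asarray(series, float)
    x0 = (x0 - x0.min()) / (x0.max() - x0.min())   # normalize both to [0, 1]
    xi = (xi - xi.min()) / (xi.max() - xi.min())
    delta = np.abs(x0 - xi)                        # point-wise deviation
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return float(coeff.mean())                     # GRG = mean grey coefficient

crop_yield = [4.4, 4.5, 4.7, 4.6, 5.1]             # objective series
fertilizer = [2.6, 2.8, 3.2, 3.1, 3.6]             # candidate factor 1
labour     = [3.4, 3.3, 3.3, 3.2, 3.1]             # candidate factor 2
print(grey_relational_grade(crop_yield, fertilizer))  # higher grade: stronger relation
print(grey_relational_grade(crop_yield, labour))
```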

41 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: Simulation results show that the concave function strategy is an effective setting for particle swarm optimization, especially on multi-modal functions.
Abstract: The cognitive and social learning factors are two important parameters of particle swarm optimization (PSO), and many different settings have been proposed, one well-known strategy being the linear time-varying scheme proposed by Ratnaweera et al. However, due to the complex nature of optimization problems, a linear-type setting may not work well in many cases. Since a large cognitive coefficient provides strong local search capability, while a small one provides strong global search capability, three different non-linear settings are designed to further investigate the potential advantages of these two parameters. Simulation results show that the concave function strategy is effective, especially for multi-modal functions.
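
The contrast between linear and non-linear schedules can be sketched as below; the quadratic concave curve is an illustrative assumption, since the paper's exact non-linear settings are not given here.

```python
# Linear time-varying cognitive coefficient (Ratnaweera-style) versus a
# concave alternative; the quadratic form is an illustrative assumption.

def linear_c1(t, T, c_start=2.5, c_end=0.5):
    return c_start - (c_start - c_end) * t / T

def concave_c1(t, T, c_start=2.5, c_end=0.5):
    # Stays near c_start longer (prolonged local search), then drops quickly.
    return c_end + (c_start - c_end) * (1.0 - (t / T) ** 2)

T = 100
for t in (0, 25, 50, 75, 100):
    print(t, round(linear_c1(t, T), 3), round(concave_c1(t, T), 3))
# The social coefficient c2 is typically scheduled in the opposite direction.
```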

40 citations


Proceedings ArticleDOI
26 Nov 2008
TL;DR: The proposed scheme utilizes the halftone technique, cover coding table and secret coding table to generate two meaningful shares that will not arouse the attention of hackers.
Abstract: Visual cryptography (VC) schemes hide a secret image in two or more images, called shares. The secret image can be recovered simply by stacking the shares together, without any complex computation. The shares are very safe because, separately, they reveal nothing about the secret image. In this paper, a color visual cryptography scheme producing meaningful shares is proposed. These meaningful shares will not arouse the attention of hackers. The proposed scheme utilizes the halftone technique, a cover coding table and a secret coding table to generate two meaningful shares. The secret image can be decrypted by stacking the two meaningful shares together. Experimental results have demonstrated that the new scheme is perfectly applicable and achieves a high security level.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: Hierarchical agglomerative clustering is used to cluster users' browsing behaviors, and the two-level prediction model is improved to achieve a higher hit ratio.
Abstract: Predicting users' browsing behavior is an important technology in e-commerce applications. The prediction results can be used for personalization, building proper Web sites, improving marketing strategy, promotion, product supply, obtaining marketing information, forecasting market trends, and increasing the competitive strength of enterprises. In this paper, we use hierarchical agglomerative clustering to cluster users' browsing behaviors. The predictions of the two-level prediction model framework work well in general cases. However, the two-level prediction model suffers from heterogeneity in users' behavior. In this paper, we improve the two-level prediction model to achieve a higher hit ratio.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This work focuses on configuring a field-programmable gate array (FPGA) to realize the activation function used in an ANN; the PWL approximation of the AF, with greater precision, is shown to consume less silicon area than the LUT-based AF.
Abstract: Implementing an artificial neural network (ANN) in hardware is needed to fully exploit its inherent parallelism. The presented work focuses on configuring a field-programmable gate array (FPGA) to realize the activation function used in the ANN. The computation of a nonlinear activation function (AF) is one of the factors that constrain the area or the computation time. The most popular AF is the log-sigmoid function, which can be realized in digital hardware in several ways: equation approximation, lookup-table (LUT) based approaches and piecewise linear (PWL) approximation, to mention a few. A two-fold approach to optimizing the resource requirement is presented here. First, fixed-point computation (FXP), which needs minimal hardware, is used instead of floating-point computation (FLP). Second, the PWL approximation of the AF, with greater precision, is shown to consume less silicon area than the LUT-based AF. Experimental results are presented for the computation.
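
As an example of the PWL approach, the sketch below implements the well-known PLAN piecewise-linear approximation of the log-sigmoid and measures its worst-case error; the paper's own breakpoints and fixed-point word lengths may differ.

```python
import numpy as np

# PLAN approximation: four linear segments over |x|, mirrored for x < 0.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plan_sigmoid(x):
    x = np.asarray(x, float)
    a = np.abs(x)
    y = np.select(
        [a >= 5.0, a >= 2.375, a >= 1.0],
        [np.ones_like(a), 0.03125 * a + 0.84375, 0.125 * a + 0.625],
        default=0.25 * a + 0.5,
    )
    return np.where(x >= 0, y, 1.0 - y)    # exploit symmetry about (0, 0.5)

xs = np.linspace(-8, 8, 1601)
print(np.abs(plan_sigmoid(xs) - sigmoid(xs)).max())   # worst-case error ~ 0.019
```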

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper proposes a new approach using particle swarm optimization (PSO) for medical image registrations, a stochastic, population-based evolutionary computer algorithm that has been demonstrated for both rigid and non-rigid medical image registration.
Abstract: In image-guided surgery, the registration of pre- and intra-operative image data is an important issue. In registration, we seek an estimate of the transformation that registers the reference image and the test image by optimizing their metric function (similarity measure). To date, local optimization techniques, such as the gradient descent method, are frequently used for medical image registration. But these methods need good initial estimates in order to avoid local minima. In this paper, we propose a new approach using particle swarm optimization (PSO) for medical image registration. Particle swarm optimization is a stochastic, population-based evolutionary computation algorithm. The effectiveness of PSO has been demonstrated for both rigid and non-rigid medical image registration.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: A novel prediction algorithm, the EMD-SVR model, is proposed and significantly outperforms the conventional AR model and the common SVR model in FMS frequency spectrum series prediction.
Abstract: Support vector regression (SVR) is now a well-established method for non-stationary series forecasting because of its good generalization ability and guaranteed global minimum. However, SVR alone can hardly achieve satisfactory accuracy for the complicated frequency spectrum prediction in the frequency monitor system (FMS) of high-frequency radar. Empirical mode decomposition (EMD) is perfectly suited to nonlinear and non-stationary signal analysis. Using EMD, any complicated signal can be decomposed into several time series that have simpler frequency components and are thus easier to forecast accurately. Therefore, in this paper, a novel prediction algorithm called EMD-SVR is proposed. Experimental results illustrate that the EMD-SVR model significantly outperforms the conventional AR model and the common SVR model in FMS frequency spectrum series prediction.
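
A sketch of the decompose-then-forecast idea is shown below; it assumes the third-party PyEMD package and scikit-learn are installed, and the window size, SVR settings, and toy signal are illustrative choices rather than the paper's configuration.

```python
import numpy as np
from PyEMD import EMD              # third-party package, assumed installed
from sklearn.svm import SVR

# Split the series into IMFs, fit one SVR on lagged windows of each IMF,
# and sum the per-IMF one-step forecasts (the IMFs sum to the signal).

def lagged(series, window):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

def emd_svr_forecast(signal, window=8):
    imfs = EMD()(signal)           # each IMF has simpler frequency content
    pred = 0.0
    for imf in imfs:
        X, y = lagged(imf, window)
        model = SVR(C=10.0, gamma='scale').fit(X[:-1], y[:-1])
        pred += model.predict(X[-1:])[0]   # one-step-ahead forecast per IMF
    return pred

t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(0)
signal = np.sin(t) + 0.5 * np.sin(3.1 * t) + 0.1 * rng.normal(size=t.size)
print(emd_svr_forecast(signal), signal[-1])   # held-out last sample vs. actual
```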

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper verifies that Cauchy mutation can avoid the dilemma of choosing a proper mutation step size and achieves acceptable performance, except under some specific conditions, in evolutionary computation.
Abstract: In evolutionary computation, Gaussian and Cauchy mutations are two popular mutation techniques, and they are discussed thoroughly in this study. It is known that Cauchy mutation has the better ability to escape local optima, while Gaussian mutation excels at local convergence. However, some characteristics of the local escape/convergence abilities of these two mutations remain vague. Therefore, four closed-form probability equations are derived in this paper to clarify those vague behaviors of Gaussian and Cauchy mutations, and they are successfully applied to explain the simulated results on benchmark functions. Finally, this paper verifies that Cauchy mutation can avoid the dilemma of choosing a proper mutation step size and achieves acceptable performance except under some specific conditions in evolutionary computation.
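
The qualitative difference between the two operators is easy to see numerically, as in the sketch below; the step sizes and the jump threshold are illustrative.

```python
import numpy as np

# Cauchy's heavy tails yield frequent long jumps (escaping local optima),
# while Gaussian samples stay near the parent (local convergence).

rng = np.random.default_rng(0)

def gaussian_mutation(x, sigma=1.0):
    return x + sigma * rng.normal(size=np.shape(x))

def cauchy_mutation(x, scale=1.0):
    return x + scale * rng.standard_cauchy(size=np.shape(x))

parent = np.zeros(5)
g = np.array([np.abs(gaussian_mutation(parent)).max() for _ in range(10000)])
c = np.array([np.abs(cauchy_mutation(parent)).max() for _ in range(10000)])
print(np.median(g), (g > 5).mean())   # jumps beyond 5*sigma are vanishingly rare
print(np.median(c), (c > 5).mean())   # heavy tail: long jumps are common
```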

Proceedings ArticleDOI
26 Nov 2008
TL;DR: A fast updated sequential pattern tree (called FUSP-tree) structure is proposed to make the tree update process easy, and an incremental FUSP-tree maintenance algorithm is also proposed to reduce the execution time of reconstructing the tree.
Abstract: In this paper, we attempt to handle the maintenance of sequential patterns. New transactions may come from both new customers and old customers. A fast updated sequential pattern tree (called FUSP-tree) structure is proposed to make the tree update process easy. An incremental FUSP-tree maintenance algorithm is also proposed to reduce the execution time of reconstructing the tree. The proposed approach is expected to achieve a good trade-off between execution time and tree complexity.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: An improved genetic algorithm and an Optimal Pixel Adjustment Process (OPAP) are used to enhance the quality of a stego-image; with the new image-disarranging technique, a secret image can be transferred safely while embedding 4 bits per pixel.
Abstract: Image hiding techniques embed a secret image into a cover image. The fusion of the two, called a stego-image, fools grabbers, who will not be conscious of the differences between the cover image and the stego-image. A secret image can be transferred safely using this technique. In general, we disarrange each pixel in the secret image and adjust all of them to form a suitable string of bits that can be embedded. These bits are then embedded into the corresponding places in the cover image, which becomes a stego-image that hides the secret data. This paper suggests a new image-disarranging technique. It uses an improved genetic algorithm and an Optimal Pixel Adjustment Process (OPAP) to enhance the quality of the stego-image. Experimental results show that the stego-image is indistinguishable from the cover image. The stego-image can embed 4 bits per pixel, and its mean-square error is much lower than the results for previous methods.
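
A sketch of k-bit LSB substitution followed by OPAP is given below; the GA-based disarrangement stage is not shown, and the pixel values are invented for illustration.

```python
import numpy as np

K = 4  # bits embedded per pixel, as in the paper

def embed_opap(cover, secret_vals, k=K):
    mask = (255 >> k) << k                   # keep the top (8 - k) bits
    stego = (cover & mask) | secret_vals     # plain k-bit LSB substitution
    err = stego.astype(int) - cover.astype(int)
    step = 1 << k
    # OPAP: shifting a pixel by +/- 2^k keeps the k embedded LSBs intact but
    # roughly halves the worst-case embedding error when |err| > 2^(k-1).
    adj = np.where((err > step // 2) & (stego >= step), stego - step, stego)
    adj = np.where((err < -step // 2) & (adj <= 255 - step), adj + step, adj)
    return adj.astype(np.uint8)

cover = np.array([200, 13, 255, 96], dtype=np.uint8)
bits = np.array([1, 15, 0, 9])               # 4-bit secret values to hide
stego = embed_opap(cover, bits)
print(stego)                                 # stego pixels stay close to the cover
print(stego & 15)                            # the k LSBs still decode the secret
```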

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper proposes a visual secret sharing scheme for color image encryption with robust and progressive decryption abilities, combined with a data hiding technique that has two functions: one encodes some secret content and the other embeds extra information, e.g., a permutation key.
Abstract: This paper proposes a visual secret sharing scheme for color image encryption with robust and progressive decryption abilities. Robust decryption means the secret content can still be decrypted from the shares even if the shares are corrupted by intentional or unintentional alterations. In progressive decryption, three quality levels of the secret image can be decrypted from the same shares. A lower-visual-quality version of the secret content can be revealed by stacking the shares. A halftone version of the secret can be decrypted by simply XORing the shares. Furthermore, a nearly lossless version of the secret content can be reconstructed by a decoding procedure. The method is combined with a data hiding technique that has two functions: one encodes some secret content and the other embeds extra information, e.g., a permutation key. Experimental results show the effectiveness of the proposed method.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: A new method based on cosine distance and normalized distance measures is proposed; it first indexes the trademark image database to narrow the search and reduce computation time, and then calculates similarities between feature vectors to obtain the total similarity.
Abstract: Measuring perceptual similarity and defining an appropriate similarity measure between trademark images remain largely open problems. Most researchers have used the Euclidean distance. This measure considers the difference in magnitude rather than just the correlation of the features. We propose a new method based on cosine distance and normalized distance measures. The cosine distance metric normalizes all feature vectors to unit length and makes the measure invariant to relative in-plane scaling of the image content. The normalized distance combines two distance measures, cosine distance and Euclidean distance, and is more accurate than either method alone. The proposed measures take into account the integration of global features (invariant moments and eccentricity) and local features (entropy histogram and distance histogram). The method first indexes the trademark image database (DB) in order to narrow the search and reduce computation time, and then calculates similarities between feature vectors to obtain the total similarity. We have used the retrieval efficiency equation to test the accuracy of our method. The results obtained show that cosine distance, and the normalized combination of cosine and Euclidean distance, provide a significant improvement over the Euclidean distance.
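
A sketch of the two proposed measures is given below; the equal weighting in the combined distance and the feature vectors are illustrative assumptions.

```python
import numpy as np

# Cosine distance (magnitude-invariant) and a normalized combination of
# cosine and Euclidean distance for comparing trademark feature vectors.

def cosine_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def normalized_distance(a, b, w=0.5):
    # Euclidean distance on unit-length vectors, mixed with cosine distance.
    a = np.asarray(a, float) / np.linalg.norm(a)
    b = np.asarray(b, float) / np.linalg.norm(b)
    return w * cosine_distance(a, b) + (1 - w) * np.linalg.norm(a - b)

q  = [0.3, 0.9, 0.2, 0.1]     # e.g. moments, eccentricity, histogram features
t1 = [0.6, 1.8, 0.4, 0.2]     # same shape as q, rescaled: distance should be ~0
t2 = [0.9, 0.1, 0.3, 0.6]     # genuinely different trademark
print(cosine_distance(q, t1), cosine_distance(q, t2))
print(normalized_distance(q, t1), normalized_distance(q, t2))
```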

Proceedings ArticleDOI
26 Nov 2008
TL;DR: The experimental results show that the proposed watermarking method can solve the watermarking problem of new-viewpoint video frames generated by DIBR without much effect on image quality.
Abstract: With the development of the theory of new-viewpoint generation, methods such as GBR (geometry-based rendering) and IBR (image-based rendering) have become hot subjects. However, there is not yet any watermarking method for them. In this paper, we propose the first watermarking method for new-viewpoint video frames generated by DIBR (depth image-based rendering). To preserve the watermark information during the generation of the new-viewpoint video frame, we choose blocks in the foreground object of the original video frame to embed the watermark, since pixels in such objects are more likely to be preserved during warping, and therefore the watermark is also preserved well. The experimental results show that the proposed watermarking method can solve the watermarking problem of new-viewpoint video frames generated by DIBR without much effect on image quality.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: Through the design of the algorithm and experiments on a Pioneer III mobile robot, it can be seen that dynamic mobile robot path planning with this method produces no collisions with obstacles.
Abstract: This paper presents a path planning algorithm for robots in unknown environments. The robot's working environment is expressed by a grid model; a digital potential field generates the initial path population, and optimization finds the shortest path. The individual evaluation uses separate fitness functions for feasible and infeasible paths, and deletion and insertion operators are added to meet the requirement of avoiding obstacles during path planning. From the design of the algorithm and experiments on a Pioneer III mobile robot, we can see that dynamic mobile robot path planning with this method produces no collisions with obstacles; the planned path is a short, smooth curve, with satisfactory planning quality and convergence rate.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: The design and implementation of zero-copy technology for the Linux operating system (kernel version 2.6.11) is presented, achieved by modifying its network device driver snull.c and improving on copy-on-write (COW) technology.
Abstract: Zero-copy has been a hot research topic for a long time; it is an underlying technology that supports many applications, including multimedia retrieval, data mining, efficient data transfer, and so on. Zero-copy means that during message transmission there is no data copying among memory segments on any network node. When a message is sent out, the data packets in user application space go through the network interface directly and reach the network outside; when a message is received, the same path is used, meaning the data packets are transmitted into user application space directly. In this paper we present the design and implementation of zero-copy technology for the Linux operating system (kernel version 2.6.11), achieved by modifying its network device driver snull.c and improving on copy-on-write (COW) technology. The main method we used is a combination of MMAP and PROC procedures to implement the test program and the test strategies; finally, we successfully simulated the ARP protocol module in the VHDL language.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper addresses the identification of Tor and Web-Mix traffic, two of the most famous anonymity tools, and proposes a traffic identification scheme based on traffic fingerprint extraction and matching.
Abstract: With the wide use of anonymity tools, both blocking and anti-blocking of these tools have become hot topics, and traffic identification for the corresponding tools is a key issue for both. In this paper, we address the identification of Tor and Web-Mix traffic, two of the most famous anonymity tools. Taking advantage of typical methods for traffic identification, we propose a traffic identification scheme based on traffic fingerprint extraction and matching. The fingerprints comprise specific strings, packet lengths, and the frequency of the packets' sending times. The details of the design and implementation of this traffic identification scheme for both Tor and Web-Mix are presented. The feasibility of the proposed scheme is shown by simulation experiment results.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: Two efficient schemes to enhance the performance of authentication during handover in mobile WiMAX: one adopts an efficient shared-key-based EAP method, and the other performs the authentication in the SA-TEK three-way handshake of the PKMv2 process.
Abstract: Mobile WiMAX is the next generation of broadband wireless network. It allows users to roam over the network at vehicular speeds. However, when a mobile station moves from one base station to another, it must be authenticated again. This may delay communication, especially for real-time applications such as VoIP and Pay-TV systems. In this paper, we propose two efficient schemes to enhance the performance of authentication during handover in mobile WiMAX. The first scheme adopts an efficient shared-key-based EAP method instead of the standard EAP method used in handover authentication. The second skips the standard EAP method and performs the authentication in the SA-TEK three-way handshake of the PKMv2 process. In addition, security proofs of our schemes are provided in this paper.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: Experimental results demonstrate that the typical block matching algorithm using the proposed adaptive template block performs better than one using the fixed template block in object tracking.
Abstract: This paper proposes an adaptive template block for object tracking in typical block matching algorithms. The bounding box of a moving object has different scales as the object moves from near to far or from far to near. Therefore, typical block matching algorithms using a fixed template block yield inaccurate object tracking and object recognition. The proposed adaptive template block is used in the block matching algorithm to obtain accurate and reliable tracking of objects at different scales. Experimental results demonstrate that the typical block matching algorithm using the proposed adaptive template block performs better than one using the fixed template block in object tracking.
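
The sketch below shows the underlying SAD block-matching search that a template (fixed or adaptive) plugs into; the paper's rule for rescaling the template with the object's bounding box is not reproduced, and the scene data are synthetic.

```python
import numpy as np

# SAD block matching: find the block in the current frame that best matches
# the template cropped from the previous frame.

def sad_search(frame, template, center, search=8):
    """Return the top-left corner in `frame` minimizing the sum of absolute
    differences (SAD) against `template`, within +/- `search` of `center`."""
    th, tw = template.shape
    best, best_pos = np.inf, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = center[0] + dy, center[1] + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue
            sad = np.abs(frame[y:y + th, x:x + tw].astype(int)
                         - template.astype(int)).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64))
template = prev[20:36, 24:40]                  # 16x16 block from the previous frame
frame = np.roll(prev, (3, -2), axis=(0, 1))    # whole scene shifted by (+3, -2)
print(sad_search(frame, template, (20, 24)))   # -> (23, 22), recovering the shift
```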

Proceedings ArticleDOI
26 Nov 2008
TL;DR: Experimental results show that compared with existing intra refresh methods and flexible macroblock ordering, the proposed method can achieve a considerable improvement on both objective peak signal noise ratio (PSNR) and subjective visual quality at the same bit rate and packet loss rate.
Abstract: Intra refresh is well known as a simple technique for eliminating temporal error propagation in a predictive video coding system. However, for surveillance video systems, most existing intra refresh methods do not make good use of the properties of surveillance video. This paper proposes an efficient error recovery scheme using a novel intra refresh based on motion-affected region tracking (MTIR). For every frame between two successive intra refresh frames, the motion-affected regions are statistically analyzed, and the error-sensitive ones are intra refreshed in a refresh frame. To suppress potential spatial and temporal error propagation, constrained intra prediction is used for the intra refreshed macroblocks, and the number of reference frames of an inter-predicted frame that follows an intra refreshed frame is limited. Experimental results show that, compared with existing intra refresh methods and flexible macroblock ordering, the proposed method achieves a considerable improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality at the same bit rate and packet loss rate.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: A novel annotation scheme based on a neural network (NN) is first proposed for characterizing the hidden association between the visual and textual modalities, and it is revealed that high annotation accuracy can be achieved at the image level.
Abstract: Automatic image annotation (AIA) is an effective technology for improving image retrieval. In this paper, a novel annotation scheme based on a neural network (NN) is first proposed for characterizing the hidden association between two modalities, i.e., the visual and the textual modalities. Furthermore, latent semantic analysis (LSA) is applied to the NN-based annotation scheme (denoted LSA-NN) to discover the latent contextual correlation among the keywords, which is neglected by many previous annotation methods. Instead of working at the region level as most previous works do, the LSA-NN based annotation scheme is built at the image level to avoid prior image segmentation. The experimental results reveal that high annotation accuracy can be achieved at the image level.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: A hybrid Taguchi particle swarm optimization (HTPSO) algorithm for face recognition based on multilayer neural networks as the identification model; the experimental results demonstrate that the proposed HTPSO learning algorithm achieves better recognition rates than the other approaches.
Abstract: We present a method of face recognition using facial texture and surface information. We first use Gabor wavelets to extract local features at different scales and orientations from gray facial images, then combine the texture with the surface feature vectors based on principal component analysis (PCA) to obtain feature vectors from the gray and facial surface images. We propose a hybrid Taguchi particle swarm optimization (HTPSO) algorithm for face recognition based on multilayer neural networks as an identification model. Experimental results demonstrate the efficiency of our method for different face poses and facial expressions. In addition, our work is compared with other approaches such as back-propagation (BP), particle swarm optimization (PSO) and the genetic algorithm (GA). Across different data modalities, the experimental results demonstrate that the proposed HTPSO learning algorithm achieves better recognition rates than the other approaches. The texture and shape features can improve the efficiency of face recognition.

Proceedings ArticleDOI
26 Nov 2008
TL;DR: It is shown that all RST-based methods outperform the extractive summarizer, that hybrid methods produce worse summaries, and that Mann and Thompson's strong assumption in the summarization and RST research areas is not as helpful as previously imagined.
Abstract: Motivated by governmental, commercial and academic interests, the automatic text summarization area has experienced an increasing amount of research and products, which has led to countless summarization methods. In this paper, we present a comprehensive comparative evaluation of the main automatic text summarization methods based on rhetorical structure theory (RST), claimed to be among the best ones. We also propose new methods and compare our results to an extractive summarizer, which belongs to a summarization paradigm with severe limitations. To the best of our knowledge, most of our results are new in the area and reveal very interesting conclusions. The simplest RST-based method is among the best ones, although all of them present comparable results. We show that all RST-based methods outperform the extractive summarizer and that hybrid methods produce worse summaries. Finally, we verify that Mann and Thompson's strong assumption in the summarization and RST research areas is not as helpful as previously imagined.