
Showing papers in "International Journal of Pattern Recognition and Artificial Intelligence in 2002"


Journal ArticleDOI
TL;DR: A new feature extraction method is proposed that converts a palmprint image from the spatial domain to the frequency domain using the Fourier Transform; experiments show that palmprint identification based on frequency-domain feature extraction is effective in terms of accuracy and efficiency.
Abstract: Palmprint identification refers to searching a database for the palmprint template that comes from the same palm as a given palmprint input. The identification process involves preprocessing, feature extraction, feature matching and decision-making. In this paper, we propose a new feature extraction method, a key step in this process, which converts a palmprint image from the spatial domain to the frequency domain using the Fourier Transform. The features extracted in the frequency domain are used as indexes to the palmprint templates in the database, and the search for the best match is conducted in a layered fashion. The experimental results show that palmprint identification based on feature extraction in the frequency domain is effective in terms of accuracy and efficiency.
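As a rough illustration of the idea, the sketch below computes frequency-domain features from a palmprint image by summing FFT magnitude energy over concentric rings; the ring-energy features and their normalization are illustrative assumptions, since the abstract does not specify the exact frequency-domain features or the layered search.

```python
# Minimal sketch of frequency-domain feature extraction for a palmprint image.
# Ring-energy features are an assumption, not the paper's exact features.
import numpy as np

def frequency_features(image: np.ndarray, n_rings: int = 8) -> np.ndarray:
    """Summarize the FFT magnitude spectrum as energies of concentric rings."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    r_max = radius.max()
    feats = []
    for i in range(n_rings):
        mask = (radius >= i * r_max / n_rings) & (radius < (i + 1) * r_max / n_rings)
        feats.append(spectrum[mask].sum())
    feats = np.asarray(feats)
    return feats / feats.sum()  # normalize so features index templates comparably

# Example: features for a query palmprint, to be matched against stored templates.
query = frequency_features(np.random.rand(128, 128))
```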

340 citations


Journal ArticleDOI
TL;DR: The experimental results confirmed that the proposed method can effectively protect host image quality and shorten the overall hiding time while enhancing the security of the secret image.
Abstract: Improving technology and the ubiquity of the Internet have allowed more and more people to transmit data via the Internet. The transmitted content can take the form of words, voices, images, or even computer animation. Image hiding technology emerged to keep such content away from interceptors' attention. Some content transmitted via the Internet is confidential, such as highly valued product design blueprints or war plans, so it is important to pay particular attention to the security of the transmitted data, which we call the secret image in this paper. The point of this paper is to enhance the security of the secret image without causing too much distortion to the host image, and in the meantime to shorten the image hiding process. For better protection, we adopted the DES encryption process. In addition, we used a greedy algorithm to shorten the hiding process and to protect the host image from severe distortion. To test whether our proposed method achieved its objectives, we used two sets of images in our experiments. The results showed that, when k = 2, our PSNR is close to that of Wang et al.'s optimal LSB substitution and not significantly different from that of simple LSB substitution; however, our method took approximately only 1/7 of the time consumed by Wang et al. When k = 3, our PSNR is significantly higher than that of simple LSB substitution. The experimental results confirmed that our method can effectively protect host image quality and shorten the overall hiding time while enhancing the security of the secret image.
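For reference, here is a minimal sketch of plain k-bit LSB substitution, the baseline the experiments compare against; the DES encryption step and the greedy search for a good substitution mapping are omitted, so this shows only the embedding mechanics.

```python
# Minimal k-bit LSB substitution: hide secret bits in the k least
# significant bits of each host pixel. Encryption and the greedy
# substitution-table search from the paper are not reproduced here.
import numpy as np

def embed_lsb(host: np.ndarray, secret_bits: np.ndarray, k: int = 2) -> np.ndarray:
    """Replace the k least significant bits of each host pixel with secret data."""
    n = host.size
    chunks = secret_bits[: n * k].reshape(n, k)
    payload = chunks.dot(1 << np.arange(k - 1, -1, -1))  # k bits -> one integer
    stego = ((host.ravel() >> k) << k) | payload         # clear LSBs, write payload
    return stego.reshape(host.shape).astype(host.dtype)

host = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = np.random.randint(0, 2, 64 * 64 * 2)
stego = embed_lsb(host, bits, k=2)
```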

116 citations


Journal ArticleDOI
TL;DR: This voting method allows several runs of cluster algorithms to be combined into a common partition, which helps tackle the problem of choosing an appropriate clustering method for a data set about which no a priori information is available.
Abstract: In this paper we present a voting scheme for fuzzy cluster algorithms. This voting method allows us to combine several runs of cluster algorithms into a common partition. This helps us to tackle the problem of choosing the appropriate clustering method for a data set about which we have no a priori information. We derive the algorithm mathematically from theoretical considerations. Experiments show that the voting algorithm finds structurally stable results. Several cluster validity indexes show the improvement of the voting result in comparison to simple fuzzy voting.

116 citations


Journal ArticleDOI
TL;DR: The sound pruning reduces search time to a reasonable amount, and enables exhaustive search for rule pairs, and the normal approximations of the multinomial distributions are employed as the method for evaluating reliability of a rule pair.
Abstract: This paper presents an efficient algorithm for discovering exception rules from a data set without domain-specific information. An exception rule, which is defined as a deviational pattern to a strong rule, exhibits unexpectedness and is sometimes extremely useful. Previous discovery approaches for this type of knowledge can be classified into a directed approach, which obtains exception rules each of which deviates from a set of user-prespecified strong rules, and an undirected approach, which typically discovers a set of rule pairs, each representing an exception rule and its corresponding strong rule. It has been pointed out that unexpectedness is often related to interestingness. In this sense, an undirected approach is promising, since its discovery outcome is free from human prejudice and thus tends to be highly unexpected. However, this approach is computationally prohibitive due to the extra search for strong rules, and its output can contain unreliable patterns. In order to circumvent these difficulties we propose a method based on sound pruning and probabilistic estimation. The sound pruning reduces search time to a reasonable amount and enables exhaustive search for rule pairs. Normal approximations of the multinomial distributions are employed to evaluate the reliability of a rule pair. Our method has been validated using two medical data sets under the supervision of a physician and two benchmark data sets from the machine learning community.

70 citations


Journal ArticleDOI
TL;DR: This study considers theoretical aspects as well as experiments performed using a face database with a small number of classes (Yale) and one with a large number of classes (FERET).
Abstract: Different eigenspace-based approaches have been proposed for the recognition of faces. They differ mostly in the kind of projection method used and in the similarity matching criterion employed. The aim of this paper is to present a comparative study of some of these approaches. This study considers theoretical aspects as well as experiments performed using a face database with a small number of classes (Yale) and one with a large number of classes (FERET).
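As a concrete illustration of an eigenspace-based approach, the sketch below implements PCA-style projection ("eigenfaces") with nearest-neighbor matching; the number of components and the distance criterion are illustrative choices, not the specific projection methods compared in the paper.

```python
# Minimal eigenspace (PCA) face recognition sketch. Matrix sizes and the
# nearest-neighbor matching criterion are illustrative assumptions.
import numpy as np

def fit_eigenspace(faces: np.ndarray, n_components: int = 20):
    """faces: (n_samples, n_pixels). Returns mean face and projection basis."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal axes ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(x: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    return basis @ (x - mean)

# Recognition: project gallery and probe, then match by smallest distance.
gallery = np.random.rand(40, 32 * 32)
mean, basis = fit_eigenspace(gallery)
probe = project(np.random.rand(32 * 32), mean, basis)
coords = np.array([project(f, mean, basis) for f in gallery])
best = np.argmin(np.linalg.norm(coords - probe, axis=1))
```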

65 citations


Journal ArticleDOI
TL;DR: The Info-Fuzzy Network is presented, a novel information-theoretic method for building stable and comprehensible decision-tree models and is shown empirically to produce more compact and stable models than the "meta-learner" techniques while preserving a reasonable level of predictive accuracy.
Abstract: Decision-tree algorithms are known to be unstable: small variations in the training set can result in different trees and different predictions for the same validation examples. Both accuracy and stability can be improved by learning multiple models from bootstrap samples of the training data, but the "meta-learner" approach makes the extracted knowledge hardly interpretable. In this paper, we present the Info-Fuzzy Network (IFN), a novel information-theoretic method for building stable and comprehensible decision-tree models. The stability of the IFN algorithm is ensured by restricting the tree structure to using the same feature for all nodes of the same tree level and by built-in statistical significance tests. The IFN method is shown empirically to produce more compact and stable models than the "meta-learner" techniques, while preserving a reasonable level of predictive accuracy.

62 citations


Journal ArticleDOI
TL;DR: A modified Topology Adaptive Self-Organizing Neural Network is proposed to extract a vector skeleton from a binary numeral image, with simple heuristics used to prune artifacts, if any, in the skeletal shape.
Abstract: This paper proposes a novel approach to automatic recognition of handprinted Bangla (an Indian script) numerals. A modified Topology Adaptive Self-Organizing Neural Network is proposed to extract a vector skeleton from a binary numeral image. Simple heuristics are considered to prune artifacts, if any, in such a skeletal shape. Certain topological and structural features like loops, junctions, positions of terminal nodes, etc. are used along with a hierarchical tree classifier to classify handwritten numerals into smaller subgroups. Multilayer perceptron (MLP) networks are then employed to uniquely classify the numerals belonging to each subgroup. The system is trained using a sample data set of 1800 numerals and we have obtained 93.26% correct recognition rate and 1.71% rejection on a separate test set of another 7760 samples. In addition, a validation set consisting of 1440 samples has been used to determine the termination of the training algorithm of the MLP networks. The proposed scheme is sufficiently robust with respect to considerable object noise.

60 citations


Journal ArticleDOI
TL;DR: Two algorithms based on incremental and hierarchical clustering, respectively, are proposed, which are parameterized by a graph matching method and shown to be effective for clustering a set of AGs and synthesizing the FDGs that describe the classes.
Abstract: Function-Described Graphs (FDGs) have been introduced by the authors as a representation of an ensemble of Attributed Graphs (AGs) for structural pattern recognition, as an alternative to first-order random graphs. Both optimal and approximate algorithms for error-tolerant graph matching, which use a distance measure between AGs and FDGs, have been reported elsewhere. In this paper, both the supervised and the unsupervised synthesis of FDGs from a set of graphs is addressed. First, two procedures are described to synthesize an FDG from a set of commonly labeled AGs or FDGs, respectively. Then, the unsupervised synthesis of FDGs is studied in the context of clustering a set of AGs and obtaining an FDG model for each cluster. Two algorithms based on incremental and hierarchical clustering, respectively, are proposed, which are parameterized by a graph matching method. Experimental results on both synthetic data and a real 3D-object recognition application show that the proposed algorithms are effective for clustering a set of AGs and synthesizing the FDGs that describe the classes. Moreover, the synthesized FDGs are shown to be useful for pattern recognition thanks to the distance measure and matching algorithm previously reported.

51 citations


Journal ArticleDOI
TL;DR: The combination of landmark strength validation and Kalman filtering for map updating and robot position estimation allows for robust learning of moderately dynamic indoor environments.
Abstract: A system that builds and maintains a dynamic map for a mobile robot is presented. A learning rule associated to each observed landmark is used to compute its robustness. The position of the robot during map construction is estimated by combining sensor readings, motion commands, and the current map state by means of an Extended Kalman Filter. The combination of landmark strength validation and Kalman filtering for map updating and robot position estimation allows for robust learning of moderately dynamic indoor environments.
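The sketch below illustrates one Extended Kalman Filter predict/correct cycle of the kind the abstract describes, assuming a textbook unicycle motion model and a range-bearing landmark sensor; these model choices, and all variable names, are illustrative rather than taken from the paper.

```python
# One EKF step: predict the pose from a motion command, then correct it
# with a range-bearing observation of a known landmark. Models are
# standard textbook choices, assumed for illustration.
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R, dt=1.0):
    """x=(px, py, theta) pose, P covariance, u=(v, w) command, z=(range, bearing)."""
    v, w = u
    px, py, th = x
    # Predict with the motion command (unicycle model).
    x_pred = np.array([px + v * dt * np.cos(th), py + v * dt * np.sin(th), th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # Expected range-bearing observation of the landmark.
    dx, dy = landmark - x_pred[:2]
    r = np.hypot(dx, dy)
    z_hat = np.array([r, np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / r,   -dy / r,    0],
                  [dy / r**2, -dx / r**2, -1]])
    # Correct with the actual sensor reading.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - z_hat)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0, 0.0]), np.eye(3) * 0.1
Q, R = np.eye(3) * 0.01, np.eye(2) * 0.05
x, P = ekf_step(x, P, u=(1.0, 0.1), z=np.array([5.1, 0.75]),
                landmark=np.array([4.0, 3.0]), Q=Q, R=R)
```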

50 citations


Journal ArticleDOI
TL;DR: This paper shows that approximate techniques, inspired by the quadratic-time suboptimal algorithm proposed by Bunke and Buhler, are good alternatives to optimal methods.
Abstract: The problem of cyclic sequence alignment is considered. Most existing optimal methods for comparing cyclic sequences are very time consuming. For applications where these alignments are used intensively, optimal methods are seldom a feasible choice. The alternative to an exact and costly solution is to use a close-to-optimal but cheaper approach. In previous works, we have presented three suboptimal techniques inspired by the quadratic-time suboptimal algorithm proposed by Bunke and Buhler. Do these approximate approaches come sufficiently close to the optimal solution, with a considerable reduction in computing time? Is it thus worthwhile investigating these approximate methods? This paper shows that approximate techniques are good alternatives to optimal methods.
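For context, the sketch below computes the exact cyclic edit distance by minimizing the ordinary edit distance over all rotations of one sequence, an O(n^3) procedure; this is the kind of costly optimal baseline that quadratic-time suboptimal methods such as Bunke and Buhler's approximate.

```python
# Brute-force optimal cyclic alignment: try every rotation of one
# sequence and keep the best ordinary edit distance.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cyclic_distance(a: str, b: str) -> int:
    return min(edit_distance(a[k:] + a[:k], b) for k in range(len(a)))

assert cyclic_distance("abcd", "cdab") == 0  # rotations match exactly
```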

48 citations


Journal ArticleDOI
TL;DR: The use of two types of automatically inferred transducers as models for the understanding phase of dialog systems is described from a transduction point of view.
Abstract: We present an approach for the development of Language Understanding systems from a Transduction point of view. We describe the use of two types of automatically inferred transducers as the appropriate models for the understanding phase in dialog systems.

Journal ArticleDOI
TL;DR: A novel kind of feedforward artificial neural network has been defined whereby effective stock market predictors can be implemented without the need for complex recurrent neural architectures.
Abstract: In this paper, a hybrid approach to stock market forecasting is presented. It entails utilizing a mixture of hybrid experts, each expert embedding a genetic classifier coupled with an artificial neural network. Information retrieved from technical analysis is supplied as input to the genetic classifiers, while past stock market prices — together with other relevant data — are used as input to the neural networks. In this way it is possible to implement a strategy that resembles the one used by human experts. In particular, genetic classifiers based on technical-analysis domain knowledge are used to identify quasi-stationary regimes within the financial data series, whereas neural networks are designed to perform context-dependent predictions. For this purpose, a novel kind of feedforward artificial neural network has been defined whereby effective stock market predictors can be implemented without the need for complex recurrent neural architectures. Experiments were performed on a major Italian stock market index, also taking trading commissions into account. The results point to the good forecasting capability of the proposed approach, which outperformed the well-known buy-and-hold strategy as well as predictions obtained using recurrent neural networks.

Journal ArticleDOI
TL;DR: This paper presents a real-time face recognition system where no external time-consuming feature extraction method is used, rather the gray-level values of the raw pixels that make up the face pattern are fed directly to the recognizer.
Abstract: This paper presents a real-time face recognition system. For the system to be real time, no external time-consuming feature extraction method is used; rather, the gray-level values of the raw pixels that make up the face pattern are fed directly to the recognizer. In order to absorb the resulting high dimensionality of the input space, support vector machines (SVMs), which are known to work well even in high-dimensional spaces, are used as the face recognizer. Furthermore, a modified form of polynomial kernel (the local correlation kernel) is utilized to take account of prior knowledge about facial structures and serves as the alternative feature extractor. Since SVMs were originally developed for two-class classification, their basic scheme is extended to multiface recognition by adopting one-per-class decomposition. In order to make a final classification from several one-per-class SVM outputs, a neural network (NN) is used as the arbitrator. Experiments with the ORL database show a recognition rate of 97.9% and a speed of 0.22 seconds per face with 40 classes.
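A minimal sketch of the raw-pixel SVM idea is given below, assuming scikit-learn; a standard polynomial kernel with one-vs-rest decomposition stands in for the paper's local correlation kernel and NN arbitrator, and the data shapes are illustrative.

```python
# Raw gray-level pixel vectors fed straight to a multi-class SVM.
# The polynomial kernel and one-vs-rest split are stand-ins for the
# paper's local correlation kernel and NN arbitrator.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((400, 92 * 112 // 16))   # gray-level pixel vectors (downsampled)
y = np.repeat(np.arange(40), 10)        # 40 subjects, 10 images each (ORL-sized)

clf = SVC(kernel="poly", degree=2, decision_function_shape="ovr").fit(X, y)
pred = clf.predict(X[:5])
```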

Journal ArticleDOI
TL;DR: The proposed scheme can be applied not only to "a color host image hiding a color secret image", but also to "a color host image hiding a gray scale secret image".
Abstract: In the past, most image hiding techniques have been applied only to gray scale images. Now, many valuable images are color images. Thus, it has become important to be able to apply image-hiding techniques to hide color images. In this paper, our proposed scheme can not only be applied to "a color host image hiding a color secret image", but also to "a color host image hiding a gray scale secret image". Our scheme utilizes the rightmost 3, 2 and 3 bits of the R, G, B channels of every pixel in the host image to hide related information from the secret image. Meanwhile, we utilize the leftmost 5, 6, 5 bits of the R, G, B channels of every pixel in the host image and set the remaining bits as zero to generate a palette. We then use the palette to conduct color quantization on the secret image to convert its 24-bit pixels into pixels with 8-bit palette index values. DES encryption is then conducted on the index values before the secret image is embedded into the rightmost 3, 2, 3 bits of the R, G, B channels of every pixel in the host image. The experimental results show that even under the worst case scenario our scheme guarantees an average host image PSNR value of 39.184 and an average PSNR value of 27.3415 for the retrieved secret image. In addition to the guarantee of the quality of host images and retrieved secret images, our scheme further strengthens the protection of the secret image by conducting color quantization and DES encryption on the secret image in advance. Therefore, our scheme not only expands the application area of image hiding, but is also practical and secure.
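The bit-level embedding described above is concrete enough to sketch directly: each 8-bit palette index is split across the rightmost 3, 2 and 3 bits of a pixel's R, G and B channels. The sketch below shows only this step; palette construction, color quantization and DES encryption are omitted.

```python
# 3-2-3 embedding of an 8-bit palette index into one RGB host pixel,
# following the bit layout given in the abstract.
import numpy as np

def embed_index(rgb: np.ndarray, index: int) -> np.ndarray:
    r, g, b = (int(c) for c in rgb)
    r = (r & ~0b111) | (index >> 5)           # top 3 bits of the index
    g = (g & ~0b11)  | ((index >> 3) & 0b11)  # middle 2 bits
    b = (b & ~0b111) | (index & 0b111)        # bottom 3 bits
    return np.array([r, g, b], dtype=np.uint8)

def extract_index(rgb: np.ndarray) -> int:
    r, g, b = (int(c) for c in rgb)
    return ((r & 0b111) << 5) | ((g & 0b11) << 3) | (b & 0b111)

pixel = np.array([200, 117, 42], dtype=np.uint8)
assert extract_index(embed_index(pixel, 173)) == 173
```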

Journal ArticleDOI
TL;DR: This paper presents a combination of stochastic context-free grammars (SCFGs) and bigram models: the SCFGs represent the long-term relations of the structured part of RNA sequences, while the bigram models capture the local relations of the nonstructured part.
Abstract: RNA sentences present structured regions caused by pairwise correlations, and nonstructured regions where no global relations can be found. In this paper, we present a combination of stochastic context-free grammars (SCFGs) and bigram models. The SCFGs are used to represent the long-term relations of the structured part of RNA sequences, while the bigram models are used to capture the local relations of the nonstructured part. A stochastic version of Sakakibara's algorithm is used to estimate the SCFGs. Finally, experiments to evaluate the behavior of this proposal were carried out.

Journal ArticleDOI
Sung-Bae Cho
TL;DR: Experimental results indicate that a backpropagation neural network with Pearson's correlation coefficients produces the best result, a 97.1% recognition rate on the test data.
Abstract: Bioinformatics has recently drawn a lot of attention for efficiently analyzing biological genomic information with information technology, especially pattern recognition. In this paper, we attempt to explore extensive features and classifiers through a comparative study of the most promising feature selection methods and machine learning classifiers. The gene information from a patient's marrow expressed by DNA microarray, which is either acute myeloid leukemia or acute lymphoblastic leukemia, is used to predict the cancer class. Pearson's and Spearman's correlation coefficients, Euclidean distance, cosine coefficient, information gain, mutual information and signal-to-noise ratio have been used for feature selection. A backpropagation neural network, self-organizing map, structure-adaptive self-organizing map, support vector machine, inductive decision tree and k-nearest neighbor have been used for classification. Experimental results indicate that the backpropagation neural network with Pearson's correlation coefficients produces the best result, a 97.1% recognition rate on the test data.
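As an illustration of one of the feature selection methods, the sketch below ranks genes by the absolute Pearson correlation between their expression values and the binary class label and keeps the top k; the cutoff k and the data shapes are illustrative assumptions.

```python
# Pearson-correlation feature selection for a two-class expression matrix:
# rank genes by |corr(gene, label)| and keep the top-k for the classifier.
import numpy as np

def pearson_rank(X: np.ndarray, y: np.ndarray, k: int = 25) -> np.ndarray:
    """X: (n_samples, n_genes); y: 0/1 labels. Returns indices of top-k genes."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12
    )
    return np.argsort(-np.abs(corr))[:k]

X = np.random.rand(72, 7129)        # AML/ALL-sized expression matrix
y = np.random.randint(0, 2, 72)
selected = pearson_rank(X, y)       # feed X[:, selected] to the classifier
```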

Journal ArticleDOI
TL;DR: A novel method to express Chinese characters mathematically is presented based on the knowledge of the structure of Chinese characters, which makes Chinese information processing much simpler than before.
Abstract: In this paper, a novel method to express Chinese characters mathematically is presented, based on knowledge of the structure of Chinese characters. Each Chinese character can be denoted by a mathematical expression in which the operands are components of Chinese characters and the operators are the location relations between the components. In total, 505 components are selected and six operators are defined to express all Chinese characters successfully. These mathematical expressions of Chinese characters are simple and natural, and can be operated on like common mathematical expressions of numbers. This makes Chinese information processing much simpler than before. The theory has been applied successfully in font automation, Chinese information transmission among different platforms and operating systems on the Internet, and knowledge discovery about the structure of Chinese characters. It can also be applied extensively to many areas such as typesetting, advertising, packaging design, virtual libraries, network transmission, pattern recognition and Chinese mobile communication.
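A minimal sketch of the idea as a data structure is given below: a character is an expression tree whose leaves are components and whose internal nodes are spatial-relation operators. The operator names used here are illustrative; the paper defines its own six operators.

```python
# Character-as-expression sketch: operands are components, operators are
# spatial relations. Operator names ("lr" left-right, "ud" top-bottom)
# are illustrative, not the paper's six operators.
from dataclasses import dataclass
from typing import Union

@dataclass
class Expr:
    op: str                     # spatial relation between the two parts
    left: Union["Expr", str]    # a component or a sub-expression
    right: Union["Expr", str]

# Example: a character that decomposes into two components placed left-right.
hao = Expr(op="lr", left="女", right="子")
# Expressions nest like ordinary arithmetic, e.g. Expr("ud", A, Expr("lr", B, C)).
```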

Journal ArticleDOI
TL;DR: A linear model is proposed for solving the problem of targeted marketing by drawing on and extending results from information retrieval, establishing a basis on which further studies and experimental evaluation can be carried out.
Abstract: Targeted marketing typically involves the identification of customers or products having potential market value. We propose a linear model for solving this problem by drawing on and extending results from information retrieval. It is assumed that each object is represented by the values of a finite set of attributes. A market value function, which is a linear combination of utility functions on attribute values, is used to rank objects. Several methods are examined for mining market value functions. The main advantage of the model is that one can rank objects of interest according to their market values, instead of classifying the objects. Both theoretical and experimental results are reported in this paper, establishing a basis on which further studies and experimental evaluation can be carried out.
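A minimal sketch of the market value function follows: a weighted sum of per-attribute utilities used to rank objects rather than classify them. The attributes, utility functions and weights are illustrative assumptions.

```python
# Market value as a linear combination of per-attribute utilities,
# used to rank customers. All attribute names, utilities and weights
# below are illustrative.
def market_value(obj, utilities, weights):
    return sum(w * utilities[a](obj[a]) for a, w in weights.items())

utilities = {"income": lambda v: v / 100_000, "visits": lambda v: min(v, 10) / 10}
weights = {"income": 0.7, "visits": 0.3}
customers = [{"income": 85_000, "visits": 4}, {"income": 40_000, "visits": 12}]
ranked = sorted(customers, key=lambda o: market_value(o, utilities, weights),
                reverse=True)  # highest market value first
```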

Journal ArticleDOI
TL;DR: A novel method to obtain point correspondences in pairs of images by automatically establishing correspondence between linear structures that appear in the images, using robust features such as orientation, width and curvature extracted from those structures.
Abstract: A novel method to obtain point correspondences in pairs of images is presented. Our approach is based on automatically establishing correspondence between linear structures which appear in the images, using robust features such as orientation, width and curvature extracted from those structures. The extracted points can be used to register sets of images. The potential of the developed approach is demonstrated on mammographic images.

Journal ArticleDOI
TL;DR: The proposed method for extracting primitives of a regular texture via the wavelet transform obtains more accurate displacement vectors than traditional co-occurrence matrix based methods.
Abstract: In this paper, a new method for extracting primitives of a regular texture via the wavelet transform is provided. Each primitive is restricted to be a parallelogram. The main work of the proposed method is to extract the two displacement vectors that form the primitive of a regular texture. A contrast-based criterion is used to select appropriate subimages obtained from the wavelet transform for detecting displacement vectors. An edge thresholding method is then performed on the selected subimages to locate edges. Based on these edges, the Hough transform is applied to extract the displacement vectors. The proposed method is quite efficient and obtains more accurate displacement vectors than traditional co-occurrence matrix based methods. Synthesized textures are provided to show the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: A weak fuzzy similarity relation is introduced as a more realistic relation for representing the relationship between two elements of data in real-world applications, and a more generalized fuzzy rough set approximation of a given fuzzy set is proposed and discussed as an alternative way to provide interval-valued fuzzy sets.
Abstract: In 1982, Pawlak proposed the concept of rough sets with the practical purpose of representing the indiscernibility of elements or objects in the presence of information systems. Even if it is easy to analyze, the rough set theory built on a partition induced by an equivalence relation may not provide a realistic view of the relationships between elements in real-world applications. Here, coverings of, or nonequivalence relations on, the universe can be considered instead of a partition, giving a generalized model of rough sets. In this paper, a weak fuzzy similarity relation is first introduced as a more realistic relation for representing the relationship between two elements of data in real-world applications. The fuzzy conditional probability relation is considered as a concrete example of the weak fuzzy similarity relation. Coverings of the universe are provided by fuzzy conditional probability relations. Generalized concepts of rough approximations and rough membership functions are proposed and defined based on coverings of the universe. Such a generalization is considered a kind of fuzzy rough set. A more generalized fuzzy rough set approximation of a given fuzzy set is proposed and discussed as an alternative way to provide interval-valued fuzzy sets. Their properties are examined.
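For orientation, a standard covering-based formulation of the approximations reads as follows, where s(x) denotes the set of elements similar to x under the similarity relation (playing the role equivalence classes play in Pawlak's model); the exact fuzzy generalizations in the paper differ in detail.

```latex
% Covering-based lower/upper approximations and rough membership of a set A:
\underline{apr}(A) = \{\, x \mid s(x) \subseteq A \,\}, \qquad
\overline{apr}(A) = \{\, x \mid s(x) \cap A \neq \emptyset \,\}, \qquad
\mu_A(x) = \frac{|s(x) \cap A|}{|s(x)|}
```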

Journal ArticleDOI
TL;DR: A new integrated clustering algorithm GFC is presented that combines the gravity-based clustering algorithm GC with fuzzy clustering, and theoretical analysis shows that GFC converges to a local minimum of the objective function.
Abstract: Switching regression problems are attracting more and more attention in a variety of disciplines such as pattern recognition, economics and databases. To solve switching regression problems, many approaches have been investigated. In this paper, we present a new integrated clustering algorithm GFC that combines the gravity-based clustering algorithm GC with fuzzy clustering. GC, a new hard clustering algorithm presented here, is based on the well-known Newton's law of gravity. Our theoretical analysis shows that GFC converges to a local minimum of the objective function. Our experimental results illustrate that GFC has better performance than standard fuzzy clustering algorithms for switching regression problems, especially in terms of convergence speed. Hence GFC is a new, more efficient algorithm for switching regression problems.

Journal ArticleDOI
TL;DR: The evolved CA, termed GMACA (Generalized Multiple Attractor Cellular Automata), acts as an associative memory model for reinforcement learning.
Abstract: This paper reports an efficient technique for evolving Cellular Automata (CA) as an associative memory model. The evolved CA, termed GMACA (Generalized Multiple Attractor Cellular Automata), acts as an associative memory model.

Journal ArticleDOI
TL;DR: This work addresses the problem of smoothing the probability distribution defined by a finite state automaton by interpreting n-gram models as finite state models, and improves perplexity over smoothed n-grams and Error Correcting Parsing techniques.
Abstract: We address the problem of smoothing the probability distribution defined by a finite state automaton. Our approach extends the ideas employed for smoothing n-gram models. This extension is obtained by interpreting n-gram models as finite state models. The experiments show that our smoothing improves perplexity over smoothed n-grams and Error Correcting Parsing techniques.
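As a reminder of the n-gram technique being extended, the sketch below applies simple linear interpolation smoothing to a bigram model so unseen events keep nonzero probability; the interpolation weight, add-one unigram estimate and vocabulary size are illustrative choices, not the paper's smoothing scheme.

```python
# Interpolation smoothing for a bigram model: mix the bigram estimate with
# a (smoothed) unigram estimate so unseen bigrams get nonzero probability.
def smoothed_bigram(counts2, counts1, total, w_prev, w, lam=0.8, vocab=10_000):
    p_uni = (counts1.get(w, 0) + 1) / (total + vocab)  # add-one unigram
    c_prev = counts1.get(w_prev, 0)
    p_bi = counts2.get((w_prev, w), 0) / c_prev if c_prev else 0.0
    return lam * p_bi + (1 - lam) * p_uni

counts1 = {"the": 3, "cat": 1}
counts2 = {("the", "cat"): 1}
p = smoothed_bigram(counts2, counts1, total=4, w_prev="the", w="cat")
```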

Journal ArticleDOI
TL;DR: This paper provides a further feature, CM–CVAP, which combines both the CM and CVAP features to raise the quality of the similarity measure, and shows that the image retrieval method based on the CM–CVAP feature gives quite an impressive performance.
Abstract: This paper first introduces three simple and effective image features — the color moment (CM), the color variance of adjacent pixels (CVAP) and CM–CVAP. The CM feature delineates the color-spatial information of images, and the CVAP feature describes the color variance of pixels in an image. However, these two features can each characterize the content of images only in their own way. This paper hence provides a third feature, CM–CVAP, which combines both to raise the quality of the similarity measure. The experimental results show that the image retrieval method based on the CM–CVAP feature gives quite an impressive performance.
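The CM feature can be sketched directly, assuming the common definition of color moments (mean, standard deviation and skewness per channel); the CVAP feature and the combined CM–CVAP measure are not reproduced here.

```python
# Color moments: summarize each RGB channel with its first three moments,
# giving a 9-dimensional feature vector for retrieval by distance.
import numpy as np

def color_moments(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) RGB array. Returns a 9-dimensional feature vector."""
    feats = []
    for c in range(3):
        ch = image[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # signed cube root of 3rd moment
        feats += [mean, std, skew]
    return np.asarray(feats)

# Retrieval: rank database images by distance between their feature vectors.
q = color_moments(np.random.randint(0, 256, (64, 64, 3)))
```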

Journal ArticleDOI
TL;DR: Experiments with various real stereo images demonstrate the superiority of the fuzzy-relation-based similarity measure for stereo matching over normalized cross correlation (NCC) under nonideal conditions.
Abstract: Stereo matching is the central problem of the stereo vision paradigm. Area-based techniques provide dense disparity maps and hence are preferred for stereo correspondence. Normalized cross correlation (NCC), sum of squared differences (SSD) and sum of absolute differences (SAD) are the linear correlation measures generally used in area-based techniques for stereo matching. In this paper, a similarity measure for stereo matching based on fuzzy relations is used to establish correspondence in the presence of intensity variations in stereo images. The strength of the relationship between the fuzzified data of two windows in the left and right images of a stereo pair is determined using appropriate fuzzy aggregation operators. However, these measures fail to establish correspondence when the corresponding windows contain occluded pixels. Another stereo matching algorithm, based on fuzzy relations of fuzzy data, is used for stereo matching in such regions of the images. This algorithm is based on the weighted normalized cross correlation (WNCC) of the intensity data in the left and right windows of the stereo pair. The properties of the similarity measures used in these algorithms are also discussed. Experiments with various real stereo images demonstrate the superiority of these algorithms over normalized cross correlation (NCC) under nonideal conditions.
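For reference, the sketch below shows plain area-based matching with NCC, the baseline measure: for one left-image window, candidate disparities are scanned and the best-correlated right-image window wins. Window size and disparity range are illustrative.

```python
# Area-based stereo matching with normalized cross correlation (NCC):
# for a pixel in the left image, pick the disparity whose right-image
# window correlates best with the reference window.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_disparity(left, right, y, x, win=5, max_d=16):
    h = win // 2
    ref = left[y - h : y + h + 1, x - h : x + h + 1]
    scores = [ncc(ref, right[y - h : y + h + 1, x - d - h : x - d + h + 1])
              for d in range(min(max_d, x - h) + 1)]
    return int(np.argmax(scores))  # disparity with maximum correlation
```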

Journal ArticleDOI
TL;DR: This work proposes a data-mining approach that produces generalized query patterns (with generalized keywords) from the raw user logs of the Microsoft Encarta search engine that can act as cache of the search engine, improving its performance.
Abstract: We propose a data-mining approach that produces generalized query patterns (with generalized keywords) from the raw user logs of the Microsoft Encarta search engine (http://encarta.msn.com). These query patterns can act as a cache for the search engine, improving its performance. A cache of generalized query patterns is more advantageous than a cache of the most frequent user queries, since our patterns are generalized, covering more queries and even future queries not previously asked. Our method is unique in that the discovered query patterns reflect the actual dynamic usage and user feedback of the search engine, rather than the syntactic linkage structure of web pages (as Google does). Simulation shows that such generalized query patterns improve the search engine's overall speed considerably. The generalized query patterns, when viewed with a graphical user interface, are also helpful to web editors, who can easily discover the topics in which users are most interested.

Journal ArticleDOI
TL;DR: The state of the art in optimal structure design of the Multilayer Feedforward Neural Network (MFNN) for pattern recognition is reviewed, and a comprehensive comparative study of the main characteristics of each method is presented.
Abstract: In this survey paper, the state of the art in optimal structure design of the Multilayer Feedforward Neural Network (MFNN) for pattern recognition is reviewed. Special emphasis is laid on the scale-limited MFNN and on internal-representation- and decision-boundary-based design methodologies. A comprehensive comparative study of the main characteristics of each method is presented. Future research directions are also outlined.

Journal ArticleDOI
TL;DR: A novel supervised Hausdorff-based measure is applied to automatic face recognition; the measure is designed to minimize the distance between sets of the same class (subject) while maximizing the distance between sets of different classes.
Abstract: The Hausdorff distance is a deformation-tolerant measure between two sets of points. The main advantage of this measure is that it does not need an explicit correspondence between the points of the two sets. This paper presents the application of a novel supervised Hausdorff-based measure to automatic face recognition. This measure is designed to minimize the distance between sets of the same class (subject) and at the same time maximize the distance between sets of different classes.
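The classical (unsupervised) Hausdorff distance the measure builds on can be sketched compactly; the supervised variant, trained to shrink within-class and stretch between-class distances, is not reproduced here.

```python
# Classical Hausdorff distance between two point sets: the larger of the
# two directed distances, each a sup over one set of the inf over the other.
import numpy as np

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """A: (m, d), B: (n, d). No point-to-point correspondence is needed."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(max(dists.min(axis=1).max(),   # sup over A of inf over B
                     dists.min(axis=0).max()))  # sup over B of inf over A

A = np.random.rand(30, 2)
B = np.random.rand(25, 2)
d = hausdorff(A, B)
```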

Journal ArticleDOI
TL;DR: A novel multiagent approach to optimization inspired by diffusion in nature called Evolutionary Multiagent Diffusion (EMD), which makes the decision to diffuse based on the information shared between its parent and its siblings is proposed.
Abstract: This article proposes a novel multiagent approach to optimization inspired by diffusion in nature called Evolutionary Multiagent Diffusion (EMD). Each agent in EMD makes the decision to diffuse based on the information shared between its parent and its siblings. The behavior of EMD is analyzed and its relation to similar search algorithms is discussed.