
Showing papers in "World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering in 2008"


Journal ArticleDOI
TL;DR: In this article, an up-to-date review of major human face recognition research is provided, including a review of the most recent face recognition techniques and their applications, as well as a summary of the research results.
Abstract: The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Description and limitations of face databases which are used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results. Keywords—Combined classifiers, face recognition, graph matching, neural networks.

316 citations


Journal Article
TL;DR: This work proposes a trainable summarizer, which takes into account several features, including sentence position, positive keyword, negative keyword, sentence centrality, sentence resemblance to the title, sentence inclusion of named entities, sentence inclusion of numerical data, sentence relative length, Bushy path of the sentence and aggregated similarity, for each sentence to generate summaries.
Abstract: This work proposes an approach to automatic text summarization. The approach is a trainable summarizer, which takes into account several features, including sentence position, positive keyword, negative keyword, sentence centrality, sentence resemblance to the title, sentence inclusion of named entities, sentence inclusion of numerical data, sentence relative length, Bushy path of the sentence and aggregated similarity, for each sentence to generate summaries. First we investigate the effect of each sentence feature on the summarization task. Then we use a score function over all features to train genetic algorithm (GA) and mathematical regression (MR) models and obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
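As a rough illustration of the scoring scheme described above, the sketch below combines per-sentence feature values with trained weights and keeps the top-scoring sentences at a given compression rate; the feature names and weights are hypothetical placeholders, not the paper's GA/MR-trained values.

```python
# Minimal sketch: each sentence gets a weighted sum of its feature values,
# and the top-scoring sentences (at the compression rate) form the summary.
def score_sentence(features: dict, weights: dict) -> float:
    """Weighted sum of per-sentence feature values."""
    return sum(weights[name] * value for name, value in features.items())

def summarize(sentences: list, weights: dict, compression: float = 0.2) -> list:
    """Return indices of the top-scoring sentences at the given compression rate."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score_sentence(sentences[i], weights),
                    reverse=True)
    keep = max(1, int(len(sentences) * compression))
    return sorted(ranked[:keep])  # restore document order

# Example with two illustrative features per sentence (values in [0, 1]).
sents = [{"position": 1.0, "title_sim": 0.3},
         {"position": 0.5, "title_sim": 0.9},
         {"position": 0.1, "title_sim": 0.2}]
print(summarize(sents, {"position": 0.5, "title_sim": 0.5}, compression=0.4))
```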

260 citations


Journal Article
TL;DR: From the results, it is observed that the permutation of bits is effective in significantly reducing the correlation thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are good at producing higher level security compared to bit permutation.
Abstract: This paper proposes a new approach for image encryption using a combination of different permutation techniques. The main idea behind the present work is that an image can be viewed as an arrangement of bits, pixels and blocks. The intelligible information present in an image is due to the correlations among the bits, pixels and blocks in a given arrangement. This perceivable information can be reduced by decreasing the correlation among the bits, pixels and blocks using certain permutation techniques. This paper presents an approach for a random combination of the aforementioned permutations for image encryption. From the results, it is observed that the permutation of bits is effective in significantly reducing the correlation, thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are better at producing a higher level of security compared to bit permutation. A random combination method employing all three techniques is thus observed to be useful for tactical security applications, where protection is needed only against a casual observer. Keywords—Encryption, Permutation, Good key, Combinational permutation, Pseudo random index generator.
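A minimal sketch of the core permutation idea, using a key-seeded pseudo-random index generator (NumPy's PRNG here, as a stand-in for the paper's generator); the same routine scrambles the image viewed as pixels or as bits.

```python
import numpy as np

def permute(flat: np.ndarray, key: int) -> np.ndarray:
    """Scramble a flattened array with a key-seeded random permutation."""
    idx = np.random.default_rng(key).permutation(flat.size)
    return flat[idx]

def unpermute(scrambled: np.ndarray, key: int) -> np.ndarray:
    """Invert the permutation generated from the same key."""
    idx = np.random.default_rng(key).permutation(scrambled.size)
    out = np.empty_like(scrambled)
    out[idx] = scrambled
    return out

img = np.arange(16, dtype=np.uint8)       # stand-in for image pixels
enc_pixels = permute(img, key=1234)       # pixel permutation
bits = np.unpackbits(img)                 # view the same data as bits
enc_bits = permute(bits, key=1234)        # bit permutation (stronger decorrelation)
assert np.array_equal(unpermute(enc_pixels, 1234), img)
```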

161 citations


Journal ArticleDOI
TL;DR: A hybrid machine learning system based on Genetic Algorithm and Support Vector Machines for stock market prediction, making use of technical indicators of highly correlated stocks, outperforms the stand-alone SVM system.
Abstract: In this paper, we propose a hybrid machine learning system based on Genetic Algorithm (GA) and Support Vector Machines (SVM) for stock market prediction. A variety of indicators from the field of technical analysis are used as input features. We also make use of the correlation between stock prices of different companies to forecast the price of a stock, using technical indicators of highly correlated stocks, not only the stock to be predicted. The genetic algorithm is used to select the most informative input features from among all the technical indicators. The results show that the hybrid GA-SVM system outperforms the stand-alone SVM system.
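A compact, hedged sketch of GA-wrapped feature selection in the spirit described above: a binary chromosome masks the indicator set, and cross-validated SVM accuracy serves as fitness. Population size, operators, and rates are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.05):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut              # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=lambda ind: fitness(ind, X, y)).astype(bool)
```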

153 citations


Journal ArticleDOI
TL;DR: This paper proposes a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes, which detects eyes in sequential input images and calculates variation of each eye region to determine whether the input face is a real face or not.
Abstract: To increase the reliability of a face recognition system, the system must be able to distinguish a real face from a copy such as a photograph. In this paper, we propose a fast and memory-efficient method of live face detection for embedded face recognition systems, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate the variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.
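A minimal sketch of the eye-variation cue: crop the eye region in consecutive frames and accumulate the frame-to-frame differences; a real face blinks and moves, so its variation exceeds that of a photograph. Eye localization is assumed done upstream (the paper detects eyes first), and the threshold is an illustrative value.

```python
import numpy as np

def eye_region_variation(frames: list, eye_box: tuple) -> float:
    """Mean absolute difference of an eye crop across sequential frames."""
    x, y, w, h = eye_box
    crops = [f[y:y + h, x:x + w].astype(np.float32) for f in frames]
    diffs = [np.abs(a - b).mean() for a, b in zip(crops, crops[1:])]
    return float(np.mean(diffs))

def is_live(frames, eye_box, threshold=4.0):
    """Threshold is a placeholder, not the paper's tuned value."""
    return eye_region_variation(frames, eye_box) > threshold
```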

144 citations


Journal ArticleDOI
TL;DR: This work presents a study of the behaviour of the topographic error in different kinds of maps and suggests a new topological error to remedy the deficiency of the topographic error.
Abstract: The SOM has several beneficial features which make it a useful method for data mining. One of the most important features is the ability to preserve the topology in the projection. There are several measures that can be used to quantify the goodness of the map in order to obtain the optimal projection, including the average quantization error and many topological errors. Much research has studied how topology preservation should be measured. One option consists of using the topographic error, which considers the ratio of data vectors for which the first and second best-matching units (BMUs) are not adjacent. In this work we present a study of the behaviour of the topographic error in different kinds of maps. We have found that this error penalizes rectangular maps, and we have studied the reasons why this happens. Finally, we suggest a new topological error to remedy the deficiency of the topographic error. Keywords—Map lattice, Self-Organizing Map, topographic error, topology preservation.
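The topographic error described in the abstract is conventionally written as follows, where $N$ is the number of data vectors and $u(\mathbf{x}_k)$ flags samples whose two best-matching units are not neighbours on the map lattice:

$$\mathrm{TE} \;=\; \frac{1}{N}\sum_{k=1}^{N} u(\mathbf{x}_k), \qquad u(\mathbf{x}_k) = \begin{cases} 1 & \text{if the 1st and 2nd BMUs of } \mathbf{x}_k \text{ are not adjacent,} \\ 0 & \text{otherwise.} \end{cases}$$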

141 citations


Journal Article
TL;DR: This work combines many methods to build a minutia extractor and a minutia matcher, with some novel changes such as segmentation using morphological operations, improved thinning, false minutiae removal methods, minutia marking with special consideration of triple branch counting, and matching in the unified x-y coordinate system after a two-step transformation.
Abstract: Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately as well as rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods draws on a wide investigation of research papers. Some novel changes are also used in the work: segmentation using morphological operations, improved thinning, false minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in the unified x-y coordinate system after a two-step transformation. Keywords—Biometrics, Minutiae, Crossing number, False Accept Rate (FAR), False Reject Rate (FRR).
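The crossing-number test named in the keywords is the standard minutia-marking rule: on a thinned (one-pixel-wide) binary ridge skeleton, half the sum of 0/1 transitions around a pixel's 8-neighbourhood classifies it. The sketch below shows that rule; the paper's triple-branch handling and false-minutiae removal are not reproduced here.

```python
import numpy as np

def crossing_number(skel: np.ndarray, r: int, c: int) -> int:
    """CN = 0.5 * sum |P_i - P_{i+1}| over the 8 neighbours, taken cyclically."""
    p = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1], skel[r, c+1],
         skel[r+1, c+1], skel[r+1, c], skel[r+1, c-1], skel[r, c-1]]
    return sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2

def minutiae(skel: np.ndarray) -> list:
    """CN == 1 marks a ridge ending, CN == 3 a bifurcation (triple branch)."""
    points = []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c]:
                cn = crossing_number(skel, r, c)
                if cn in (1, 3):
                    points.append((r, c, cn))
    return points
```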

116 citations


Journal Article
TL;DR: A novel approach for generalized image retrieval based on semantic contents is presented, combining three feature extraction methods, namely color, texture, and the edge histogram descriptor, with a retrieval scheme developed on a greedy strategy.
Abstract: In this paper a novel approach for generalized image retrieval based on semantic contents is presented. It combines three feature extraction methods, namely color, texture, and the edge histogram descriptor. There is a provision to add new features in the future for better retrieval efficiency. Any combination of these methods, whichever is more appropriate for the application, can be used for retrieval. This is provided through the User Interface (UI) in the form of relevance feedback. The image properties analyzed in this work are computed using computer vision and image processing algorithms. For color, the histograms of images are computed; for texture, co-occurrence matrix based entropy, energy, etc., are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested on the Corel image database containing 1000 natural images and achieved good results. Keywords—Content Based Image Retrieval (CBIR), Co-occurrence matrix, Feature vector, Edge Histogram Descriptor (EHD), Greedy strategy.
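A hedged sketch of two of the feature families the abstract lists: a colour histogram, and co-occurrence-based entropy/energy for texture. Bin counts, grey-level quantization, and the co-occurrence offset (right neighbour) are illustrative choices.

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized intensity histogram over one channel (values 0..255)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def cooccurrence_features(gray: np.ndarray, levels: int = 8):
    """Entropy and energy of a right-neighbour grey-level co-occurrence matrix."""
    q = (gray.astype(np.int64) * levels) // 256          # quantize grey levels
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = m / m.sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    energy = (p ** 2).sum()
    return entropy, energy
```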

104 citations


Journal Article
TL;DR: This paper surveys well-known security issues in WSNs, studies the behavior of WSN nodes that perform public key cryptographic operations, and evaluates by simulation the time and power consumption of public key cryptography algorithms for signatures and key management.
Abstract: With the widespread growth of applications of Wireless Sensor Networks (WSNs), the need for reliable security mechanisms in these networks has increased manifold. Many security solutions have been proposed in the domain of WSNs so far. These solutions are usually based on well-known cryptographic algorithms. In this paper, we have made an effort to survey well-known security issues in WSNs and study the behavior of WSN nodes that perform public key cryptographic operations. We evaluate by simulation the time and power consumption of public key cryptography algorithms for signatures and key management. Keywords—Wireless Sensor Networks, Security, Public Key Cryptography, Key Management.

97 citations


Journal ArticleDOI
TL;DR: This work develops a correlation-based feature selection algorithm to remove the worthless information from the original high dimensional database and designs an intrusion detection method to solve the problems of uncertainty caused by limited and ambiguous information.
Abstract: The network traffic data provided for the design of intrusion detection are typically large, contain much ineffective information, and enclose only limited and ambiguous information about users' activities. We study these problems and propose a two-phase approach in our intrusion detection design. In the first phase, we develop a correlation-based feature selection algorithm to remove the worthless information from the original high-dimensional database. Next, we design an intrusion detection method to solve the problems of uncertainty caused by limited and ambiguous information. In the experiments, we choose six UCI databases and the DARPA KDD99 intrusion detection data set as our evaluation tools. Empirical studies indicate that our feature selection algorithm is capable of reducing the size of the data set. Our intrusion detection method achieves better performance than that of the participating intrusion detectors. Keywords—Intrusion detection, feature selection, k-nearest neighbors, fuzzy clustering, Dempster-Shafer theory.
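A minimal sketch of correlation-based feature ranking: keep the features whose absolute Pearson correlation with the class label is highest. The paper's algorithm also deals with feature-feature redundancy; this shows only the relevance half, as an assumed simplification.

```python
import numpy as np

def select_by_correlation(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k features most correlated with the label."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    return np.argsort(np.abs(corr))[::-1][:k]
```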

94 citations


Journal Article
TL;DR: Two distinctive wavelet-based watermarking schemes were implemented, building on the experience gained from a study of the literature, and it is concluded that Joo's technique is more robust to standard noise attacks than Dote's technique.
Abstract: In this paper, we start by characterizing the most important and distinguishing features of wavelet-based watermarking schemes. We studied the overwhelming number of algorithms proposed in the literature. Copyright protection is considered as the application scenario and, building on the experience that was gained, we implemented two distinctive watermarking schemes. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique. Keywords—Digital image, Copyright protection, Watermarking, Wavelet transform.

Journal Article
TL;DR: An optimal control of Reverse Osmosis (RO) plant is studied in this paper utilizing the auto tuning concept in conjunction with PID controller and newly designed hybrid random generator composed of Cauchy distribution and linear congruential generator.
Abstract: An optimal control of a Reverse Osmosis (RO) plant is studied in this paper, utilizing the auto-tuning concept in conjunction with a PID controller. A control scheme composing an auto-tuning stochastic technique based on an improved Genetic Algorithm (GA) is proposed. For better evaluation of the process in the GA, a newly defined objective function in the sense of root mean square error is used. Also, in order to achieve better performance of the GA, greater purity and a longer period of random number generation in operation are sought. The main improvement is made by replacing the uniform-distribution random number generator of the conventional GA technique with a newly designed hybrid random generator composed of a Cauchy distribution and a linear congruential generator, which provides independent and different random numbers at each individual step of the genetic operation. The performance of the newly proposed GA-tuned controller is compared with those of conventional ones via simulation. Keywords—Genetic Algorithm, Auto tuning, Hybrid random number generator, Reverse Osmosis, PID controller.
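A sketch of the hybrid idea: a linear congruential generator supplies uniforms, which the inverse Cauchy CDF maps to heavy-tailed deviates. The LCG constants are the classic Numerical Recipes values, not necessarily the paper's.

```python
import math

def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2 ** 32):
    """Linear congruential generator yielding uniforms strictly inside (0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield (x + 0.5) / m

def cauchy(u: float, x0: float = 0.0, gamma: float = 1.0) -> float:
    """Inverse-CDF transform: Cauchy(x0, gamma) deviate from a uniform u."""
    return x0 + gamma * math.tan(math.pi * (u - 0.5))

gen = lcg(seed=42)
samples = [cauchy(next(gen)) for _ in range(5)]  # heavy-tailed mutation steps
```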

Journal Article
TL;DR: This new multivariate fuzzy time series forecasting method is applied to forecasting the total number of car accidents in Belgium using four secondary factors, and it is shown that the proposed method performs better than existing fuzzy time series forecasting methods.
Abstract: In this paper, we present a new multivariate fuzzy time series forecasting method. This method assumes m factors with one main factor of interest. The history of the past three years is used for making new forecasts. This new method is applied to forecasting the total number of car accidents in Belgium using four secondary factors. We also compare our proposed method with existing methods of fuzzy time series forecasting. Experimentally, it is shown that our proposed method performs better than existing fuzzy time series forecasting methods. Practically, actuaries are interested in the analysis of the patterns of causalities in road accidents. Using fuzzy time series, actuaries can thus define fuzzy premiums and fuzzy underwriting for car insurance and life insurance. The National Institute of Statistics, Belgium, provides a risk classification for each road region. Using this risk classification, we can predict the premium rate and the underwriting of insurance policy holders. Keywords—Average forecasting error rate (AFER), Fuzziness of fuzzy sets, Fuzzy If-Then rules, Multivariate fuzzy time series.

Journal Article
TL;DR: The subtractive clustering algorithm is used to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then an optimal weighting exponent (m) is found for the FCM algorithm.
Abstract: The Fuzzy C-Means clustering algorithm (FCM) is a method that is frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. In order to get an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely, the iterative search approach and genetic algorithms. The above-mentioned approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models. Keywords—Fuzzy clustering, Fuzzy C-Means, Genetic Algorithm, Sugeno fuzzy systems.
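For reference, the standard FCM objective that the weighting exponent $m$ enters is the one below, with memberships $u_{ik}$ and cluster centers $\mathbf{v}_i$; larger $m$ yields fuzzier partitions, which is why fixing $m = 2$ is not always appropriate:

$$J_m = \sum_{k=1}^{N}\sum_{i=1}^{C} u_{ik}^{\,m}\,\lVert \mathbf{x}_k - \mathbf{v}_i \rVert^2, \qquad u_{ik} = \left[\, \sum_{j=1}^{C} \left( \frac{\lVert \mathbf{x}_k - \mathbf{v}_i \rVert}{\lVert \mathbf{x}_k - \mathbf{v}_j \rVert} \right)^{\frac{2}{m-1}} \right]^{-1}$$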

Journal Article
TL;DR: Experimental results demonstrated the effectiveness of the proposed method for face recognition with less misclassification in comparison with previous methods.
Abstract: In this paper, a new face recognition method based on PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and neural networks is proposed. This method consists of four steps: i) preprocessing, ii) dimension reduction using PCA, iii) feature extraction using LDA, and iv) classification using a neural network. The combination of PCA and LDA is used to improve the capability of LDA when only a few samples of images are available, and the neural classifier is used to reduce the number of misclassifications caused by non-linearly separable classes. The proposed method was tested on the Yale face database. Experimental results on this database demonstrated the effectiveness of the proposed method for face recognition, with fewer misclassifications in comparison with previous methods.

Journal Article
TL;DR: A new fast technique for skin detection which can be applied in a real-time system and which can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process.
Abstract: Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, people retrieval in databases and the Internet, etc. The major problem with such skin color detection algorithms is that they are time consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection which can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The reason for the skipping process is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of the measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method. Keywords—Adult images filtering, image resizing, skin color detection, YCbCr color space.
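A sketch of the skipping idea in YCbCr: test every k-th pixel with a classic Cb/Cr box rule and propagate the label to the skipped neighbours. The thresholds below are common illustrative values from the literature, not the paper's.

```python
import numpy as np

def skin_mask_skipping(ycbcr: np.ndarray, step: int = 2) -> np.ndarray:
    """ycbcr: HxWx3 array; returns a boolean skin mask at full resolution."""
    cb = ycbcr[::step, ::step, 1]
    cr = ycbcr[::step, ::step, 2]
    sparse = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    # label each skipped pixel like its sampled neighbour, then crop to size
    full = np.repeat(np.repeat(sparse, step, axis=0), step, axis=1)
    return full[:ycbcr.shape[0], :ycbcr.shape[1]]
```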

Journal Article
TL;DR: A novel iris recognition system using 1D log polar Gabor wavelet and Euler numbers and the proposed decision strategy uses these features to authenticate an individual's identity while maintaining a low false rejection rate.
Abstract: This paper presents a novel iris recognition system using a 1D log polar Gabor wavelet and Euler numbers. The 1D log polar Gabor wavelet is used to extract the textural features, and Euler numbers are used to extract topological features of the iris. The proposed decision strategy uses these features to authenticate an individual's identity while maintaining a low false rejection rate. The algorithm was tested on the CASIA iris image database and found to perform better than existing approaches, with an overall accuracy of 99.93%. Among the present biometric traits, iris is found to be the most reliable and accurate (1). The use of the human iris as a biometric feature offers many advantages over other biometric features. The iris is an internal human body organ that is visible from outside, but well protected from external modifiers. It has epigenetic formation: it is formed from the individual's DNA, but a large part of its final pattern develops at random. Two eyes from the same individual, although very similar, contain unique patterns. Similarly, identical twins exhibit four different iris patterns. These characteristics make it attractive for use as a biometric feature to identify individuals. Pattern recognition and image processing algorithms can be used to extract the unique pattern of the iris from an eye image and encode it into an iris template. This iris template contains a mathematical representation of the unique information stored in the iris and allows comparisons to be made between templates. Since the 1990s, many researchers have worked on this problem. The human iris recognition process is basically divided into four steps, the first of which is localization: the inner and outer boundaries of the iris are extracted.

Journal Article
TL;DR: Image Compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network.
Abstract: Image compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, which adopts the method of steepest descent for error minimization, are popular, widely adopted, and directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, like dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, it takes a long time to converge. The reason is that the given image may contain a number of distinct gray levels with narrow differences from their neighboring pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels of the neighbors with the pixel is minimum, then the compression ratio as well as the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly. Keywords—Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.
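A minimal sketch of the CDF-based remapping described above (the same mechanics as histogram equalization): estimate the image's cumulative distribution over grey levels and use it as a lookup table, so that neighbouring grey levels with narrow differences are pulled together before the network sees them. An 8-bit grayscale image is assumed.

```python
import numpy as np

def cdf_remap(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 array; returns the CDF-mapped image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    lut = np.round(cdf * 255).astype(np.uint8)   # grey level -> mapped level
    return lut[img]
```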

Journal Article
TL;DR: The authors conclude that national ID schemes will play a major role in helping governments reap the benefits of e-government if the three advanced technologies of smart card, biometrics and public key infrastructure (PKI) are utilised to provide a reliable and trusted authentication medium for e- government services.
Abstract: The study investigated the practices of organisations in Gulf Cooperation Council (GCC) countries with regard to G2C e-government maturity. It reveals that e-government G2C initiatives in the surveyed countries in particular, and arguably around the world in general, are progressing slowly because of the lack of a trusted and secure medium to authenticate the identities of online users. The authors conclude that national ID schemes will play a major role in helping governments reap the benefits of e-government if the three advanced technologies of smart cards, biometrics and public key infrastructure (PKI) are utilised to provide a reliable and trusted authentication medium for e-government services. Keywords—e-Government, G2C, national ID, online authentication, biometrics, PKI, smart card.

Journal Article
TL;DR: Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of input business models, identification goals, identification strategies, and the identification process, and some significantly promising tendencies in research on this problem are identified.
Abstract: With the deep development of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: analyzing domain business models to get a set of business components with high reuse value and good reuse performance to support effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of input business models, identification goals, identification strategies, and the identification process. Then various CI methods presented in the literature are classified into four types, i.e., domain analysis based methods, cohesion-coupling based clustering methods, CRUD matrix based methods, and other methods, with comparisons of their advantages and disadvantages. Additionally, some insufficiencies in the study of CI are discussed, and their causes are explained. Finally, the paper concludes with some significantly promising tendencies in research on this problem. Keywords—Business component, component granularity, component identification, reuse performance.

Journal Article
TL;DR: The application of ANN for software quality prediction using Object-Oriented metrics showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265, indicating that the ANN method is useful in constructing software quality models.
Abstract: The importance of software quality is increasing, leading to the development of new sophisticated techniques which can be used in constructing models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examined the application of ANN for software quality prediction using Object-Oriented (OO) metrics. Quality estimation includes estimating the maintainability of software. The dependent variable in our study was maintenance effort. The independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus we found that the ANN method was useful in constructing software quality models. Keywords—Software quality, Measurement, Metrics, Artificial neural network, Coupling, Cohesion, Inheritance, Principal component analysis.
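For context, MARE is commonly defined as the mean relative absolute deviation between actual and predicted values (here, maintenance effort); the definition below is the usual one, assumed rather than quoted from the paper:

$$\mathrm{MARE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\lvert y_i - \hat{y}_i \rvert}{y_i}$$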

Journal Article
TL;DR: A new fitness function for approximate information retrieval is presented which is faster and more flexible than the cosine similarity fitness function.
Abstract: This study investigates the use of genetic algorithms in information retrieval. The method is shown to be applicable to three well-known document collections, where the genetic modification presents more relevant documents to users. In this paper we present a new fitness function for approximate information retrieval which is faster and more flexible than the cosine similarity fitness function. Keywords—Cosine similarity, Fitness function, Genetic Algorithm, Information Retrieval, Query learning.

Journal Article
TL;DR: A measure of similarity between two clusterings of the same dataset produced by two different algorithms, or even the same algorithm, is introduced and can be used to identify the best clustering algorithm for a specific problem at hand.
Abstract: This paper introduces a measure of similarity between two clusterings of the same dataset produced by two different algorithms, or even the same algorithm (K-means, for instance, with different initializations usually produce different results in clustering the same dataset). We then apply the measure to calculate the similarity between pairs of clusterings, with special interest directed at comparing the similarity between various machine clusterings and human clustering of datasets. The similarity measure thus can be used to identify the best (in terms of most similar to human) clustering algorithm for a specific problem at hand. Experimental results pertaining to the text categorization problem of a Portuguese corpus (wherein a translation-into-English approach is used) are presented, as well as results on the well-known benchmark IRIS dataset. The significance and other potential applications of the proposed measure are discussed.

Journal Article
TL;DR: To adapt itself with the environmental changes, the spatial and temporal constraints are also applied to the model adaptation which makes the method applicable to motion detection in video stream compression as well.
Abstract: The method models skin color with a Gaussian mixture, fitted with a modified expectation-maximization algorithm, and adapts the model to environmental changes. Spatial and temporal constraints are also applied to the model adaptation, which makes the method applicable to motion detection in video stream compression as well. These considerations place the algorithm in both the pixel-based and region-based groups of methods. Pixel-based skin color detection methods aim at providing a tool for measuring the distance of each pixel's color from skin color tones; the color itself can be represented in many different color spaces, and the most widely used of these are considered for the skin color modeling.
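A hedged sketch of pixel-based scoring under a Gaussian mixture skin model in a two-channel chrominance space; the mixture parameters are placeholders for values a modified EM fit would produce, not the paper's.

```python
import numpy as np

def gmm_skin_likelihood(cbcr: np.ndarray, weights, means, covs) -> np.ndarray:
    """cbcr: Nx2 chrominance vectors; returns the mixture density per pixel."""
    p = np.zeros(len(cbcr))
    for w, mu, cov in zip(weights, means, covs):
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        d = cbcr - mu
        expo = -0.5 * np.einsum("ni,ij,nj->n", d, inv, d)
        p += w * np.exp(expo) / (2 * np.pi * np.sqrt(det))
    return p  # threshold this density to label skin vs. non-skin
```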

Journal ArticleDOI
TL;DR: It is concluded that ANFIS presents the best performance compared to MLP, RBF and PNN networks in this particular application.
Abstract: In this paper, different approaches to solving the forward kinematics of a three-DOF actuator-redundant hydraulic parallel manipulator are presented. In contrast to serial manipulators, the forward kinematic map of parallel manipulators involves highly coupled nonlinear equations, which are almost impossible to solve analytically. The proposed methods use neural network identification with different structures to solve the problem. The accuracy of the results of each method is analyzed in detail, and their advantages and disadvantages in computing the forward kinematic map of the given mechanism are discussed. It is concluded that ANFIS presents the best performance compared to MLP, RBF and PNN networks in this particular application.

Journal Article
TL;DR: In this article, the real procedure of medical diagnosis usually employed by physicians was analyzed and converted to a machine-implementable format, and after selecting some symptoms of eight different diseases, a data set containing the information of a few hundred cases was configured and applied to an MLP neural network.
Abstract: In this paper, the application of artificial neural networks to typical disease diagnosis has been investigated. The real procedure of medical diagnosis usually employed by physicians was analyzed and converted to a machine-implementable format. Then, after selecting some symptoms of eight different diseases, a data set containing the information of a few hundred cases was configured and applied to an MLP neural network. The results of the experiments, and also the advantages of using a fuzzy approach, are discussed as well. The outcomes suggest the role of effective symptom selection and the advantages of data fuzzification in a neural-network-based automatic medical diagnosis system.

Journal Article
TL;DR: Multiscale neural training with modifications to the input training vectors is adopted in this paper for its advantage in training higher-resolution character images, and selective thresholding using a minimum distance technique is proposed to increase the accuracy of character recognition.
Abstract: Advancement in Artificial Intelligence has led to the development of various "smart" devices. A character recognition device is one such smart device, acquiring partial human intelligence with the ability to capture and recognize various characters in different languages. First, multiscale neural training with modifications to the input training vectors is adopted in this paper for its advantage in training higher-resolution character images. Second, selective thresholding using a minimum distance technique is proposed to increase the accuracy of character recognition. A simulator program (a GUI) is designed in such a way that the characters can be located at any spot on the blank paper on which they are written. The results show that such methods with a moderate number of training epochs can produce accuracies of at least 85% for handwritten upper-case English characters and numerals. Keywords—Character recognition, multiscale, backpropagation, neural network, minimum distance technique.
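A hedged sketch of the minimum-distance technique named above: each class is represented by the mean of its training feature vectors, and a character is assigned to the nearest class mean. How the paper combines this with selective thresholding is not reproduced here.

```python
import numpy as np

def fit_means(X: np.ndarray, y: np.ndarray) -> dict:
    """Map each class label to the mean of its training feature vectors."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(x: np.ndarray, means: dict):
    """Return the label whose class mean is closest in Euclidean distance."""
    return min(means, key=lambda label: np.linalg.norm(x - means[label]))
```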

Journal Article
TL;DR: In this paper, the authors propose a dynamic decision model to decide the "best" network at the "best" time moment to handoff in 4G wireless networks, which not only meets individual user needs but also improves whole-system performance by reducing unnecessary handoffs.
Abstract: The convergence of heterogeneous wireless access technologies characterizes 4G wireless networks. In such converged systems, seamless and efficient handoff between different access technologies (vertical handoff) is essential and remains a challenging problem. The heterogeneous co-existence of access technologies with largely different characteristics creates the decision problem of determining the "best" available network at the "best" time to reduce unnecessary handoffs. This paper proposes a dynamic decision model to decide the "best" network at the "best" time moment to handoff. The proposed model makes the right vertical handoff decisions by determining the "best" network at the "best" time among the available networks based on dynamic factors, such as the Received Signal Strength (RSS) of the network and the velocity of the mobile station, together with static factors like usage expense, link capacity (offered bandwidth) and power consumption. This model not only meets individual user needs but also improves whole-system performance by reducing unnecessary handoffs. Keywords—Dynamic decision model, Seamless handoff, Vertical handoff, Wireless networks.
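An illustrative scoring sketch for a decision model of this kind: each candidate network gets a weighted sum of normalized dynamic factors (RSS, velocity) and static factors (expense, bandwidth, power). The weights, signs, and normalization are assumptions for illustration, not the paper's exact model.

```python
def network_score(rss, velocity, expense, bandwidth, power, w):
    """Higher is better; all inputs assumed pre-normalized to [0, 1]."""
    return (w["rss"] * rss - w["velocity"] * velocity - w["expense"] * expense
            + w["bandwidth"] * bandwidth - w["power"] * power)

def best_network(candidates, w):
    """candidates: {name: dict of normalized factor values}."""
    return max(candidates, key=lambda n: network_score(**candidates[n], w=w))

weights = {"rss": 0.4, "velocity": 0.1, "expense": 0.2,
           "bandwidth": 0.2, "power": 0.1}
nets = {"WLAN": dict(rss=0.8, velocity=0.2, expense=0.1, bandwidth=0.9, power=0.3),
        "UMTS": dict(rss=0.6, velocity=0.2, expense=0.5, bandwidth=0.4, power=0.2)}
print(best_network(nets, weights))
```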

Journal Article
TL;DR: The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG).
Abstract: The conjugate gradient optimization algorithm is combined with the modified back-propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back-propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weights and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing accuracy and computation time with the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm. Keywords—Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.
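One common way to introduce a gain term into a sigmoid activation, consistent with step (1) but assumed here rather than taken from the paper, is to scale the net input $a_j$ by a per-neuron gain $c_j$; the error gradient with respect to the gain then takes the same convenient form as the usual sigmoid derivative:

$$f(a_j) = \frac{1}{1 + e^{-c_j a_j}}, \qquad \frac{\partial f}{\partial c_j} = a_j \, f(a_j)\bigl(1 - f(a_j)\bigr)$$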

Journal ArticleDOI
TL;DR: A new similarity measure, suited for comparing concepts in an ontology, is presented that gives realistic results and relations closer to reality for concepts not located on the same path.
Abstract: Existing measures give unrealistic results for concepts that are not located on the same path in an ontology. In order to overcome this problem, we propose, in this paper, a new similarity measure giving realistic results and relations closer to reality for concepts not located on the same path. The paper presents a similarity measure that is suited for comparing concepts in an ontology. Since finding similar concepts is a core task in the area of ontology alignment/merging [5][6], the proposed measure can be adopted effectively in this field.