
Showing papers in "International Journal of Computer Theory and Engineering in 2009"


Journal ArticleDOI
TL;DR: A nonlinear regression method is found to be suitable for training the SVM for weather prediction; the results are compared with a Multi Layer Perceptron (MLP) trained with the back-propagation algorithm, and the performance of the SVM is found to be consistently better.
Abstract: Weather prediction is a challenging task for researchers and has drawn a lot of research interest in recent years. Literature studies have shown that machine learning techniques achieve better performance than traditional statistical methods. This paper presents an application of Support Vector Machines (SVMs) for weather prediction. Time series data of daily maximum temperature at a location is analyzed to predict the maximum temperature of the next day at that location based on the daily maximum temperatures for a span of previous n days, referred to as the order of the input. Performance of the system is observed over various spans of 2 to 10 days using optimal values of the kernel function. A nonlinear regression method is found to be suitable to train the SVM for this application. The results are compared with a Multi Layer Perceptron (MLP) trained with the back-propagation algorithm, and the performance of the SVM is found to be consistently better.
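A minimal sketch of the sliding-window regression setup described above, using scikit-learn's SVR with an RBF kernel. The synthetic temperature series, the order of 5 days, and the kernel parameters are illustrative assumptions, not the paper's data or tuned values.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic daily maximum temperature series (stand-in for the real station data)
rng = np.random.default_rng(0)
temps = 25 + 5 * np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 1, 400)

order = 5  # span of previous days used as the input window
X = np.array([temps[i:i + order] for i in range(len(temps) - order)])
y = temps[order:]  # next-day maximum temperature to predict

split = int(0.8 * len(X))
svr = SVR(kernel='rbf', C=10.0, epsilon=0.1)  # kernel settings are illustrative, not the tuned optima
svr.fit(X[:split], y[:split])

mae = np.mean(np.abs(svr.predict(X[split:]) - y[split:]))
print(f"Mean absolute error on held-out days: {mae:.2f} degrees")
```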

281 citations


Journal ArticleDOI
TL;DR: Quantitative and qualitative comparisons of the results obtained by the proposed method with the results achieved from the other speckle noise reduction techniques demonstrate its higher performance for speckle reduction.
Abstract: In medical image processing, image denoising has become an essential step throughout diagnosis. A balance between preserving useful diagnostic information and suppressing noise must be maintained in medical images. In general we rely on the judgement of an expert to assess the quality of processed images. In certain cases, for instance in ultrasound images, the noise can obscure information which is valuable to the general practitioner. Consequently medical images vary widely, and it is crucial to operate on a case-by-case basis. This paper presents a wavelet-based thresholding scheme for noise suppression in ultrasound images. Quantitative and qualitative comparisons of the results obtained by the proposed method with the results achieved from the other speckle noise reduction techniques demonstrate its higher performance for speckle reduction.
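For orientation, a generic wavelet soft-thresholding routine of the kind the abstract describes is sketched below using PyWavelets; the universal threshold, the db4 wavelet, and the random stand-in image are assumptions, not the paper's specific thresholding rule.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db4', level=2):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise estimate from the finest diagonal detail band (Donoho's rule of thumb)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(128, 128)  # stand-in for a speckled ultrasound frame
clean = wavelet_denoise(noisy)
```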

212 citations


Journal ArticleDOI
TL;DR: The comparative analysis of various image edge detection methods is presented, and it is shown that the Canny edge detection algorithm performs better than all the other operators under almost all scenarios.
Abstract: Edges characterize boundaries and are therefore considered of prime importance in image processing. Edge detection filters out useless data, noise and frequencies while preserving the important structural properties in an image. Since edge detection is at the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection methods. In this paper the comparative analysis of various image edge detection methods is presented. The evidence for the best detector type is judged by studying the edge maps relative to each other through statistical evaluation. Upon this evaluation, an edge detection method can be employed to characterize edges to represent the image for further analysis and implementation. It has been shown that the Canny edge detection algorithm performs better than all these operators under almost all scenarios.
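As a concrete illustration of the kind of comparison the paper performs, the sketch below runs the Canny detector against a simple thresholded Sobel magnitude on a synthetic image using OpenCV; the thresholds and the test image are illustrative choices, not the paper's evaluation protocol.

```python
import cv2
import numpy as np

# Synthetic test image: a filled circle on a dark background
img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (64, 64), 40, 255, -1)
img = cv2.GaussianBlur(img, (5, 5), 1.4)

# Canny edge map (hysteresis thresholds are illustrative)
canny = cv2.Canny(img, threshold1=50, threshold2=150)

# Thresholded Sobel gradient magnitude for comparison
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel = (np.hypot(gx, gy) > 100).astype(np.uint8) * 255

print("Canny edge pixels:", int(np.count_nonzero(canny)))
print("Sobel edge pixels:", int(np.count_nonzero(sobel)))
```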

200 citations


Journal ArticleDOI
TL;DR: An overview of previous and present conditions of the PSO algorithm as well as its opportunities and challenges is presented and all major PSO-based methods are comprehensively surveyed.
Abstract: The Particle Swarm Optimization (PSO) algorithm, as one of the latest algorithms inspired by nature, was introduced in the mid 1990s and since then it has been utilized as an optimization tool in various applications, ranging from biological and medical applications to computer graphics and music composition. In this paper, following a brief introduction to the PSO algorithm, the chronology of its evolution is presented and all major PSO-based methods are comprehensively surveyed. Next, these methods are studied separately and their important factors and parameters are summarized in a comparative table. In addition, a new taxonomy of PSO-based methods is presented. The purpose of this paper is to present an overview of previous and present conditions of the PSO algorithm as well as its opportunities and challenges. Accordingly, the history, various methods, and taxonomy of this algorithm are discussed and its different applications together with an analysis of these applications are evaluated. among agents on survival of the fittest. Algorithms related to this group include Evolutionary Programming (EP), Genetic Programming (GP), and Differential Evolution (DE). The Ontogeny group is associated with the algorithms in which the adaptation of a particular organism to its environment happens. Algorithms like PSO and Genetic Algorithms (GA) are of this type and, in fact, they have a cooperative nature in comparison with other types (16). The advantages of the above-mentioned categories include their ability to be developed for various applications and not needing prior knowledge of the problem space. Their drawbacks include no guarantee of finding an optimum solution and high computational costs in computing the Fitness Function (F.F.) over intensive iterations. Among the aforementioned paradigms, the PSO algorithm seems to be an attractive one to study, since it has a simple but efficient nature in addition to being novel. It can even be a substitution for other basic and important evolutionary algorithms. The most important similarity between these paradigms and the GA is in having the same interactive population. This algorithm, compared to GA, has a faster speed in finding the solutions close to the optimum, and it is faster than GA in premature convergence (4).
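A minimal global-best PSO, in the canonical form the survey starts from, is sketched below; the inertia weight, acceleration coefficients, and sphere objective are standard textbook choices, not parameters taken from any of the surveyed variants.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical global-best particle swarm optimization (minimization)."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda x: float(np.sum(x ** 2)))  # sphere function as a toy objective
print(best_x, best_f)
```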

194 citations


Journal ArticleDOI
TL;DR: In this paper, distance measurement of an obstacle using a separate ultrasonic transmitter, a receiver and a microcontroller is presented, and the experimental setup and results are described.
Abstract: Distance measurement of an object in the path of a person, equipment, or a vehicle, stationary or moving, is used in a large number of applications such as robotic movement control, vehicle control, a blind man's walking stick, medical applications, etc. Measurement using ultrasonic sensors is one of the cheapest among various options. In this paper, distance measurement of an obstacle using a separate ultrasonic transmitter, a receiver and a microcontroller is presented. The experimental setup and results are described and explained.
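The underlying computation is the round-trip time-of-flight relation d = c * t / 2. A small sketch of the host-side calculation follows; the speed-of-sound constant and the example echo time are illustrative, and the microcontroller timing code itself is not shown.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius (illustrative constant)

def distance_m(echo_time_s: float) -> float:
    """Obstacle distance from the measured echo round-trip time.
    The pulse travels to the obstacle and back, hence the division by two."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Example: a 5.8 ms round trip corresponds to roughly one metre
print(f"{distance_m(0.0058):.2f} m")
```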

68 citations


Journal ArticleDOI
TL;DR: A new singular value decomposition (SVD) and discrete wavelet transformation (DWT) based technique is proposed for hiding a watermark in the full frequency band of color images (DSFW), and it is observed that the quality of the watermarked image is maintained with a value of 36 dB.
Abstract: Due to advances in computer technology and readily available tools, it is very easy for unknown users to produce illegal copies of the multimedia data floating across the Internet. In order to protect such multimedia data on the Internet, many techniques are available, including various encryption techniques, steganography techniques, watermarking techniques and information hiding techniques. Digital watermarking is a technique in which a piece of digital information is embedded into an image and extracted later for ownership verification. Secret digital data can be embedded either in the spatial domain or in the frequency domain of the cover data. In this paper, a new singular value decomposition (SVD) and discrete wavelet transformation (DWT) based technique is proposed for hiding a watermark in the full frequency band of color images (DSFW). The quality of the watermarked image and of the extracted watermark is measured using the peak signal to noise ratio (PSNR) and normalized correlation (NC) respectively. It is observed that the quality of the watermarked image is maintained with a value of 36 dB. Robustness of the proposed algorithm is tested against various attacks including salt and pepper noise, Gaussian noise, cropping and JPEG compression.
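A generic DWT-plus-SVD embedding step of the kind the abstract outlines is sketched below: the watermark's singular values are added to those of the cover's LL subband. The Haar wavelet, the scaling factor alpha, and the single-channel random images are assumptions; the paper's DSFW scheme targets the full frequency band of colour images and may embed differently.

```python
import numpy as np
import pywt

def embed_dwt_svd(cover, watermark, alpha=0.05):
    """Embed watermark singular values into the LL band of a one-level DWT (illustrative scheme)."""
    LL, (LH, HL, HH) = pywt.dwt2(cover, 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
    S_marked = S + alpha * Sw                       # additive embedding in the singular values
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

cover = np.random.rand(256, 256)                    # stand-in for one colour channel
wm = np.random.rand(128, 128)                       # same size as the LL band at level 1
marked = embed_dwt_svd(cover, wm)

psnr = 10 * np.log10(1.0 / np.mean((marked - cover) ** 2))
print(f"PSNR of watermarked image: {psnr:.1f} dB")
```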

67 citations


Journal ArticleDOI
TL;DR: The results indicate that Q-PAR is able to discover the required path with lower overheads; the network lifetime increased by around 24-29% for small networks, the packet delivery ratio improved, and packets experienced a low average delay.
Abstract: In this paper, a QoS-based power-aware routing protocol (Q-PAR) is proposed and evaluated that selects an energy-stable, QoS-constrained end-to-end path. The selected route is energy stable and satisfies the bandwidth constraint of the application. The protocol Q-PAR is divided into two phases. In the first, route discovery phase, the bandwidth and energy constraints are built into the DSR route discovery mechanism. In the event of an impending link failure, the second phase, a repair mechanism, is invoked to search for an energy-stable alternate path locally. Simulation was performed to determine the network lifetime, the throughput, the end-to-end delay experienced by packets, and other parameters. The results indicate that Q-PAR is able to discover the required path with lower overheads; the network lifetime increased by around 24-29% for small networks (20-50 nodes), the packet delivery ratio improved, and packets experienced a low average delay. Moreover, the local repair mechanism was able to find an alternate path in most cases, which enhanced the network lifetime and delayed the repair and reconstruction of the route.

66 citations


Journal ArticleDOI
TL;DR: The results show that the proposed modified artificial fish swarm algorithm (MAFSA) is insensitive to initial values, has strong robustness, and has faster convergence speed and better estimation precision than the methods based on the Genetic Algorithm and simulated annealing.
Abstract: One of the open issues in grid computing is efficient job scheduling. Job scheduling is known to be NP-complete; therefore the use of heuristics is the de facto approach in order to cope in practice with its difficulty. In this paper, we propose a modified artificial fish swarm algorithm (MAFSA) for job scheduling. The basic idea of AFSA is to imitate fish behaviors such as preying, swarming, and following, with local search by individual fish, in order to reach the global optimum. The results show that our method is insensitive to initial values, has strong robustness, and has faster convergence speed and better estimation precision than the methods based on the Genetic Algorithm (GA) and simulated annealing (SA).

60 citations


Journal ArticleDOI
TL;DR: A method of facial expression recognition, based on Eigenspaces is presented, using a method that was modified from eigenface recognition to identify the user’s facial expressions from the input images.
Abstract: Making computer systems recognize and infer facial expressions from the user image is a challenging research topic. A method of facial expression recognition based on eigenspaces is presented in this paper. In our approach we identify the user's facial expressions from the input images, using a method that was modified from eigenface recognition. We have evaluated our method in terms of recognition accuracy using two well known facial expression databases, the Cohn-Kanade facial expression database and the Japanese Female Facial Expression database. The experimental results show the effectiveness of our scheme.
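The eigenspace idea can be summarized in a few lines: project mean-centred face images onto the leading principal components and classify by nearest neighbour in that subspace. The sketch below uses random arrays in place of Cohn-Kanade or JAFFE images, and the component count and nearest-neighbour classifier are assumptions rather than the paper's exact modification.

```python
import numpy as np

def fit_eigenspace(train_imgs, n_components=20):
    """PCA via SVD on mean-centred, flattened face images of shape (n_samples, h*w)."""
    mean = train_imgs.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_imgs - mean, full_matrices=False)
    return mean, Vt[:n_components]          # mean face and eigen-expression basis

def project(imgs, mean, basis):
    return (imgs - mean) @ basis.T

rng = np.random.default_rng(0)
train = rng.random((60, 48 * 48))            # stand-in for 60 expression images
labels = np.repeat(np.arange(6), 10)         # six expression classes
mean, basis = fit_eigenspace(train)
coeffs = project(train, mean, basis)

def classify(img):
    """Nearest neighbour in the eigenspace."""
    c = project(img[None, :], mean, basis)
    return labels[np.linalg.norm(coeffs - c, axis=1).argmin()]

print(classify(train[13]))                   # recovers the label of training sample 13
```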

55 citations


Journal ArticleDOI
TL;DR: A description and comparison of encryption methods and representative video encryption algorithms are presented, with respect not only to their encryption speed but also to their security level and stream size, and a trade-off between the quality of video streaming and the choice of encryption algorithm is shown.
Abstract: With the rapid development of various multimedia technologies, more and more multimedia data are generated and transmitted in the medical, commercial, and military fields, which may include sensitive information that should not be accessed by, or can only be partially exposed to, general users. Therefore, security and privacy have become important concerns. Over the last few years several encryption algorithms have been applied to secure video transmission. While a large number of multimedia encryption schemes have been proposed in the literature and some have been used in real products, cryptanalytic work has shown the existence of security problems and other weaknesses in most of the proposed multimedia encryption schemes. In this paper, a description and comparison of encryption methods and representative video algorithms are presented, with respect not only to their encryption speed but also to their security level and stream size. A trade-off between the quality of video streaming and the choice of encryption algorithm is shown. Achieving efficiency, flexibility and security remains a challenge for researchers.

55 citations


Journal ArticleDOI
TL;DR: An evaluation of six of the most common encryption algorithms, including AES, on power consumption for wireless devices is provided; the comparison has been conducted at different settings for each algorithm, such as different sizes of data blocks, different data types, battery power consumption, data transmission through a wireless network, and encryption/decryption speed.
Abstract: As the popularity of wireless networks increases, so does the need to protect them. Encryption algorithms play a main role in information security systems. On the other side, those algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. This paper illustrates the key concepts of security, wireless networks, and security over wireless networks. Wireless security is demonstrated by applying the common security standards (802.11 WEP and 802.11i WPA, WPA2), and an evaluation of six of the most common encryption algorithms, including AES, on power consumption for wireless devices is provided. A comparison has been conducted for those encryption algorithms at different settings for each algorithm, such as different sizes of data blocks, different data types, battery power consumption, data transmission through a wireless network, and finally encryption/decryption speed. Experimental results are given to demonstrate the effectiveness of each algorithm.

Journal ArticleDOI
TL;DR: This paper gives an energy efficient approach to query processing by implementing new optimization techniques applied to in-network aggregation and evaluates it through several simulations to prove its efficiency, competence and effectiveness.
Abstract: Existing sensor network data aggregation techniques assume that the nodes are preprogrammed and send data to a central sink for offline querying and analysis. This approach faces two major drawbacks. First, the system behavior is preprogrammed and cannot be modified on the fly. Second, the increased energy wastage due to the communication overhead will result in decreasing the overall system lifetime. Thus, energy conservation is of prime consideration in sensor network protocols in order to maximize the network's operational lifetime. In this paper, we give an energy efficient approach to query processing by implementing new optimization techniques applied to in-network aggregation. We first discuss earlier approaches in sensor data management and highlight their disadvantages. We then present our approach and evaluate it through several simulations to prove its efficiency, competence and effectiveness.

Journal ArticleDOI
TL;DR: Results show that the value of MMRE (Mean Magnitude of Relative Error) obtained by applying Fuzzy Logic was substantially lower than the MMRE of other Fuzzy Logic models.
Abstract: Software development has always been characterised by metrics. In this area, one of the greatest challenges for software developers over the last decades has been predicting the development effort for a software system based on developer abilities, size, complexity and other metrics. The ability to give a good estimate of software development effort is required by project managers. Most of the traditional techniques such as function points, regression models, COCOMO, etc., require a long-term estimation process. New paradigms such as Fuzzy Logic may offer an alternative for this challenge. Many of the problems of the existing effort estimation models can be solved by incorporating Fuzzy Logic. This paper describes an enhanced Fuzzy Logic model for the estimation of software development effort and proposes a new approach by applying Fuzzy Logic for software effort estimates. Results show that the value of MMRE (Mean Magnitude of Relative Error) obtained by applying Fuzzy Logic was substantially lower than the MMRE of other Fuzzy Logic models.
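The evaluation metric mentioned above is straightforward to compute; a short sketch of MMRE over a set of projects (the effort figures are purely illustrative):

```python
import numpy as np

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: mean of |actual - estimated| / actual."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return float(np.mean(np.abs(actual - estimated) / actual))

# Illustrative effort values in person-months for three projects
print(round(mmre([120, 80, 45], [100, 90, 50]), 3))   # about 0.134
```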

Journal ArticleDOI
TL;DR: A PID compensator adjusted by a genetic algorithm is designed, then another compensator is designed by combining two methods, an Integral controller and an optimal State Feedback controller; the comparison shows that the optimal controller significantly reduces the overshoot and settling time and has the best performance in the presence of system uncertainties.
Abstract: Velocity control of DC motors is an important issue, and a shorter settling time is desired. In this paper, first a PID compensator adjusted by a genetic algorithm is designed; then another compensator is designed by combining two methods, an Integral controller and an optimal State Feedback controller (I&S.F.). In the second compensator, the design specifications depend on choosing the weighting matrices Q and R, and we use the Genetic Algorithm (GA) to find proper weighting matrices. A Kalman filter is used as a system observer in order to increase the system robustness. The performance of the control techniques is then compared in terms of rise time, settling time, tracking error, and robustness with respect to modeling errors and disturbances. The controller design process and implementation requirements are also discussed. The comparison between the PID control and the optimal control shows that the optimal controller significantly reduces the overshoot and settling time and has the best performance in the presence of system uncertainties. We also apply noise and 20% parameter variation to the DC motor and compare the results. According to the simulation results, the second controller has better performance than the PID controller.
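To make the tuning loop concrete, the sketch below simulates a first-order motor speed model under PID control and searches for gains that minimize an ITAE cost; the motor parameters, the cost function, and the random search standing in for the GA are all illustrative assumptions, not the paper's model or algorithm.

```python
import numpy as np

def itae_cost(Kp, Ki, Kd, T=2.0, dt=1e-3):
    """Step-response ITAE cost of a first-order speed model dw/dt = (-w + K*u)/tau under PID."""
    tau, K, setpoint = 0.5, 2.0, 1.0
    w, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(int(T / dt)):
        err = setpoint - w
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = Kp * err + Ki * integ + Kd * deriv
        w += dt * (-w + K * u) / tau
        cost += (k * dt) * abs(err) * dt          # integral of time-weighted absolute error
        prev_err = err
    return cost

# Crude random search standing in for the GA that selects (Kp, Ki, Kd)
rng = np.random.default_rng(1)
best = min((rng.uniform(0, 10, 3) for _ in range(300)), key=lambda g: itae_cost(*g))
print("gains:", np.round(best, 2), "ITAE:", round(itae_cost(*best), 4))
```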

Journal ArticleDOI
TL;DR: The proposed approach is compared to traditional user-based and item-based recommendation algorithms, and from the evaluation results it is observed that it shows improvement in recommendations over the traditional algorithms.
Abstract: Collaborative Recommender Systems provide personalized recommendations to users using the rating profiles of different users. These systems should maintain an accurate model of a user's interests and needs by collecting the user preferences either explicitly or implicitly using a numerical scale. Although most of the current systems maintain single user ratings in the user-item ratings matrix, these single ratings do not provide useful information regarding the reason behind the user's preference. However, the multicriteria based systems provide an opportunity to compute accurate recommendations by maintaining the details of user preferences in multiple aspects. Apart from this, the user ratings are usually subjective, imprecise and vague in nature, because they are based on the user's perceptions and opinions. Fuzzy sets seem to be an appropriate paradigm to handle the uncertainty and fuzziness of human decision making behavior and to effectively model the natural complexity of human behavior. For these reasons, this paper adopts the Fuzzy linguistic approach to efficiently represent the user ratings and the Fuzzy Multicriteria Decision Making (FMCDM) approach to accurately rank the relevant items for a user. This work empirically evaluates the proposed approach's performance through a Music Recommender system developed for this research. The proposed approach's performance is compared to traditional user-based and item-based recommendation algorithms. From the evaluation results, it is observed that the proposed approach shows improvement in recommendations over the traditional algorithms.

Journal ArticleDOI
TL;DR: This paper presents a novel trust-based scheme for identifying and isolating nodes that create a wormhole in the network without engaging any cryptographic means and demonstrates that it functions effectively in the presence of malicious colluding nodes and does not impose any unnecessary conditions upon the network establishment and operation phase.
Abstract: Wireless networks are susceptible to many attacks, including an attack known as the wormhole attack. The wormhole attack is very powerful, and preventing the attack has proven to be very difficult. A strategic placement of the wormhole can result in a significant breakdown in communication across a wireless network. In such attacks two or more malicious colluding nodes create a higher-level virtual tunnel in the network, which is employed to transport packets between the tunnel endpoints. These tunnels emulate shorter links in the network and so act as a benefit to unsuspecting network nodes which by default seek shorter routes. This paper presents a novel trust-based scheme for identifying and isolating nodes that create a wormhole in the network without engaging any cryptographic means. With the help of extensive simulations, we demonstrate that our scheme functions effectively in the presence of malicious colluding nodes and does not impose any unnecessary conditions upon the network establishment and operation phase. An ad-hoc network is built, operated, and maintained by its constituent wireless nodes. These nodes generally have a limited transmission range, and so each node seeks the assistance of its neighbouring nodes in forwarding packets. In order to establish routes between nodes which are farther apart than a single hop, specially configured routing protocols are engaged. The unique feature of these protocols is their ability to trace routes in spite of a dynamic topology. The nodes in an ad-hoc network generally have limited battery power, and so reactive routing protocols endeavor to save upon this by discovering routes only when they are essentially required. In contrast, proactive routing protocols continuously establish and maintain routes, so as to avoid the latency that occurs during new route discoveries. Both types of routing protocols require persistent cooperative behaviour, with intermediate nodes primarily contributing to

Journal ArticleDOI
TL;DR: The proposed algorithm integrates the fuzzy set concepts and Apriori mining algorithm to find useful fuzzy association rules and then hide them using privacy preserving technique and shows that the proposed algorithm hides more rules and maintains higher data quality of the released database.
Abstract: Data mining is the process of extracting useful patterns or knowledge from large databases. However, data mining also poses a threat to privacy and information protection if not done or used properly. Therefore, researchers need to investigate data mining algorithms from a new point of view, that of personal privacy. Many algorithms have been developed to hide association rules discovered from a binary database. But in real applications, data mostly consist of quantitative values. In this paper, we thus propose a fuzzy association rule hiding algorithm for hiding rules discovered from a quantitative database. The proposed algorithm integrates fuzzy set concepts and the Apriori mining algorithm to find useful fuzzy association rules and then hide them using a privacy preserving technique. For hiding purposes, we decrease the support of the rule to be hidden by decreasing the support value of an item in either the Left Hand Side (L.H.S.) or the Right Hand Side (R.H.S.) of the rule. Experimental results show that the proposed algorithm hides more rules and maintains higher data quality of the released database.

Journal ArticleDOI
TL;DR: A new genetic algorithm, named TDGASA, is presented whose running time depends on the number of tasks in the scheduling problem, and its computation time to find a sub-optimal schedule is improved by Simulated Annealing (SA).
Abstract: The static task scheduling problem in distributed systems is very important because of the optimal usage of available machines and the acceptable computation time of the scheduling algorithm. Solving this problem using dynamic programming or backtracking needs much more time. Therefore, there have been many attempts to solve it using heuristic methods. In this paper, a new genetic algorithm, named TDGASA, is presented whose running time depends on the number of tasks in the scheduling problem. Then, the computation time of TDGASA to find a sub-optimal schedule is improved by Simulated Annealing (SA). The results show that the computation time of the proposed algorithm decreases compared to an existing GA-based algorithm, although the completion time of the final scheduled task in the system decreases a little.
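For orientation, a generic GA for static task-to-processor assignment minimizing the makespan is sketched below; the encoding, truncation selection, one-point crossover, and random-reset mutation are conventional choices and do not reproduce TDGASA or its Simulated Annealing refinement.

```python
import numpy as np

def makespan(assignment, task_times, n_procs):
    """Completion time of the most loaded processor for a task-to-processor assignment."""
    loads = np.zeros(n_procs)
    for task, proc in enumerate(assignment):
        loads[proc] += task_times[task]
    return loads.max()

def ga_schedule(task_times, n_procs=4, pop=40, gens=200, pm=0.1, seed=0):
    """Generic GA for static task scheduling (illustrative, not the paper's TDGASA)."""
    rng = np.random.default_rng(seed)
    n = len(task_times)
    popu = rng.integers(0, n_procs, (pop, n))
    for _ in range(gens):
        fit = np.array([makespan(ind, task_times, n_procs) for ind in popu])
        parents = popu[np.argsort(fit)[:pop // 2]]       # truncation selection
        cuts = rng.integers(1, n, pop // 2)              # one-point crossover points
        children = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cuts)])
        mask = rng.random(children.shape) < pm           # random-reset mutation
        children[mask] = rng.integers(0, n_procs, mask.sum())
        popu = np.vstack((parents, children))
    fit = np.array([makespan(ind, task_times, n_procs) for ind in popu])
    return popu[fit.argmin()], fit.min()

tasks = np.random.default_rng(1).integers(1, 20, 30)     # 30 task durations (illustrative)
best, span = ga_schedule(tasks)
print("best makespan:", span)
```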

Journal ArticleDOI
TL;DR: The Self-Organizing Map is explored and visualized, and the capabilities of SOM in text classification are portrayed, enabling the user query process to be precise and optimized.
Abstract: Innovative methods that are user friendly and efficient are needed for retrieval of textual information available on the World Wide Web. The self-organizing map (SOM) is one of the most widely used neural network algorithms. SOM can be used to identify clusters of documents with similar context and content. In this paper, we explore and visualize the Self-Organizing Map and discuss how to classify text documents. The paper also portrays the capabilities of SOM in text classification. We also discuss experiments done using the 20 Newsgroups dataset. I. INTRODUCTION In the current scenario, browsing for exact information has become a very tedious job as the number of electronic documents on the Internet has grown gargantuan and is still growing. It is necessary to classify the documents into categories so that retrieval of documents becomes easy and more efficient. For example, we have enough information to classify news articles of ten years, but it is impossible to see what is going on without mechanical assistance. We try to overcome this difficulty by efficiently organizing the documents into sets of related topics or categories, which enables the user query process to be precise and optimized. Basically, document classification can be defined as content-based assignment of one or more predefined categories or topics to documents, i.e., the collection of words determines the best-fit category for this collection of words. The goal of all document classifiers is to assign documents to one or more content categories such as technology, entertainment, sports, politics, etc. Classification of any type of text document is possible, including traditional documents such as memos and reports as well as e-mails, web pages, etc. Many techniques have been proposed, such as Principal Component Analysis and Singular Value Decomposition. The results of these techniques are difficult to interpret and sometimes they lose accuracy. To overcome these problems, document vectors based on significant words in each category are adopted. Even vector-based documents are of very high dimension, which causes difficulty in
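A didactic self-organizing map over document vectors is sketched below; it uses random vectors in place of tf-idf representations of the 20 Newsgroups articles, and the grid size, learning-rate schedule, and Gaussian neighbourhood are conventional choices rather than the paper's settings.

```python
import numpy as np

class TinySOM:
    """Minimal rectangular SOM with a Gaussian neighbourhood and linear decay."""
    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows, cols, dim))
        self.grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'))
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        """Best-matching unit: grid cell whose weight vector is closest to x."""
        d = np.linalg.norm(self.w - x, axis=2)
        return np.unravel_index(d.argmin(), d.shape)

    def train(self, data, epochs=20):
        for t in range(epochs):
            lr = self.lr * (1 - t / epochs)
            sig = self.sigma * (1 - t / epochs) + 1e-3
            for x in data:
                b = np.array(self.bmu(x))
                h = np.exp(-np.sum((self.grid - b) ** 2, axis=2) / (2 * sig ** 2))
                self.w += lr * h[..., None] * (x - self.w)

docs = np.random.rand(100, 50)          # stand-in for tf-idf document vectors
som = TinySOM(8, 8, 50)
som.train(docs)
print(som.bmu(docs[0]))                  # grid cell (topic cluster) of the first document
```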

Journal ArticleDOI
TL;DR: Covering-based rough set theory is an extension of classical rough set theory from a topological point of view, and the relationships among upper approximations based on topological spaces are explored.
Abstract: Covering-based rough set theory is an extension of classical rough set theory. The main purpose of this paper is to study covering rough sets from a topological point of view. The relationships among upper approximations based on topological spaces are explored.
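One common pair of covering approximations can be written directly in terms of the blocks of the covering; a small sketch follows, where the covering and target set are toy examples and this is only one of the several upper approximations studied in the covering-based literature.

```python
# Covering-based approximations of a target set X in a universe U:
# the lower approximation collects blocks contained in X,
# the upper approximation collects blocks that intersect X.
def lower_approx(cover, X):
    return {x for K in cover if K <= X for x in K}

def upper_approx(cover, X):
    return {x for K in cover if K & X for x in K}

U = {1, 2, 3, 4, 5}
cover = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({4, 5})]  # a covering of U
X = {1, 2, 4}
print(lower_approx(cover, X))   # {1, 2}
print(upper_approx(cover, X))   # {1, 2, 3, 4, 5}
```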


Journal ArticleDOI
TL;DR: This paper tries to present a fair comparison between the most commonly used algorithms in the data encryption field according to power consumption, by providing an evaluation of six of the most popular encryption algorithms, namely AES (Rijndael), DES, 3DES, RC2, Blowfish, and RC6.
Abstract: Encryption algorithms are known to be computationally intensive. They play a main role in information security systems. On the other side, those algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. This paper tries to present a fair comparison between the most commonly used algorithms in the data encryption field according to power consumption. It provides an evaluation of six of the most common encryption algorithms, namely AES (Rijndael), DES, 3DES, RC2, Blowfish, and RC6. A comparison has been conducted for those encryption algorithms at different settings for each algorithm, such as different sizes of data blocks, different data types, and finally encryption/decryption speed. Experimental results are given to demonstrate the effectiveness of each algorithm.
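A rough throughput benchmark of the kind such a comparison rests on can be written with PyCryptodome; the sketch below times ECB-mode encryption of a random buffer for several of the listed ciphers. RC6 is not available in PyCryptodome and is omitted, and the buffer size and key lengths are illustrative, so this is not the paper's measurement setup.

```python
import os
import time
from Crypto.Cipher import AES, DES, DES3, Blowfish, ARC2   # pip install pycryptodome

def throughput_mb_s(factory, key_len, megabytes=4):
    data = os.urandom(megabytes * 2 ** 20)        # block-aligned random payload
    cipher = factory(os.urandom(key_len))
    start = time.perf_counter()
    cipher.encrypt(data)
    return megabytes / (time.perf_counter() - start)

suites = {
    'AES-128':  (lambda k: AES.new(k, AES.MODE_ECB), 16),
    'DES':      (lambda k: DES.new(k, DES.MODE_ECB), 8),
    '3DES':     (lambda k: DES3.new(k, DES3.MODE_ECB), 24),
    'RC2':      (lambda k: ARC2.new(k, ARC2.MODE_ECB), 16),
    'Blowfish': (lambda k: Blowfish.new(k, Blowfish.MODE_ECB), 16),
}
for name, (factory, key_len) in suites.items():
    print(f"{name:9s} {throughput_mb_s(factory, key_len):8.1f} MB/s")
```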

Journal ArticleDOI
TL;DR: An effective system for classification of electroencephalogram (EEG) signals that contain credible cases of brain tumor is presented, enabling early detection of brain tumors and initiating quicker clinical responses.

Journal ArticleDOI
TL;DR: A novel oblivious and robust multiple image watermarking scheme using Multiple Descriptions (MD) and Quantization Index Modulation (QIM) of the host image is presented, achieving robustness to both local and global attacks.
Abstract: A novel oblivious and robust multiple image watermarking scheme using Multiple Descriptions (MD) and Quantization Index Modulation (QIM) of the host image is presented in this paper. Watermark embedding is done at two stages. In the first stage, Discrete Cosine Transform (DCT) of odd description of the host image is computed. The watermark image is embedded in the resulting DC coefficients. In the second stage, a copy of the watermark image is embedded in the watermarked image generated at the first stage. This enables us to achieve robustness to both local and global attacks. This algorithm is highly robust for different attacks on the watermarked image and superior in terms of Peak Signal to Noise Ratio (PSNR) and Normalized Cross correlation (NC).
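The QIM step itself is compact: each selected DC coefficient is re-quantized with one of two interleaved quantizers chosen by the watermark bit. A sketch with an illustrative step size delta follows; the multiple-description splitting and DCT stages are not shown.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Quantize a coefficient onto the lattice selected by the watermark bit."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((coeff - offset) / delta) + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which lattice the coefficient lies closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

dc = 37.3                            # an illustrative DC coefficient
marked = qim_embed(dc, 1)
print(marked, qim_extract(marked))   # the embedded bit 1 is recovered
```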



Journal ArticleDOI
TL;DR: A feature extraction method is proposed that measures the distribution of black and white pixels representing various strokes in a character image by computing the weights on all the four corners on a pixel due to its neighboring black pixels, named as Neighborhood Pixels Weights (NPW).
Abstract: In OCR applications, the feature extraction methods used to recognize document images play an important role. The feature extraction methods may be statistical, structural, or based on transforms and series expansions. Structural features are very difficult to extract, particularly in handwritten applications. The structural behavior of the strokes existing in handwritten expressions can be estimated through statistical methods too. In this paper, a feature extraction method is proposed that measures the distribution of black and white pixels representing various strokes in a character image by computing the weights on all four corners of a pixel due to its neighboring black pixels. The feature is named Neighborhood Pixels Weights (NPW). Its recognition performance is compared with some feature extraction methods, which have generally been used as secondary feature extraction methods for the recognition of many scripts in the literature, on noisy and non-noisy handwritten character images. The experiments have been conducted using 17000 Devanagari handwritten character images and two classifiers, i.e., a Probabilistic Neural Network and a k-Nearest Neighbor classifier. The NPW feature is better than the other features studied here in both noisy and noise-free situations. The structural features are based on the geometrical and topological properties of a character under consideration, and these properties may be local or global (1). A character is composed of a number of components in the form of strokes. These strokes may be lines, arcs, curves, etc., and may or may not be connected to each other depending upon the structure of a character. These components are also called stroke primitives and can be extracted from either the skeleton or the contour of a character image. In structure-based recognition, the various stroke primitives of a character are extracted and approximated, and the relationships between the various stroke components are established. It is somewhat difficult to extract and approximate the various stroke primitives existing in a character image, as in some cases strokes may not touch where touching is required, and strokes may unnecessarily touch where touching is not required in the basic structure of a character while printing or writing. This approach also requires matching an approximated stroke primitive with stored prototypes, which is not only complex to model but also requires multi-level heuristics. Moreover, these features are extracted from binary images only. The problems faced with structural features can be easily overcome with statistical features, which are based on the statistical distribution of black and white pixels in a character image. These features may be extracted from binary or gray scale images and are invariant to character distortions and writing styles to some extent. The features are easy to extract and can be computed with high speed, as at a given pixel only some arithmetic or logic operations are required, which take less computational time and are not difficult to

Journal ArticleDOI
TL;DR: A requirement engineering process is suggested that composes UML scenarios to obtain a global description of a given service of the system and implementation code from the UML use case (service).
Abstract: Object-oriented software development matured significantly during the past ten years. The Unified Modeling Language (UML) is generally accepted as the de facto standard modeling notation for the analysis and design of object-oriented software systems. This language provides a suitable framework for scenario acquisition using use case diagrams and sequence or collaboration diagrams. In this paper, we suggest a requirement engineering process that composes UML scenarios to obtain a global description of a given service of the system and implementation code from the UML use case (service). We suggest four operators, a sequential operator, a concurrent operator, a conditional operator and an iteration operator, to compose a set of scenarios that describe a use case of a given system. We developed an algorithm and tool support that can automatically produce a global sequence diagram representing any way of composing them and generate code from the resulting sequence diagram.

Journal ArticleDOI
TL;DR: Two characteristics of three different waveguides employed in arrayed waveguide gratings in passive optical networks (PON) are investigated, where the rates of variation are processed.
Abstract: In the present paper, we have investigated two characteristics of three different waveguides employed in arrayed waveguide gratings (AWG) in passive optical networks (PON), where the rates of variation are processed. Both the thermal and the spectral effects are taken into account. The waveguides are made of lithium niobate, germania-doped silica, and polymethyl methacrylate (PMMA) polymer. The thermal and spectral sensitivities of the optical devices are also analyzed. In general, both qualitative and quantitative analyses of the temporal and spectral responses of the AWG and its sensitivity are parametrically processed over wide ranges of the set of affecting parameters.

Journal ArticleDOI
TL;DR: Different new approaches that can be effectively used as an authentication tool in 3G mobile communications are highlighted.
Abstract: Authentication of the mobile subscriber is a challenge for future researchers due to increasing security threats and attacks with the growing population of wireless traffic. The 3G mobile communication system has been developed to speed up data communication. In general, the authentication technique in 2G mobile communication is solely dependent on checking the authenticity of the MS (Mobile Station or Subscriber) by a challenge/response mechanism. Here authentication is one-way, in which the MSC (Mobile or Main Switching Center) checks the validity of the MS. 3G mobile communication works on two different switching techniques. One is circuit switching for voice and low speed data communications. The other is packet switching, mainly for data communication, but it can support voice communication like VoIP (Voice over Internet Protocol), video telephony, multimedia services, etc. Generally, high speed data communication is established by the packet switching process through PDSN (Packet Data Serving Node) servers. In circuit switching (3G network), authentication is mutual, where both the MS and the MSC or network authenticate each other, but in packet switching only the network (servers in the PDSN) examines the authenticity of the MS. In this paper, we highlight different new approaches that can be effectively used as an authentication tool in 3G mobile communications.
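For intuition, a generic shared-secret challenge/response exchange of the kind 2G/3G authentication builds on is sketched below; the HMAC construction, key length, and response truncation are illustrative stand-ins and do not reproduce the actual 3GPP MILENAGE or A3/A8 algorithms.

```python
import hmac
import hashlib
import os

Ki = os.urandom(16)   # long-term secret shared by the (U)SIM and the network (illustrative)

def signed_response(key: bytes, rand: bytes) -> bytes:
    """Compute a response to a random challenge from the shared secret."""
    return hmac.new(key, rand, hashlib.sha256).digest()[:8]

# Network side: issue a challenge and precompute the expected response
RAND = os.urandom(16)
expected = signed_response(Ki, RAND)

# Mobile side: compute SRES from the same Ki and the received RAND
sres = signed_response(Ki, RAND)

print("subscriber authenticated:", hmac.compare_digest(sres, expected))
```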