
Showing papers in "WSEAS Transactions on Computers archive in 2008"


Journal Article
TL;DR: A way to use the classic statistical methodologies (R/S rescaled-range analysis and the Hurst exponent) to obtain new methods of improving the efficiency of predicting chaotic time series with NARX is identified.
Abstract: The prediction of chaotic time series with neural networks is a traditional practical problem of dynamic systems. This paper is not intended to propose a new model or a new methodology, but to study carefully and thoroughly several aspects of a model for which not enough experimental data have been communicated, and to derive conclusions that would be of interest. Recurrent neural network (RNN) models are important not only for the forecasting of time series but also, more generally, for the control of dynamical systems. An RNN with a sufficiently large number of neurons is a nonlinear autoregressive and moving average (NARMA) model, with "moving average" referring to the inputs. The prediction can be assimilated to the identification of a dynamic process. An architectural approach to RNNs with embedded memory, the "Nonlinear AutoRegressive model process with eXogenous input" (NARX), showing promising qualities for dynamic system applications, is analyzed in this paper. The performance of the NARX model is verified for several types of chaotic or fractal time series applied as input to the neural network, in relation to the number of neurons, the training algorithms and the dimensions of its embedded memory. In addition, this work attempts to identify a way to use the classic statistical methodologies (R/S rescaled-range analysis and the Hurst exponent) to obtain new methods of improving the efficiency of predicting chaotic time series with NARX.
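
As a concrete illustration of the R/S methodology the abstract leans on, here is a minimal self-contained sketch (Python with numpy) of estimating the Hurst exponent by rescaled-range analysis; the doubling chunk sizes are an illustrative choice, not the authors' exact procedure.

import numpy as np

def hurst_rs(series, min_chunk=8):
    # Estimate the Hurst exponent by rescaled-range (R/S) analysis.
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_values = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the mean
            r = dev.max() - dev.min()              # range of the cumulative deviation
            s = chunk.std()
            if s > 0:
                rs_per_chunk.append(r / s)
        sizes.append(size)
        rs_values.append(np.mean(rs_per_chunk))
        size *= 2
    # log(R/S) grows roughly as H * log(size); the slope of the fit is H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# H near 0.5 indicates a random series; H > 0.5 indicates persistence
print(hurst_rs(np.random.randn(4096)))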

333 citations


Journal ArticleDOI
TL;DR: Investigation of the performance of Differential Evolution and its opposition-based version (ODE) on large scale optimization problems confirms that ODE performs much better than DE when the dimensionality of the problems is increased from 500D to 1000D.
Abstract: This work investigates the performance of Differential Evolution (DE) and its opposition-based version (ODE) on large scale optimization problems. Opposition-based differential evolution (ODE) has been proposed based on DE; it employs opposition-based population initialization and generation jumping to accelerate convergence speed. ODE shows promising results in terms of convergence rate, robustness, and solution accuracy. A recently proposed seven-function benchmark test suite for the CEC-2008 special session and competition on large scale global optimization has been utilized for the current investigation. Results interestingly confirm that ODE outperforms its parent algorithm (DE) on all high dimensional (500D and 1000D) benchmark functions (F1-F7). Furthermore, the authors recommend utilizing ODE for more complex search spaces as well, because the results confirm that ODE performs much better than DE when the dimensionality of the problems is increased from 500D to 1000D. All required details about the testing platform, comparison methodology, and achieved results are provided.
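
The opposition-based initialization that distinguishes ODE is easy to state: for a candidate x drawn from [a, b], its opposite is a + b - x, and the fitter of each pair is kept. A minimal sketch, with a placeholder sphere fitness standing in for the CEC-2008 benchmarks:

import numpy as np

def opposition_init(pop_size, dim, lower, upper, fitness):
    # Opposition-based population initialization as used by ODE:
    # evaluate each random candidate and its opposite, keep the fitter ones.
    pop = lower + np.random.rand(pop_size, dim) * (upper - lower)
    opp = lower + upper - pop                 # opposite point inside [lower, upper]
    both = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, both)
    keep = np.argsort(scores)[:pop_size]      # minimization: keep the lowest scores
    return both[keep]

sphere = lambda x: np.sum(x ** 2)             # F1-like placeholder objective
population = opposition_init(100, 1000, -100.0, 100.0, sphere)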

74 citations


Journal Article
TL;DR: The Balanced Productivity Metrics (BPM) strategy and approach for designing and producing useful project metrics from basic test planning and defect data is presented, and several test metrics that can be implemented are described.
Abstract: This paper discusses software test metrics and their ability to show the objective evidence necessary to make process improvements in a development organization. When used properly, test metrics assist in the improvement of the software development process by providing pragmatic, objective evidence of process change initiatives. This paper also describes several test metrics that can be implemented, a method for creating a practical approach to tracking and interpreting the metrics, and illustrates one organization's use of test metrics to prove the effectiveness of process changes. Also, this paper provides the Balanced Productivity Metrics (BPM) strategy and approach for designing and producing useful project metrics from basic test planning and defect data. Software test metrics are a useful tool for test managers: they aid in precise estimation of project effort and address the interests of the metrics group and of software managers who wish to estimate software test effort and improve both the development and testing processes.

65 citations


Journal Article
TL;DR: An overview of text steganography is presented, and a new approach named WhiteSteg is proposed that hides information using inter-word spacing and inter-paragraph spacing as a hybrid method to reduce the visible detection of the embedded messages.
Abstract: Sending encrypted messages frequently will draw the attention of third parties, i.e. crackers and hackers, perhaps causing attempts to break and reveal the original messages. In this digital world, steganography is introduced to hide the existence of the communication by concealing a secret message inside another unsuspicious message. The hidden message may be plaintext, or any data that can be represented as a stream of bits. Steganography is often used together with cryptography and offers an acceptable amount of privacy and security over the communication channel. This paper presents an overview of text steganography and a brief history of steganography, along with various existing techniques of text steganography. Highlighted are some of the problems inherent in text steganography as well as issues with existing solutions. A new approach, named WhiteSteg, is proposed for information hiding, using inter-word spacing and inter-paragraph spacing as a hybrid method to reduce the visible detection of the embedded messages. WhiteSteg offers dynamically generated cover-text with six options of maximum capacity according to the length of the secret message. In addition, the advantage of exploiting whitespace in information hiding is discussed. This paper also analyzes the significant drawbacks of each existing method and how WhiteSteg could be recommended as a solution.
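
The paper's exact encoding is not reproduced here, but the core inter-word-spacing idea can be sketched in a few lines: a single space between words carries a 0 bit, a double space carries a 1 bit.

import re

def embed(cover_text, bits):
    # Hide bits in inter-word gaps: one space encodes 0, two spaces encode 1.
    words = cover_text.split()
    assert len(bits) <= len(words) - 1, "cover text has too few gaps"
    pieces = [words[0]]
    for i, word in enumerate(words[1:]):
        sep = "  " if i < len(bits) and bits[i] == "1" else " "
        pieces.append(sep + word)
    return "".join(pieces)

def extract(stego_text, n_bits):
    # Recover bits by measuring the width of each inter-word gap.
    gaps = re.findall(r" +", stego_text)
    return "".join("1" if len(g) > 1 else "0" for g in gaps[:n_bits])

stego = embed("the quick brown fox jumps over the lazy dog", "1011")
assert extract(stego, 4) == "1011"

A real tool would also use paragraph breaks (the inter-paragraph channel) and cap capacity by the number of available gaps, which is what WhiteSteg's six capacity options manage.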

56 citations


Journal ArticleDOI
TL;DR: Two individual metaheuristic algorithmic solutions, the ArcGIS Network Analyst and the Ant Colony System (ACS) algorithm, are introduced, implemented and discussed for the identification of optimal routes in the case of Municipal Solid Waste (MSW) collection.
Abstract: During the last decade, metaheuristics have become increasingly popular for effectively confronting difficult combinatorial optimization problems. In the present paper, two individual metaheuristic algorithmic solutions, the ArcGIS Network Analyst and the Ant Colony System (ACS) algorithm, are introduced, implemented and discussed for the identification of optimal routes in the case of Municipal Solid Waste (MSW) collection. Both proposed applications are based on a geo-referenced spatial database supported by a Geographic Information System (GIS). GIS are increasingly becoming a central element for coordinating, planning and managing transportation systems, and so, in collaboration with combinatorial optimization techniques, they can be used to improve aspects of transit planning in urban regions. Here, the GIS takes into account all the required parameters for MSW collection (i.e. positions of waste bins, road network and the related traffic, truck capacities, etc.) and its desktop users are able to model realistic network conditions and scenarios. In this case, the simulation consists of scenarios of visiting varied waste collection spots in the Municipality of Athens (MoA). The user, in both applications, is able to define or modify all the required dynamic factors for the creation of an initial scenario, and by modifying these particular parameters, alternative scenarios can be generated. Finally, the optimal solution is estimated by each routing optimization algorithm, followed by a comparison between the two algorithmic approaches on the newly designed collection routes. Furthermore, the proposed interactive design of both approaches has potential application in many other environmental planning and management problems.
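
For readers unfamiliar with ACS, the following compact sketch shows its core loop on a single-vehicle collection tour over a bin-to-bin distance matrix; the pseudo-random-proportional rule and the local/global pheromone updates are the ACS signatures, while truck capacities and live traffic (which the paper's GIS layer supplies) are omitted.

import numpy as np

def acs_route(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0,
              rho=0.1, q0=0.9, seed=0):
    # Compact Ant Colony System sketch for one closed collection tour.
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau0 = 1.0 / (n * dist[dist > 0].mean())      # initial pheromone level
    tau = np.full((n, n), tau0)
    eta = 1.0 / (dist + np.eye(n))                # heuristic desirability
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False                # exclude visited bins
                attract = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                if rng.random() < q0:             # ACS pseudo-random-proportional rule
                    j = int(attract.argmax())
                else:
                    j = int(rng.choice(n, p=attract / attract.sum()))
                tour.append(j)
                tau[i, j] = (1 - rho) * tau[i, j] + rho * tau0   # local update
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = list(tour), length
        for k in range(n):                        # global update on the best tour
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i, j] = (1 - rho) * tau[i, j] + rho / best_len
    return best_tour, best_len

pts = np.random.default_rng(1).random((12, 2))    # 12 synthetic waste-bin locations
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = acs_route(d)
print(tour, round(length, 3))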

45 citations


Journal Article
TL;DR: A useful tool is introduced that can be employed for the sophisticated selection of a defense method against DDoS attacks, and a smart taxonomy of DDoS attacks is proposed to help select an appropriate defense mechanism.
Abstract: A Distributed Denial of Service (DDoS) attack uses multiple machines operating in concert to attack a network or site. It is the most important security problem for IT managers. These attacks are very simple for intruders to organize and hence highly disruptive. The detection of and defense against such attacks has particular importance among network specialists. In this paper a new and smart taxonomy of DDoS attacks and defense mechanisms is introduced. The attack taxonomy is built using both known and potential attack mechanisms. It comprises all types of attacks and provides a comprehensive point of view on DDoS attacks. We introduce a useful tool that can be employed for the sophisticated selection of a defense method against DDoS attacks. Furthermore, a smart taxonomy method for DDoS attacks is proposed to help select an appropriate defense mechanism. This method uses some features of DDoS attacks, classifies them into several clusters with the K-means algorithm, and labels each cluster with a defense mechanism. If an IDS detects a DDoS attack, the proposed system extracts the attack features and classifies them by KNN (K-Nearest-Neighbor) to determine the cluster to which the attack belongs. The defense-mechanism taxonomy uses the currently known approaches. Also, the comprehensive defense classification will help to find the appropriate strategy to overcome the DDoS attack.
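
The cluster-then-classify pipeline the abstract describes maps directly onto standard tooling; here is a sketch with scikit-learn in which the four attack features and the cluster-to-defense labels are hypothetical placeholders (the paper's feature set is not reproduced).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical attack feature vectors (e.g. rate, packet size, source entropy, ...)
rng = np.random.default_rng(0)
attack_features = rng.random((200, 4))

# Off-line phase: cluster known attacks and label each cluster with
# the defense mechanism judged appropriate for it.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(attack_features)
defense_for_cluster = {0: "rate limiting", 1: "ingress filtering",
                       2: "traceback", 3: "anomaly filtering"}

# On-line phase: when an IDS flags an attack, classify its feature
# vector with KNN against the clustered training data.
knn = KNeighborsClassifier(n_neighbors=5).fit(attack_features, kmeans.labels_)
new_attack = rng.random((1, 4))
cluster = knn.predict(new_attack)[0]
print("suggested defense:", defense_for_cluster[cluster])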

38 citations


Journal Article
TL;DR: In this article, the authors used an ANN for estimating reference evapotranspiration (ETo) in the semiarid environment of Burkina Faso.
Abstract: That the well-known Penman-Monteith (PM) equation yields the most accurate estimates of reference evapotranspiration (ETo) among the existing methods is beyond discussion. However, the equation requires climatic data that are not always available, particularly for a developing country such as Burkina Faso. ETo has been widely used for agricultural water management, and its accurate estimation is vitally important for computerizing crop water balance analysis. Therefore, a previous study developed a reference model for Burkina Faso (RMBF) for estimating ETo using only temperature as input in two production sites, Banfora and Ouagadougou. This paper investigates, for the first time in the semiarid environment of Burkina Faso, the potential of using an artificial neural network (ANN) for estimating ETo with a limited climatic data set. The ANN model employed in the study was the feed-forward backpropagation (BP) type, using maximum and minimum air temperature collected from 1996 to 2006. The results of BP were compared to the RMBF, Hargreaves (HRG) and Blaney-Criddle (BCR) models, which have been successfully used for ETo estimation where sufficient data are not available. The results of this study revealed that the BP prediction showed higher accuracy than RMBF, HRG and BCR. The feed-forward backpropagation algorithm could potentially be employed successfully to estimate ETo in semiarid zones.
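
A temperature-only BP model of this kind is straightforward to reproduce in outline; the sketch below trains a small feed-forward backpropagation network on synthetic Tmax/Tmin data (a Hargreaves-like formula stands in for the real 1996-2006 record, and the layer size is arbitrary, not the authors' configuration).

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Tmax/Tmin record (deg C).
rng = np.random.default_rng(1)
tmax = rng.uniform(28, 42, 2000)
tmin = rng.uniform(15, 25, 2000)
# Hargreaves-like target used here only to have a learnable signal.
eto = 0.0023 * ((tmax + tmin) / 2 + 17.8) * np.sqrt(tmax - tmin) * 15.0

X = np.column_stack([tmax, tmin])
X_tr, X_te, y_tr, y_te = train_test_split(X, eto, random_state=0)

# A small feed-forward backpropagation network, as in the paper's BP model.
bp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
bp.fit(X_tr, y_tr)
print("R^2 on held-out data:", bp.score(X_te, y_te))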

34 citations


Journal Article
TL;DR: The paper discusses a fresh approach for parallel crawling of the web using multiple machines, integrates the trivial issues of crawling as well, and uses a three-step algorithm for page refreshment.
Abstract: In this paper, we put forward a technique for parallel crawling of the web. The World Wide Web today is growing at a phenomenal rate. It has enabled a publishing explosion of useful online information, which has produced the unfortunate side effect of information overload. The size of the web as of February 2007 stands at around 29 billion pages. One of the most important uses of crawling the web is for indexing purposes and keeping web pages up-to-date, later used by search engines to serve end-user queries. The paper puts forward an architecture built on the lines of a client-server architecture. It discusses a fresh approach for parallel crawling of the web using multiple machines and also integrates the trivial issues of crawling. A major part of the web is dynamic, and hence a need arises to constantly update the changed web pages. We have used a three-step algorithm for page refreshment. This checks whether the structure of a web page has changed, whether the text content has been altered, and whether an image has changed. For the server, we have discussed a unique method for distribution of URLs to client machines after determination of their priority index. Also, a minor variation to the method of prioritizing URLs on the basis of forward link count has been discussed to accommodate the purpose of frequency of update.
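
The three-step refreshment check can be pictured as three fingerprints per page, one per aspect; the sketch below is an assumption-laden toy (regex-based tag and image extraction), not the paper's algorithm.

import hashlib
import re

def fingerprint(html):
    # Three change fingerprints for a crawled page: one hash over the
    # tag skeleton (structure), one over the text, one over image URLs.
    tags = "".join(re.findall(r"<\s*(\w+)", html))           # opening-tag skeleton
    text = re.sub(r"<[^>]+>", " ", html)                      # strip markup
    imgs = "".join(sorted(re.findall(r'<img[^>]+src="([^"]+)"', html)))
    h = lambda s: hashlib.md5(s.encode()).hexdigest()
    return h(tags), h(text), h(imgs)

old = fingerprint('<html><body><p>hello</p><img src="a.png"></body></html>')
new = fingerprint('<html><body><p>hello world</p><img src="a.png"></body></html>')
changed = [name for name, a, b in zip(("structure", "text", "image"), old, new) if a != b]
print("changed:", changed)   # -> ['text']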

33 citations


Journal Article
TL;DR: A hybrid packet marking algorithm, along with a traceback mechanism to find the true origin of attack traffic, is presented in this study; it is able to trace back a single packet, yet it requires logging at very few routers and thus incurs insignificant storage overhead on the routers.
Abstract: Detecting and defeating Denial of Service (DoS) attacks is one of the hardest security problems on IP networks. Furthermore, spoofing of IP packets makes it difficult to combat and fix such attacks. Packet marking is one of the methods to mitigate DoS attacks; it helps trace back to the true origin of the packets. A hybrid packet marking algorithm, along with a traceback mechanism to find the true origin of the attack traffic, is presented in this study. The router marks the packets with the inbound interface identifier of the router, but the novelty lies in the way it marks the packets. Stamping based on a modulo technique, and a reverse modulo for the reconstruction of the attack path to trace back to the real source of the packets, are proposed. The experimental measurements on the presented algorithm ensure that it requires a small amount of time to mark and reconstruct the attack graph. It is also able to trace back a single packet; nevertheless, it requires logging at very few routers, thus incurring insignificant storage overhead on the routers. A simulation study and a qualitative comparison with different traceback schemes are also presented to show the performance of the proposed system.

33 citations


Journal Article
TL;DR: This paper considers a problem that arises in black box testing: generating small test suites where the combinations that have to be covered are specified by input-output parameter relationships of a software system, and proposes interaction testing, particularly an Orthogonal Array Testing Strategy (OATS).
Abstract: In this paper, we consider a problem that arises in black box testing: generating small test suites (i.e., sets of test cases) where the combinations that have to be covered are specified by input-output parameter relationships of a software system. That is, we only consider combinations of input parameters that affect an output parameter, and we do not assume that the input parameters have the same number of values. To solve this problem, we propose interaction testing, particularly an Orthogonal Array Testing Strategy (OATS), as a systematic, statistical way of testing pair-wise interactions. In the software testing process (STP), it provides a natural mechanism for testing systems to be deployed on a variety of hardware and software configurations. The combinatorial approach to software testing uses models to generate a minimal number of test inputs so that selected combinations of input values are covered. The most common coverage criterion is two-way or pairwise coverage of value combinations, though for higher confidence three-way or higher coverage may be required. This paper presents some examples of software-system test requirements and corresponding models for applying the combinatorial approach to those test requirements. The method bridges contributions from mathematics, design of experiments, software testing, and algorithms for application to usability testing. This study also presents a brief overview of the response surface methods (RSM) for computer experiments available in the literature. The Bayesian approach and orthogonal arrays constructed for computer experiments (OACE) are briefly discussed. An example of a novel OACE application to an STP optimization study is also given. In this case study, an orthogonal array for computer experiments was utilized to build a second-order response surface model. Gradient-based optimization algorithms could not be utilized in this case study since the design variables were discrete-valued. Using the novel OACE approach, the optimum combination of software defect detection technique choices for every software development phase that maximizes the overall Defect Detection Effectiveness of the STP was determined.
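
True orthogonal arrays balance every pair of values evenly, but the size savings of pairwise coverage are easy to demonstrate with a simple greedy generator; this sketch (not the paper's construction) covers all value pairs of a 3 x 3 x 2 parameter model with far fewer than the 18 exhaustive combinations.

from itertools import combinations, product

def pairwise_suite(params):
    # Greedy pairwise (2-way) suite: repeatedly pick the full combination
    # that covers the most not-yet-covered parameter-value pairs.
    names = list(params)
    uncovered = {((i, v1), (j, v2))
                 for i, j in combinations(range(len(names)), 2)
                 for v1 in params[names[i]] for v2 in params[names[j]]}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for combo in product(*params.values()):
            pairs = {((i, combo[i]), (j, combo[j]))
                     for i, j in combinations(range(len(combo)), 2)}
            gain = len(pairs & uncovered)
            if gain > best_gain:
                best, best_gain = combo, gain
        suite.append(dict(zip(names, best)))
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return suite

tests = pairwise_suite({"os": ["linux", "win", "mac"],
                        "db": ["mysql", "pg", "sqlite"],
                        "net": ["lan", "wifi"]})
print(len(tests), "tests cover all parameter-value pairs")  # far fewer than 18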

32 citations


Journal Article
TL;DR: This paper proposes a tender/contract-net model for Grid resource allocation, showing the interactions among the involved actors and the performance of the proposed market-based approach is experimentally compared with a round-robin allocation protocol.
Abstract: Grid scheduling, that is, the allocation of distributed computational resources to user applications, is one of the most challenging and complex tasks in Grid computing. The problem of allocating resources in Grid scheduling requires the definition of a model that allows local and external schedulers to communicate in order to achieve an efficient management of the resources themselves. To this aim, some economic/market-based models have been introduced in the literature, where users, external schedulers, and local schedulers negotiate to optimize their objectives. In this paper, we propose a tender/contract-net model for Grid resource allocation, showing the interactions among the involved actors. The performance of the proposed market-based approach is experimentally compared with a round-robin allocation protocol.
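
The tender/contract-net interaction (announce, bid, award) and the round-robin baseline can both be sketched in a few lines; the bid function below (queue backlog plus task time) is an illustrative cost model, not the paper's.

import random

class Resource:
    # A local scheduler that bids on announced tasks.
    def __init__(self, name, speed):
        self.name, self.speed, self.queue = name, speed, 0.0
    def bid(self, task_size):
        return self.queue + task_size / self.speed   # estimated completion time
    def award(self, task_size):
        self.queue += task_size / self.speed

def contract_net(resources, tasks):
    # Announce each task, collect bids, award to the cheapest bidder.
    for size in tasks:
        winner = min(resources, key=lambda r: r.bid(size))
        winner.award(size)
    return {r.name: round(r.queue, 1) for r in resources}

def round_robin(resources, tasks):
    for i, size in enumerate(tasks):
        resources[i % len(resources)].award(size)
    return {r.name: round(r.queue, 1) for r in resources}

tasks = [random.uniform(1, 10) for _ in range(20)]
print("contract-net:", contract_net([Resource("fast", 4), Resource("slow", 1)], tasks))
print("round-robin :", round_robin([Resource("fast", 4), Resource("slow", 1)], tasks))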

Journal ArticleDOI
TL;DR: All six universally recognized basic emotions, namely anger, disgust, fear, happiness, sadness and surprise, along with a neutral one, are recognized in this research.
Abstract: This research aims at developing "Humanoid Robots" that can carry out intellectual conversation with human beings. The first step in this direction is to recognize human emotions by a computer using a neural network. In this paper all six universally recognized basic emotions, namely anger, disgust, fear, happiness, sadness and surprise, along with a neutral one, are recognized. Various feature extraction techniques such as the Discrete Cosine Transform (DCT), Fast Fourier Transform (FFT) and Singular Value Decomposition (SVD) are used to extract useful features for emotion recognition from facial expressions. A Support Vector Machine (SVM) is used for emotion recognition on the extracted facial features, and the performance of the various feature extraction techniques is compared. The authors achieved 100% recognition accuracy on the training dataset and 94.29% on the cross-validation dataset.
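
The DCT-plus-SVM pipeline is the easiest of the three feature extractors to sketch; below, the low-frequency block of the 2-D DCT of a face image serves as the feature vector. The random images stand in for a labeled facial-expression dataset; this is not the authors' data or tuning.

import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(face_img, k=8):
    # Keep the top-left k x k block of the 2-D DCT as a compact feature
    # vector; low-frequency coefficients carry most of the image energy.
    coeffs = dctn(face_img, norm="ortho")
    return coeffs[:k, :k].ravel()

# Synthetic stand-in: 7 classes (six basic emotions plus neutral), 20 samples each.
rng = np.random.default_rng(0)
X = np.array([dct_features(rng.random((64, 64))) for _ in range(140)])
y = np.repeat(np.arange(7), 20)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))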

Journal ArticleDOI
TL;DR: This algorithm can quickly and correctly recognize the number plate from the vehicle image and makes the extraction of the plate independent of color, size and location of number plate.
Abstract: This paper presents a method for recognition of the vehicle number plate from an image using neural nets and mathematical morphology. The main theme is to use different morphological operations in such a way that the number plate of the vehicle can be extracted efficiently. The method makes the extraction of the plate independent of the color, size and location of the number plate. The proposed approach can be divided into simple processes: image enhancement, morphing transformation, morphological gradient, combination of the resultant images, and extraction of the number plate from the objects that are left in the image. Then segmentation is applied to recognize the plate using a neural network. This algorithm can quickly and correctly recognize the number plate from the vehicle image.
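
A plausible rendering of that morphological pipeline in OpenCV is sketched below; the kernel sizes and the plate-like aspect-ratio filter are illustrative thresholds, not the paper's values.

import cv2

def plate_candidates(bgr_img):
    # Enhance, take the morphological gradient to pop out dense plate-text
    # edges, close and threshold, then keep wide plate-shaped components.
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                                  # image enhancement
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)      # morphological gradient
    _, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                              cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3)))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:                 # plate-like aspect ratio
            boxes.append((x, y, w, h))
    return boxes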

Journal Article
TL;DR: The originality of the method resides in the new technique used to estimate the homography of the plane at infinity by minimizing a non-linear cost function based on a particular motion of the camera, "translation and small rotation".
Abstract: In this article, we are interested in camera self-calibration from three views of a 3-D scene. The originality of our method resides in the new technique used to estimate the homography of the plane at infinity by minimizing a non-linear cost function based on a particular motion of the camera, "translation and small rotation". Our approach also permits the calculation of the camera parameters and the depths of the interest points detected in the images. Experimental results demonstrate the performance of our algorithms in terms of precision and convergence.

Journal Article
TL;DR: In this paper, the authors used ANNs to estimate the suspended sediment concentration (SSC) at the Jiasian diversion weir in southern Taiwan during storm events and found that the ANN models are more reliable than the classical regression method for estimating the SSC in the area studied herein.
Abstract: This paper is concerned with monitoring the hourly event-based river suspended sediment concentration (SSC) due to storms at the Jiasian diversion weir in southern Taiwan. The weir is built to supply an average of 0.3 million tons of water per day for civil and industrial use. Information on the suspended sediment fluxes of rivers is crucial for monitoring the quality of water. The issue of water quality is of particular importance to the Jiasian area, where there are high population densities and intensive agricultural activities. Therefore, this study explores the potential of using artificial neural networks (ANNs) for modeling the event-based SSC for continuous monitoring of river water quality. The data collected include the hourly water discharge, turbidity and SSC during the storm events. The feed-forward backpropagation network (BP), generalized regression neural network (GRNN), and classical regression were employed to test their performances. From the statistical evaluation, it has been found that the performance of BP was slightly better than the GRNN model. In addition, the classical regression performance was inferior to the ANNs. Statistically, it appeared that both the BP (r2=0.930) and GRNN (r2=0.927) models fit well for estimating the event-based SSC at the Jiasian diversion weir. The weir SSC estimation using single-input data with the neural networks showed the dominance of the turbidity variable over water discharge. Furthermore, the ANN models are more reliable than the classical regression method for estimating the SSC in the area studied herein.

Journal Article
TL;DR: In order to design UML models, a meta-modelling approach is explored and a transformation of that kind using the Arena simulation environment is discussed.
Abstract: While developing new business systems and reengineering already existing ones, many organizations use the Unified Modelling Language (UML) to design a system's structure and describe the system's behaviour. Although the system's behaviour is described with the UML model, the model itself is static: UML does not provide a possibility of running the model and studying the system's behaviour. In such a situation the necessity arises to simulate the UML model. To provide that possibility, the designed UML diagrams can be transformed into a simulation model to be run within a specialized simulation environment. The paper discusses a transformation of that kind using the Arena simulation environment. In order to design UML models, a meta-modelling approach is explored.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the problem of detecting and defeating Denial of Service (DoS) attacks on IP networks and how spoofing of IP packets makes it difficult to combat against and fix such attacks.
Abstract: Detecting and defeating Denial of Service (DoS) attacks is one of the hardest security problems on IP networks. Furthermore, spoofing of IP packets makes it difficult to combat against and fix such...

Journal Article
TL;DR: The aim of this paper is to determine the level of compliance of the AGIT model, developed during previous research for measuring Scrum-based software development, with information systems auditing criteria, using the COBIT model.
Abstract: The aim of this paper is to determine the level of compliance of the AGIT model, developed during our previous research for measuring Scrum-based software development, with information systems auditing criteria. For this purpose we use the COBIT model. After a short introduction to Scrum, AGIT and COBIT, we perform a comparison analysis of their indicators for software development. Then we upgrade the AGIT model with the selected COBIT indicators. In order to improve the clarity of the model, we present its structure using the IT Balanced Scorecard. Finally, we suggest possible further research.

Journal Article
TL;DR: A valid computer-aided visual defect inspection system is contributed to help meet the quality control needs of LED chip manufacturers; the proposed WPCA method outperforms other methods.
Abstract: Automated visual inspection, a crucial manufacturing step, has been replacing the more time-consuming and less accurate human inspection. This research explores automated visual inspection of surface defects in a light-emitting diode (LED) chip. Commonly found on the chip surface are water-spot blemishes, which impair the appearance and functionality of LEDs. Automated inspection of water-spot defects is difficult because they have a semi-opaque appearance and a low intensity contrast with the rough exterior of the LED chip. Moreover, a defect may fall across two different background textures, which further increases detection difficulties. The one-level Haar wavelet transform is first used to decompose a chip image and extract four wavelet characteristics. Then, wavelet-based principal component analysis (WPCA) and wavelet-based Hotelling statistic (WHS) approaches are respectively applied to integrate the multiple wavelet characteristics. Finally, the principal component analysis of WPCA and the Hotelling control limit of WHS individually judge the existence of defects. Experimental results show that the proposed WPCA method achieves detection rates above 93.8% and false alarm rates below 3.6%, and outperforms other methods. A valid computer-aided visual defect inspection system is contributed to help meet the quality control needs of LED chip manufacturers.
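
The WPCA fusion step can be sketched with PyWavelets and a plain SVD: decompose once with the Haar wavelet, stack the four sub-band responses as columns, and project them onto their first principal component. Normalization and the control-limit logic follow the paper only loosely.

import numpy as np
import pywt

def wavelet_pca_score(img_block):
    # One-level Haar decomposition yields four sub-bands (LL, LH, HL, HH);
    # PCA fuses them into one composite map whose extremes flag defects.
    ll, (lh, hl, hh) = pywt.dwt2(img_block.astype(float), "haar")
    feats = np.column_stack([b.ravel() for b in (ll, lh, hl, hh)])
    feats -= feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    fused = feats @ vt[0]                     # projection onto the first PC
    return fused.reshape(ll.shape)

rng = np.random.default_rng(0)
chip = rng.normal(0.5, 0.02, (64, 64))
chip[20:30, 20:30] += 0.2                     # simulated water-spot blemish
score = wavelet_pca_score(chip)
print("max |response| at:", np.unravel_index(np.abs(score).argmax(), score.shape))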

Journal Article
TL;DR: The paper briefly presents the situation of the Romanian universities regarding information systems implementation and deployment.
Abstract: The paper briefly presents the situation of the Romanian universities regarding information systems implementation and deployment. The information presented is the result of a study regarding the current state of the Romanian universities in the process of data and information system integration, performed at the end of 2007 in 35 accredited universities. This study was used as a basis for identifying and analyzing the main factors of influence for developing an integrated university environment and for identifying concrete action directions for accomplishing that integration.

Journal Article
TL;DR: An overview of steganography on the GIF image format is presented in order to explore the potential of GIF in information hiding research, and the enhancement of Least Significant Bit (LSB) insertion techniques from the most basic and conventional 1-bit form to the LSB colour cycle method is explained.
Abstract: Protected and encrypted data sent electronically are vulnerable to various attacks, such as spyware and attempts at breaking and revealing the data. Thus, steganography was introduced to conceal a secret message in an unsuspicious cover medium so that it can be sent safely through a public communication channel. Suspicion is the significant key determinant in the field of steganography; in other words, an efficient steganographic algorithm will not cause any suspicion after the hidden data are embedded. This paper presents an overview of steganography on the GIF image format in order to explore the potential of GIF in information hiding research. A platform named StegCure is proposed that performs steganography using an amalgamation of three different Least Significant Bit (LSB) insertion algorithms. This paper explains the enhancement of LSB insertion techniques from the most basic and conventional 1-bit form to the LSB colour cycle method. Various kinds of existing steganographic methods are discussed, and some inherent problems are highlighted along with some issues with existing solutions. In comparison with other data hiding applications, StegCure is a more comprehensive security utility: it offers user-friendly functionality with an interactive graphical user interface and integrated navigation capabilities. Furthermore, in order to sustain a higher level of security, StegCure implements a Public Key Infrastructure (PKI) mechanism at both the sender and receiver sites. With this feature, StegCure manages to restrict any unauthorized user from retrieving the secret message through trial and error. In addition, we highlight a few aspects of LSB methods in image steganography. At the end of the paper, the evaluation results of the hybrid method in StegCure are presented. Future work will focus on the assimilation of more diversified methods into a whole gamut of steganography systems and on robustness against steganalysis.
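
The baseline 1-bit LSB insertion that StegCure builds on fits in a dozen lines; the sketch below works on a grayscale byte array for clarity (for GIF, the same bit twiddling is applied to palette indices), and the PKI layer and colour-cycle variant are not shown.

import numpy as np

def lsb_embed(pixels, message):
    # Write each message bit into the least significant bit of successive bytes.
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.ravel().copy()
    assert bits.size <= flat.size, "cover image too small"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, n_chars):
    bits = pixels.ravel()[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, "secret")
assert lsb_extract(stego, 6) == "secret"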

Journal Article
TL;DR: The generation of the test suite for basis path testing of WS-BPEL and an accompanying tool that can be used by service testers are discussed and a testing process for service testers is presented.
Abstract: Web services technology offers the WS-BPEL language for business process execution. The building blocks of WS-BPEL are those Web service components that collaborate to realize a certain function of the business process. Applications can now be built more easily by composing existing Web services into workflows; each workflow itself is also considered a composite Web service. As with other programs, basis path testing can be conducted on WS-BPEL processes in order to verify the execution of every node of the workflow. This paper discusses the generation of the test suite for basis path testing of WS-BPEL and an accompanying tool that can be used by service testers. The test suite consists of test cases, stubs of the constituent services in the workflow, and auxiliary state services that assist in the test; these are deployed when running a test on a particular WS-BPEL process. The paper also presents a testing process for service testers. A business process of a marketplace is discussed as a case study.
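
The size of a basis-path suite comes from McCabe's cyclomatic complexity of the workflow's control-flow graph; a tiny worked example (the node and edge names are hypothetical):

def cyclomatic_complexity(edges, nodes):
    # McCabe's V(G) = E - N + 2 for a connected control-flow graph; it is
    # the number of basis paths, hence the number of test cases a
    # basis-path suite for a WS-BPEL workflow needs.
    return len(edges) - len(nodes) + 2

# Tiny workflow: receive -> if -> (invokeA | invokeB) -> reply
nodes = ["receive", "if", "invokeA", "invokeB", "reply"]
edges = [("receive", "if"), ("if", "invokeA"), ("if", "invokeB"),
         ("invokeA", "reply"), ("invokeB", "reply")]
print(cyclomatic_complexity(edges, nodes))   # -> 2 basis paths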

Journal ArticleDOI
Nancy P. Lin, Chung-I Chang, Hao-En Chueh, Hung-Jen Chen, Wei-Hua Hao
TL;DR: The main idea of DGD algorithm is to deflect the original grid structure in each dimension of the data space after the clusters generated from this original structure have been obtained, which can be considered a dynamic adjustment of the size of the original cells.
Abstract: The grid-based clustering algorithm, which partitions the data space into a finite number of cells to form a grid structure and then performs all clustering operations on this grid structure, is an efficient clustering algorithm, but its effect is seriously influenced by the size of the cells. To cluster efficiently and, at the same time, reduce the influence of the size of the cells, a new grid-based clustering algorithm, called DGD, is proposed in this paper. The main idea of the DGD algorithm is to deflect the original grid structure in each dimension of the data space after the clusters generated from the original structure have been obtained. The deflected grid structure can be considered a dynamic adjustment of the size of the original cells, and thus the clusters generated from the deflected grid structure can be used to revise the originally obtained clusters. The experimental results verify that the effect of the DGD algorithm is indeed less influenced by the size of the cells than that of other grid-based algorithms.
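
The base procedure DGD starts from, and the deflection it adds, can be sketched as follows for 2-D points: bin points into cells, keep dense cells, flood-fill neighboring dense cells into clusters, then re-run with a half-cell grid offset. The revision step that reconciles the two clusterings is not reproduced, and the density threshold is a placeholder.

import numpy as np
from collections import defaultdict

def grid_clusters(points, cell_size, density_threshold=3, offset=0.0):
    # Basic 2-D grid-based clustering; `offset` deflects (shifts) the grid.
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        key = tuple(((p + offset) // cell_size).astype(int))
        cells[key].append(idx)
    dense = {k for k, members in cells.items() if len(members) >= density_threshold}
    clusters, seen = [], set()
    for k in dense:
        if k in seen:
            continue
        stack, comp = [k], []                  # flood-fill over adjacent dense cells
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            comp.extend(cells[c])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (c[0] + dx, c[1] + dy)
                    if nb in dense and nb not in seen:
                        stack.append(nb)
        clusters.append(comp)
    return clusters

pts = np.vstack([np.random.default_rng(0).normal(0, 0.5, (50, 2)),
                 np.random.default_rng(1).normal(5, 0.5, (50, 2))])
print([len(c) for c in grid_clusters(pts, cell_size=1.0)])
# A half-cell offset deflects the grid, as in DGD's adjustment step.
print([len(c) for c in grid_clusters(pts, cell_size=1.0, offset=0.5)])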

Journal Article
TL;DR: Two very efficient combinatorial Monte Carlo models for evaluating network reliability importance measures are presented.
Abstract: In this paper we focus on the computational aspects of network reliability importance measure evaluation. It is a well-known fact that most network reliability problems are NP-hard, and therefore there is a significant gap between theoretical analysis and the ability to compute different reliability parameters for large or even moderate networks. In this paper we present two very efficient combinatorial Monte Carlo models for evaluating network reliability importance measures.
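
The paper's combinatorial estimators are not reproduced here, but the baseline they improve on, crude Monte Carlo sampling of component states, is a few lines; importance measures such as Birnbaum's can then be estimated by conditioning the same samples on an edge being up or down.

import random

def mc_reliability(nodes, edges, p_up, samples=20000, seed=0):
    # Crude Monte Carlo estimate of two-terminal reliability: sample edge
    # states, then check s-t connectivity with a breadth-first search.
    rng, s, t = random.Random(seed), nodes[0], nodes[-1]
    hits = 0
    for _ in range(samples):
        up = [e for e in edges if rng.random() < p_up]
        adj = {n: [] for n in nodes}
        for a, b in up:
            adj[a].append(b)
            adj[b].append(a)
        frontier, seen = [s], {s}
        while frontier:
            n = frontier.pop()
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    frontier.append(m)
        hits += t in seen
    return hits / samples

# A 4-node "bridge" network between node 0 and node 3
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(mc_reliability([0, 1, 2, 3], edges, p_up=0.9))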

Journal ArticleDOI
TL;DR: This paper presents an overview of the methodologies and algorithms for segmenting 2D images as a means in detecting target objects embedded in visual images for an Automatic Target Detection application.
Abstract: This paper presents an overview of the methodologies and algorithms for segmenting 2D images as a means in detecting target objects embedded in visual images for an Automatic Target Detection application.

Journal ArticleDOI
TL;DR: In this article, an Optical Character Recognition (OCR) system for printed text documents in Kannada, a South Indian language, is described, with the level of accuracy reaching 100%.
Abstract: This paper describes an Optical Character Recognition (OCR) system for printed text documents in Kannada, a South Indian language. The proposed OCR system recognizes printed Kannada text and can handle all types of Kannada characters. The system first extracts the image of the Kannada script, segments it into lines, and then segments the words into sub-character-level pieces. For character recognition we have used a database approach. The level of accuracy reached 100%.
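
The line-segmentation step the abstract mentions is commonly done by horizontal projection; the following generic sketch (not the paper's Kannada-specific pipeline) splits a binarized page wherever a run of ink-free rows occurs.

import numpy as np

def segment_lines(binary_page):
    # Horizontal projection: count ink pixels per row; zero-count rows
    # separate consecutive text lines.
    ink = (binary_page > 0).sum(axis=1)
    lines, start = [], None
    for row, count in enumerate(ink):
        if count and start is None:
            start = row                      # a text line begins
        elif not count and start is not None:
            lines.append((start, row))       # the line ends at a blank row
            start = None
    if start is not None:
        lines.append((start, binary_page.shape[0]))
    return lines

page = np.zeros((30, 40), dtype=np.uint8)
page[3:8, 5:35] = 1                          # two synthetic text lines
page[15:21, 5:35] = 1
print(segment_lines(page))                   # -> [(3, 8), (15, 21)]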

Journal Article
TL;DR: It is proved that 4×4 switches have better performance/cost ratios than 2×2 switches, and a new fault-tolerant advanced Omega network built from them is proposed and analyzed.
Abstract: Performance and fault tolerance are two crucial factors in designing interconnection networks for a multiprocessor system. A new type of MIN, the Fault Tolerant Advanced Omega Network, which uses 4×4 switches, is proposed on the basis of the ideas of the Delta network and the Omega network. In this paper, it is proved that 4×4 switches have better performance/cost ratios than 2×2 switches at the current level of VLSI technology. The paper expounds the network's topological properties and routing algorithm and makes performance/cost ratio comparisons. A mathematical analysis approach is used to find the probability of acceptance and the bandwidth as the traffic changes. Furthermore, a reliability analysis of the Fault-Tolerant Advanced Omega Network (FTAON) is discussed in detail. It is seen that FTAON is more reliable and cost effective than other previously proposed MINs of a similar class. It has also been observed that it has fault-tolerant and nonblocking capability in complex parallel systems.
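
The probability-of-acceptance analysis for unbuffered delta-class MINs is classical (Patel's model): each b×b stage turns a per-input request rate p into 1 - (1 - p/b)^b on each output. The sketch below applies this generic recurrence, not the paper's FTAON-specific analysis, to compare three stages of 4×4 switches against six stages of 2×2 switches for a 64-input network.

def acceptance_probability(p_request, switch_size, stages):
    # Iterate the per-stage output rate p' = 1 - (1 - p/b)^b through all
    # stages; dividing by the input rate gives the probability of acceptance.
    p = p_request
    for _ in range(stages):
        p = 1.0 - (1.0 - p / switch_size) ** switch_size
    return p / p_request

print("4x4:", round(acceptance_probability(1.0, 4, 3), 3))
print("2x2:", round(acceptance_probability(1.0, 2, 6), 3))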

Journal Article
TL;DR: The results and comparisons between empirical and simulated data are intended to assist in the design, future studies and deployment of WSNs in the real world.
Abstract: Wireless Sensor Networks (WSNs) are an important field of study, as more and more applications are enhancing daily life. The technology trend is to achieve small-sized, cheap, and power-efficient sensor nodes, which will make the system reliable and efficient. The Crossbow Technologies MICAz mote is the example used in this paper. Measurements of its propagation characteristics in a realistic environment will help the deployment and installation of these motes to form a WSN. The CST Microwave Studio is used to build a simulation of the MICAz. The results and comparisons between empirical and simulated data are intended to assist in the design, future studies and deployment of WSNs in the real world.
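
When checking measured mote propagation against simulation, a log-distance path-loss model is the usual yardstick; the sketch below is generic, and the reference loss and exponent are placeholders, not values fitted in the paper.

import math

def path_loss_db(d, d0=1.0, pl0=40.0, n=2.7):
    # Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0),
    # with placeholder reference loss pl0 (dB) and exponent n.
    return pl0 + 10.0 * n * math.log10(d / d0)

# Received power at 10 m for a 0 dBm transmitter
print(0.0 - path_loss_db(10.0), "dBm")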

Journal Article
TL;DR: This paper introduces an example of a new concept in Computer Aided Instruction (CAI) resources, i.e. a tutorial designed under the eMathTeacher philosophy for actively eLearning Mamdani's Direct Method, and presents a brief survey of available CAI resources, discussing their influence on students' behaviour.
Abstract: An eMathTeacher [16] is an eLearning on-line self-assessment tool that helps users actively learn math concepts and algorithms by themselves, correcting their mistakes and providing them with clues to find the right solution. This paper introduces an example of a new concept in Computer Aided Instruction (CAI) resources, i.e. a tutorial designed under the eMathTeacher philosophy for actively eLearning Mamdani's Direct Method, and presents a brief survey of available CAI resources, discussing their influence on students' behaviour. It also describes the minimum and complementary requirements an eLearning tool must fulfil to be considered an eMathTeacher, as well as the main contributions of this kind of tutorial to the learning process. Needless to say, such features as interactivity, visualization and simplicity turn these tools into pedagogical instruments of great value.
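
For readers who have not met Mamdani's Direct Method, the whole inference cycle (fuzzify, min for AND and implication, max for aggregation, centroid defuzzification) fits in a short function; the membership functions and the two rules below are illustrative placeholders, not the tutorial's.

import numpy as np

def tri(z, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return np.maximum(0.0, np.minimum((z - a) / (b - a), (c - z) / (c - b)))

def mamdani(temp, humidity):
    # Two rules, min-AND, min-implication, max-aggregation, centroid.
    fan = np.linspace(0.0, 10.0, 201)             # output universe: fan speed

    # Rule 1: IF temp is hot AND humidity is high THEN fan is fast
    w1 = min(tri(temp, 20, 35, 50), tri(humidity, 40, 80, 120))
    # Rule 2: IF temp is cool THEN fan is slow
    w2 = tri(temp, -10, 5, 25)

    fast = tri(fan, 5, 8, 11)
    slow = tri(fan, -1, 2, 5)
    aggregated = np.maximum(np.minimum(w1, fast),  # clip each consequent,
                            np.minimum(w2, slow))  # then take the union
    return (fan * aggregated).sum() / aggregated.sum()   # centroid defuzzification

print(round(mamdani(temp=30.0, humidity=70.0), 2))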

Journal Article
TL;DR: An ontology-based service-oriented approach to problem-solving in e-government is proposed in the Semantic Grid, enabling the publication, discovery and reuse of services, in an open, dynamic, loosely coupled and scalable manner, to connect the customers and agencies of e-government services based on their semantic similarities in terms of problem-solving capabilities.
Abstract: Nowadays, there is a growing number of e-government portals and solutions that provide integrated governmental e-services to customers (citizens, enterprises or other public sectors). However, the administration and interoperability of distributed e-government nodes face increasing challenges caused by service-oriented modeling difficulties and ontological issues in distributed computing, resource integration and knowledge sharing over heterogeneous computing platforms. To overcome this, a Semantic Grid infrastructure is presented in this paper for distributed management of e-government resources across ubiquitous virtual governmental agencies. An ontology-based service-oriented approach to problem-solving in e-government is proposed in the Semantic Grid, enabling the publication, discovery and reuse of services, in an open, dynamic, loosely coupled and scalable manner, to connect the customers and agencies of e-government services based on their semantic similarities in terms of problem-solving capabilities. The operation of the system is demonstrated using Protege-2000, a widely accepted ontology modeling tool, to validate the implementation of the proposed approach towards effective ontological maintenance.