
Showing papers in "Journal of Computer Science in 2007"


Journal ArticleDOI
TL;DR: An imperceptible and robust combined DWT-DCT digital image watermarking algorithm that watermarks a given digital image using a combination of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT).
Abstract: The proliferation of digitized media due to the rapid growth of networked multimedia systems has created an urgent need for copyright enforcement technologies that can protect copyright ownership of multimedia objects. Digital image watermarking is one such technology, developed to protect digital images from illegal manipulation. In particular, digital image watermarking algorithms based on the discrete wavelet transform have been widely recognized as more prevalent than others. This is due to the wavelets' excellent spatial localization, frequency spread and multi-resolution characteristics, which are similar to the theoretical models of the human visual system. In this paper, we describe an imperceptible and robust combined DWT-DCT digital image watermarking algorithm. The algorithm watermarks a given digital image using a combination of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT). Performance evaluation results show that combining the two transforms improves on the performance of watermarking algorithms based solely on the DWT.
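The combined transform is straightforward to prototype. Below is a minimal Python sketch, assuming a 1-level Haar DWT, a DCT of one detail sub-band and a simple additive spread-spectrum rule; the strength alpha, the band choice and the modulation rule are illustrative assumptions, not the authors' exact scheme.

```python
# Minimal DWT-then-DCT watermark embedding sketch (not the paper's exact rule).
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_watermark(image, watermark_bits, alpha=0.05):
    # 1-level 2-D DWT: cA is the approximation band, (cH, cV, cD) the details.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    # DCT of the horizontal-detail band (an assumed mid-frequency choice).
    coeffs = dctn(cH, norm='ortho')
    flat = coeffs.ravel()
    # Additively modulate the leading AC coefficients with +/-1 bits.
    bits = 2 * np.asarray(watermark_bits, dtype=float) - 1
    flat[1:1 + len(bits)] += alpha * np.abs(flat[1:1 + len(bits)]) * bits
    cH_marked = idctn(flat.reshape(coeffs.shape), norm='ortho')
    # Inverse DWT reconstructs the watermarked image.
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
```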

319 citations


Journal ArticleDOI
TL;DR: The system shows high classification effectiveness for an Arabic data set in terms of F-measure (F = 88.11) and uses the CHI square method for feature selection in the pre-processing step of the text classification system design procedure.
Abstract: This paper aims to implement a Support Vector Machines (SVMs) based text classification system for Arabic language articles. The classifier uses the CHI square method for feature selection in the pre-processing step of the text classification system design procedure. Compared to other classification methods, our system shows high classification effectiveness for the Arabic data set in terms of F-measure (F = 88.11).
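This design maps directly onto a standard machine-learning pipeline. The sketch below is a minimal recreation using scikit-learn, assuming a list of article strings `texts` with labels `y` and an illustrative k = 1000 selected features; it is not the authors' implementation.

```python
# Chi-square feature selection feeding a linear SVM (illustrative parameters).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

clf = Pipeline([
    ('tfidf', TfidfVectorizer()),            # bag-of-words term weighting
    ('chi2', SelectKBest(chi2, k=1000)),     # keep top-k features by chi-square
    ('svm', LinearSVC()),                    # linear SVM classifier
])
# Macro-averaged F-measure over cross-validation folds.
scores = cross_val_score(clf, texts, y, scoring='f1_macro', cv=5)
```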

235 citations


Journal ArticleDOI
TL;DR: Routing protocols used in wired networks cannot be used for mobile ad-hoc networks because of node mobility; ad-hoc routing protocols are instead divided into two classes, table-driven and demand-based.
Abstract: Mobile ad-hoc networks (MANET) represent complex distributed systems comprising wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary ad-hoc network topologies, allowing people and devices to seamlessly internetwork in areas with no preexisting communication infrastructure, e.g., disaster recovery environments. The ad-hoc network is not a new concept, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad-hoc paradigm. Recently, the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping to enable eventual commercial MANET deployments outside the military domain. These recent developments have been generating renewed and growing interest in the research and development of MANET. To facilitate communication within the network, a routing protocol is used to discover routes between nodes. The goal of the routing protocol is efficient route establishment between a pair of nodes, so that messages can be delivered in a timely manner. Bandwidth and power constraints are important factors in current wireless networks because multi-hop ad-hoc wireless networking relies on each node to act as a router and packet forwarder. This dependency places bandwidth, power and computation demands on the mobile host that must be taken into account when choosing the protocol. Routing protocols used in wired networks cannot be used for mobile ad-hoc networks because of node mobility. Ad-hoc routing protocols are divided into two classes: table-driven and demand-based. This paper reviews and discusses routing protocols belonging to each category.

205 citations


Journal ArticleDOI
TL;DR: The algorithm can efficiently embed a large amount of data, up to 75% of the image size, with high output quality, and is compared with previous steganography algorithms such as S-Tools.
Abstract: This study deals with constructing and implementing a new algorithm based on hiding a large amount of data (image, audio, text) in a color BMP image. We used adaptive image filtering and adaptive image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in a pixel. This concept is based on both visual and statistical properties. Following the design steps, we derived 16 main cases with their sub-cases, covering all aspects of embedding the input data into a color bitmap image. High security is provided through three layers, making it difficult to break the encryption of the input data and confusing steganalysis as well. Our results against statistical and visual attacks are discussed and compared with previous steganography algorithms such as S-Tools. We show that our algorithm can efficiently embed a large amount of data, up to 75% of the image size, with high output quality.
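The core embedding step, randomized least-significant-bit replacement, is compact to sketch. The Python below is a minimal illustration assuming numpy, Pillow and a shared PRNG seed acting as the stego key; the paper's 16-case adaptive segmentation, filtering and three security layers are not reproduced.

```python
# Randomized LSB embedding in an RGB bitmap (illustrative, key = PRNG seed).
import numpy as np
from PIL import Image

def embed(cover_path, payload_bits, seed, out_path):
    img = np.array(Image.open(cover_path).convert('RGB'))
    flat = img.reshape(-1)                  # one byte per channel sample
    rng = np.random.default_rng(seed)
    # Pick random, non-repeating byte positions instead of sequential ones.
    positions = rng.choice(flat.size, size=len(payload_bits), replace=False)
    bits = np.asarray(payload_bits, dtype=np.uint8)
    flat[positions] = (flat[positions] & 0xFE) | bits   # replace the LSB
    Image.fromarray(flat.reshape(img.shape)).save(out_path, format='BMP')
```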

172 citations


Journal ArticleDOI
TL;DR: It was found that ML performed the best, followed by ANN, DT and SAM, with accuracies of 86%, 84%, 51% and 49% respectively.
Abstract: Several classification algorithms for pattern recognition were tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of the Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best, followed by ANN, DT and SAM, with accuracies of 86%, 84%, 51% and 49% respectively.
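Of the four classifiers, SAM is simple enough to state in a few lines: it assigns each pixel to the class whose reference spectrum subtends the smallest angle. A minimal numpy sketch follows; the reference spectra `refs` are assumed to be means of training pixels.

```python
# Spectral Angle Mapper: classify pixel spectra by angle to class references.
import numpy as np

def sam_classify(pixels, refs):
    # pixels: (n_pixels, n_bands); refs: (n_classes, n_bands)
    # Spectral angle = arccos of cosine similarity between spectra.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))
    return np.argmin(angles, axis=1)   # smallest angle wins
```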

170 citations


Journal ArticleDOI
TL;DR: The analysis and experiments show that the PETS algorithm substantially outperforms existing scheduling algorithms such as Heterogeneous Earliest Finish Time (HEFT), Critical Path on a Processor (CPOP) and Levelized Min Time (LMT) in terms of schedule length ratio, speedup, efficiency, running time and frequency of best results.
Abstract: A heterogeneous computing environment is a suite of heterogeneous processors interconnected by high-speed networks, thereby promising high-speed processing of computationally intensive applications with diverse computing needs. Scheduling of an application modeled by a Directed Acyclic Graph (DAG) is a key issue when aiming at high performance in this kind of environment. The problem is generally addressed in terms of task scheduling, where tasks are the schedulable units of a program. The task scheduling problem has been shown to be NP-complete in general as well as in several restricted cases. In this study we present a simple scheduling algorithm based on list scheduling, namely the low-complexity Performance Effective Task Scheduling (PETS) algorithm for heterogeneous computing systems, with complexity O(e(p + log v)) for e edges, v tasks and p processors, which provides effective results for applications represented by DAGs. The analysis and experiments, based on both randomly generated graphs and graphs of some real applications, show that the PETS algorithm substantially outperforms existing scheduling algorithms such as Heterogeneous Earliest Finish Time (HEFT), Critical Path on a Processor (CPOP) and Levelized Min Time (LMT), in terms of schedule length ratio, speedup, efficiency, running time and frequency of best results.
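List scheduling of this kind has two moving parts: a task priority and a processor choice. The Python below is a generic sketch in the HEFT/PETS spirit, assuming a DAG given as successor lists, per-processor task costs and zero communication cost; it is not the authors' exact priority rule.

```python
# Generic list scheduling sketch: rank tasks, then pick earliest-finish processor.
def list_schedule(tasks, succs, cost):
    # cost[t] = list of execution times of task t, one per processor
    n_procs = len(next(iter(cost.values())))
    rank = {}
    def upward(t):
        # Priority: average cost plus the highest successor rank.
        if t not in rank:
            avg = sum(cost[t]) / n_procs
            rank[t] = avg + max((upward(s) for s in succs[t]), default=0.0)
        return rank[t]
    order = sorted(tasks, key=upward, reverse=True)   # predecessors first
    proc_free = [0.0] * n_procs
    finish = {}
    for t in order:
        # Earliest moment all predecessors of t have finished.
        ready = max((finish[u] for u in tasks if t in succs[u]), default=0.0)
        # Pick the processor giving the earliest finish time for t.
        best = min(range(n_procs),
                   key=lambda p: max(proc_free[p], ready) + cost[t][p])
        start = max(proc_free[best], ready)
        finish[t] = start + cost[t][best]
        proc_free[best] = finish[t]
    return finish   # task -> finish time; makespan = max(finish.values())
```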

144 citations


Journal ArticleDOI
TL;DR: A dynamic tree-based model to represent Grid architecture in order to manage workload is proposed, and a hierarchical load balancing strategy and associated algorithms based on a neighbourhood property are developed to decrease the number of messages exchanged between Grid resources.
Abstract: Workload and resource management are two essential functions provided at the service level of the Grid software infrastructure. To improve the global throughput of these environments, effective and efficient load balancing algorithms are fundamentally important. Most strategies were developed assuming a homogeneous set of resources linked by homogeneous and fast networks. For computational Grids, however, we must address new challenges such as heterogeneity, scalability and adaptability. Our contributions in this perspective are twofold. First, we propose a dynamic tree-based model to represent Grid architecture in order to manage workload. This model is characterized by three main features: (i) it is hierarchical; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical Grid architecture. Second, we develop a hierarchical load balancing strategy and associated algorithms based on a neighbourhood property. The main benefit of this idea is to decrease the number of messages exchanged between Grid resources. As a consequence, the communication overhead induced by task transfer and information flow is reduced. In order to evaluate the practicability and performance of our strategy, we developed a Grid simulator in Java. The first results of our experiments are very promising: we achieved a significant improvement in mean response time with a reduction in communication cost. This means the proposed model can lead to better load balancing between resources without high overhead.

124 citations


Journal ArticleDOI
TL;DR: This study uses Functional Link Artificial Neural Networks (FLANN) for the classification task; FLANN uses a single-layer feed-forward network and overcomes the problem of non-linearity that is commonly encountered in single-layer networks.
Abstract: In solving the classification task of data mining, traditional algorithms such as the multi-layer perceptron take a long time to optimize the weight vectors, and the complexity of the network increases as the number of layers increases. In this study, we have used Functional Link Artificial Neural Networks (FLANN) for the task of classification. In contrast to multiple-layer networks, the FLANN architecture uses a single-layer feed-forward network. Using functionally expanded features, FLANN overcomes the problem of non-linearity that is commonly encountered in single-layer networks. Features such as the simplicity of the architecture design and the low computational complexity of the network encourage its use in data mining tasks. An extensive simulation study is presented to demonstrate the effectiveness of the classifier.
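The idea is to trade hidden layers for a fixed functional expansion of the input. A minimal numpy sketch follows, assuming a trigonometric expansion and a least-squares fit of the single trainable layer; the expansion order and training rule are illustrative assumptions.

```python
# Functional-link ANN sketch: expand features, then train one linear layer.
import numpy as np

def expand(X, order=2):
    # X: (n_samples, n_features). Expansion: x -> [x, sin(k*pi*x), cos(k*pi*x)].
    parts = [X]
    for k in range(1, order + 1):
        parts += [np.sin(k * np.pi * X), np.cos(k * np.pi * X)]
    return np.hstack(parts)

def fit(X, y):
    Phi = expand(X)
    # Single layer: solve Phi @ w ~= y in the least-squares sense.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, w):
    return np.sign(expand(X) @ w)     # binary classification via sign
```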

114 citations


Journal ArticleDOI
TL;DR: The neuro-fuzzy technique shows that MRI brain tumor segmentation using HSOM-FCM is also more accurate; the lowest-level weight vector is obtained at the abstraction level.
Abstract: The implementation of a neuro-fuzzy segmentation process for MRI data is presented in this study to detect various tissues such as white matter, gray matter, CSF and tumor. The advantages of the hierarchical self-organizing map (HSOM) and fuzzy c-means (FCM) algorithms are combined to classify the image layer by layer. The lowest-level weight vector is obtained at the abstraction level. We have also achieved a higher count of tumor pixels with this neuro-fuzzy approach. The computation speed of the proposed method is also studied. The multilayer segmentation results of the neuro-fuzzy approach are shown to have interesting consequences from the viewpoint of clinical diagnosis. The results show that MRI brain tumor segmentation using HSOM-FCM is also more accurate.
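The FCM half of the pipeline is short to state: alternate between weighted cluster means and membership updates. Below is a minimal numpy sketch for 1-D intensities with fuzzifier m = 2; the HSOM layer-by-layer stage and the paper's parameters are not reproduced.

```python
# Fuzzy c-means sketch over pixel intensities (illustrative parameters).
import numpy as np

def fcm(x, c=4, m=2.0, iters=100, eps=1e-5):
    x = x.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((len(x), c)); u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted means
        d = np.abs(x - centers.T) + 1e-12                  # distances to centers
        u_new = 1.0 / (d ** (2 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)          # normalize memberships
        if np.abs(u_new - u).max() < eps:
            u = u_new; break
        u = u_new
    return u.argmax(axis=1), centers                       # hard labels, centers
```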

78 citations


Journal ArticleDOI
TL;DR: The accuracy of the event classification process is significantly improved by the proposed approach, which reduces missed alarms by using recursive log-likelihood and entropy estimation to monitor model degradation related to behaviour changes and to drive the associated model update.
Abstract: Intrusion detection systems (IDSs) have been widely used to overcome security threats in computer networks. Anomaly-based approaches have the advantage of being able to detect previously unknown attacks, but they suffer from the difficulty of building robust models of acceptable behaviour, which may result in a large number of false alarms caused by incorrect classification of events in current systems. We propose a new approach to anomaly intrusion detection. It consists of building a reference behaviour model and using a Bayesian classification procedure associated with an unsupervised learning algorithm to evaluate the deviation between current and reference behaviour. Continuous re-estimation of model parameters allows for real-time operation. The use of recursive log-likelihood and entropy estimation as a measure for monitoring model degradation related to behaviour changes, together with the associated model update, shows that the accuracy of the event classification process is significantly improved by our approach, reducing missed alarms.

54 citations


Journal ArticleDOI
TL;DR: This work overviews some recently proposed discrete Fourier transform (DFT)- and discrete wavelet packet transform (DWPT)-based speech parameterization methods and compares their performance against traditional techniques, such as the Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive (PLP) cepstral coefficients, which presently dominate the speech recognition field.
Abstract: In the present work we overview some recently proposed discrete Fourier transform (DFT)- and discrete wavelet packet transform (DWPT)-based speech parameterization methods and evaluate their performance on the speech recognition task. Specifically, in order to assess the practical value of these less-studied speech parameterization methods, we evaluate them in a common experimental setup and compare their performance against traditional techniques, such as the Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive (PLP) cepstral coefficients, which presently dominate the speech recognition field. In particular, utilizing the well-established TIMIT speech corpus and employing the Sphinx-III speech recognizer, we present comparative results for 8 different speech parameterization techniques.
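For reference, the MFCC baseline the newer parameterizations are measured against can be computed in a few lines. The sketch below assumes the librosa package, a 16 kHz mono recording at an illustrative path and the common 25 ms/10 ms framing; it is not tied to the paper's Sphinx-III setup.

```python
# Baseline MFCC extraction sketch (illustrative file path and framing).
import librosa

y, sr = librosa.load('utterance.wav', sr=16000)
# 13 cepstral coefficients per frame, 25 ms windows with a 10 ms hop.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=int(0.025 * sr),
                            hop_length=int(0.010 * sr))
print(mfcc.shape)   # (13, n_frames)
```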

Journal ArticleDOI
TL;DR: Design and prototyping of a voice-based door access control system for building security; experimental results confirm the effectiveness of the proposed intelligent voice-based door access control system in terms of the false acceptance rate and false rejection rate.
Abstract: Secure buildings are currently protected from unauthorized access by a variety of devices. Even though there are many kinds of devices to guarantee system safety, such as PIN pads, keys (both conventional and electronic), identity cards, cryptographic and dual control procedures, a person's voice can also be used. The ability to verify the identity of a speaker by analyzing speech, or speaker verification, is an attractive and relatively unobtrusive means of providing security for admission into an important or secured place. An individual's voice cannot be stolen, lost, forgotten, guessed or impersonated with accuracy. Due to these advantages, this paper describes the design and prototyping of a voice-based door access control system for building security. In the proposed system, access may be authorized simply by an enrolled user speaking into a microphone attached to the system. The proposed system then decides whether to accept or reject the user's identity claim, or possibly reports insufficient confidence and requests additional input before making the decision. Furthermore, an intelligent-system approach is used to develop authorized-person models based on their voices. In particular, an Adaptive-Network-based Fuzzy Inference System is used in the proposed system to distinguish authorized from unauthorized people. Experimental results confirm the effectiveness of the proposed intelligent voice-based door access control system in terms of the false acceptance rate and false rejection rate.

Journal ArticleDOI
TL;DR: A multi-agent framework for load balancing in heterogeneous clusters is given, with the types of agents and the policies needed to meet the requirements of the proposed load balancing; preliminary experimental results demonstrate that the proposed framework is more effective than existing ones.
Abstract: Distributed dynamic load balancing (DDLB) is an important system function intended to distribute workload among available processors to improve throughput and/or execution times of parallel computers in cluster computing. Instead of balancing the load in a cluster by process migration, or by moving an entire process to a less loaded computer, we attempt to balance load by splitting processes into separate jobs and then balancing them across nodes. To achieve this, we use mobile agents (MA) to distribute load among nodes in a cluster. In this study, a multi-agent framework for load balancing in heterogeneous clusters is given. The total load on a node is calculated using queue length, measured as the total number of processes in the queue. We introduce the types of agents along with the policies needed to meet the requirements of the proposed load balancing. Different metrics are used to compare the load balancing mechanism with existing message-passing technology. The experiment is carried out on a cluster of PCs divided into multiple LANs using PMADE (Platform for Mobile Agent Distribution and Execution). Preliminary experimental results demonstrate that the proposed framework is more effective than existing ones.

Journal ArticleDOI
TL;DR: This study introduces a new metric, Aggregate Interface Queue Length (AIQL), into AODV in order to deal with load balancing issues; performance evaluation through simulation shows that the modified protocol can perform better than conventional AODV.
Abstract: AODV is a prominent routing protocol for MANET that uses hop count as the path selection metric. However, AODV has no means to convey the traffic load on the current route. This study focuses on introducing a new metric, Aggregate Interface Queue Length (AIQL), into AODV in order to deal with load balancing issues. Performance evaluation through simulation shows that the modified protocol can perform better than conventional AODV. We also evaluate the effect of interface queue length on normalized routing load, average throughput and average end-to-end delay.
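The metric itself is simple to express: sum the interface queue lengths along each candidate route and prefer the least-loaded one. The Python sketch below is a minimal illustration with assumed route records; it is not the simulation code.

```python
# Route selection by Aggregate Interface Queue Length instead of hop count.
from dataclasses import dataclass

@dataclass
class Route:
    hops: list[str]           # node ids along the path
    queue_lengths: list[int]  # interface queue length reported at each hop

def aiql(route: Route) -> int:
    # Aggregate load: sum of interface queue lengths along the route.
    return sum(route.queue_lengths)

def select_route(candidates: list[Route]) -> Route:
    # Prefer the least-loaded route; break ties by hop count as in AODV.
    return min(candidates, key=lambda r: (aiql(r), len(r.hops)))
```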

Journal ArticleDOI
TL;DR: This paper focuses on using SMS for answering short-answer questions and on evaluating the answers with a simple matching process that provides adequate feedback.
Abstract: Emerging technologies are leading to the development of several new opportunities to guide and enhance learning that were unimaginable a few years ago. The move towards adopting mobile learning technologies is growing fast in both academic and industrial sectors. Mobile learning uses portable devices linked to a commercial public network, including different types of mobile phones and handheld computers. For mobile users, as in all mobile applications, SMS messaging is found to be the most useful and convenient communication technology. In mobile learning, there are only limited forms of conducting tests, such as the true/false and multiple-choice methods. Answering short questions is a better way of testing students, as it reveals more about their grasp of a particular entity. Messaging can thus be highly useful for answering such short-answer questions. When answers are to be given as short messages to the assessors, evaluating them may not be as easy as in other simple types of tests. This paper focuses on using SMS for answering short-answer questions and evaluating them with a simple matching process, providing adequate feedback.
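A simple matching process of this sort can be approximated with ordinary string similarity. The sketch below uses Python's standard difflib; the normalization steps and the 0.8 acceptance threshold are illustrative assumptions, not the paper's rules.

```python
# Short-answer scoring by string similarity (illustrative threshold).
from difflib import SequenceMatcher

def score_answer(student: str, model: str, threshold: float = 0.8) -> bool:
    a = ' '.join(student.lower().split())    # normalize case and spacing
    b = ' '.join(model.lower().split())
    ratio = SequenceMatcher(None, a, b).ratio()
    return ratio >= threshold                # accept if similar enough

# e.g. score_answer('Ada  Lovelace', 'ada lovelace') -> True
```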

Journal ArticleDOI
TL;DR: This study presents a cryptanalysis method based on Genetic Algorithm and Tabu Search to break a mono-alphabetic substitution cipher in ad-hoc networks, and compares and analyzes the performance of these algorithms in automated attacks on the mono-alphabetic substitution cipher.
Abstract: With the exponential growth of networked systems and applications such as e-Commerce, the demand for effective Internet security is increasing. Cryptology is the science and study of systems for secret communication. It consists of two complementary fields of study: cryptography and cryptanalysis. This study presents a cryptanalysis method based on Genetic Algorithm and Tabu Search to break a mono-alphabetic substitution cipher in ad-hoc networks. We have also compared and analyzed the performance of these algorithms in automated attacks on the mono-alphabetic substitution cipher. The use of Tabu Search is a largely unexplored area in the field of cryptanalysis. A generalized version of these algorithms can be used for attacking other ciphers as well.
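The search core of such an attack is a key-scoring function plus a neighbourhood move. The Python below is a minimal tabu-style sketch assuming a key represented as a 26-letter cipher alphabet, uppercase-only ciphertext and an English bigram-frequency table `EXPECTED` supplied by the caller; the GA variant and all parameters are illustrative assumptions.

```python
# Tabu-style attack sketch on a mono-alphabetic substitution cipher.
import random, string
from collections import Counter

def decrypt(ct, key):
    # key: 26-letter cipher alphabet mapped back to A..Z.
    return ct.translate(str.maketrans(key, string.ascii_uppercase))

def fitness(pt, EXPECTED):
    # Closer the observed bigram frequencies are to English, the fitter the key.
    bigrams = Counter(pt[i:i + 2] for i in range(len(pt) - 1))
    n = max(sum(bigrams.values()), 1)
    return -sum(abs(bigrams[b] / n - EXPECTED.get(b, 0.0)) for b in bigrams)

def tabu_attack(ct, EXPECTED, iters=5000, tabu_size=100):
    key = list(string.ascii_uppercase); random.shuffle(key)
    best = key[:]
    best_fit = fitness(decrypt(ct, ''.join(best)), EXPECTED)
    tabu = []
    for _ in range(iters):
        i, j = random.sample(range(26), 2)
        cand = best[:]; cand[i], cand[j] = cand[j], cand[i]   # swap two letters
        if cand in tabu:
            continue                                          # recently visited
        f = fitness(decrypt(ct, ''.join(cand)), EXPECTED)
        if f > best_fit:
            best, best_fit = cand, f
        tabu.append(cand); tabu = tabu[-tabu_size:]           # bounded tabu list
    return ''.join(best)
```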

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the new ant-based algorithm AntClust is able to extract the correct number of clusters with good clustering quality compared to the results obtained from a classical clustering algorithm such as K-means.
Abstract: A collective approach to resolving the segmentation problem is proposed. AntClust is a new ant-based algorithm that uses the self-organizing and autonomous brood-sorting behavior observed in real ants. Ants and pixels are scattered on a discrete array of cells representing the ants' environment. Using simple local rules and without any central control, ants form homogeneous clusters by moving pixels between the cells of the array according to a local similarity function. Initial knowledge of the number of clusters and an initial partition are not needed during the clustering process. Experimental results conducted on synthetic and real images demonstrate that our algorithm AntClust is able to extract the correct number of clusters with good clustering quality compared to the results obtained from a classical clustering algorithm such as K-means.

Journal ArticleDOI
TL;DR: Fuzzy control is presented for an active power filter in three-phase systems comprising nonlinear loads; the results of the computer simulation show that the injected harmonics are greatly reduced and that system efficiency and power factor are improved.
Abstract: Harmonic distortion is a form of electrical noise. It is a superposition of signals at multiples of the fundamental frequency. The proliferation of large power electronic systems results in increased harmonic distortion. Harmonic distortion reduces power quality and system stability. This paper presents fuzzy control applicable to an active power filter for three-phase systems comprising nonlinear loads. The active filter is based on a three-phase inverter with six controllable switches. The AC side of the inverter is connected in parallel with the other nonlinear loads through a filter inductance, and the DC side is connected to a filter capacitor. The Fuzzy Controller (FC) is used to shape the current through the filter inductor such that the line current is in phase with, and of the same shape as, the input voltage. The results of the computer simulation show that the injected harmonics are greatly reduced and that system efficiency and power factor are improved.

Journal ArticleDOI
TL;DR: A pre-processing technique for reducing the size and enhancing the quality of USF and MIAS mammogram images is introduced; on average, an 87% reduction in size is obtained with no loss of data in the breast region.
Abstract: High-quality mammogram images are high-resolution, large images. Processing these images requires high computational capabilities. The transmission of these images over the net is sometimes critical, especially if diagnosis by remote radiologists is required. In this paper, a pre-processing technique for reducing the size and enhancing the quality of USF and MIAS mammogram images is introduced. The algorithm analyses the mammogram image to determine whether a 16-bit to 8-bit conversion is required. Enhancement is applied next, followed by a scaling process to reduce the mammogram size. The performance of the algorithms is evaluated objectively and subjectively. On average, an 87% reduction in size is obtained with no loss of data in the breast region.
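The conversion-plus-scaling stage can be sketched compactly. The Python below assumes numpy and Pillow, a percentile-based contrast stretch and a 0.5 scale factor; the paper's enhancement step and exact parameters are not reproduced.

```python
# 16-bit to 8-bit conversion and down-scaling sketch (illustrative parameters).
import numpy as np
from PIL import Image

def preprocess(mammo16: np.ndarray, scale: float = 0.5) -> Image.Image:
    # Contrast-stretch the used intensity range into 8 bits.
    lo, hi = np.percentile(mammo16, (1, 99))
    img8 = np.clip((mammo16 - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    im = Image.fromarray(img8)
    # Downscale to reduce file size for transmission.
    w, h = im.size
    return im.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
```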

Journal ArticleDOI
TL;DR: A novel QoS routing algorithm called Swarm-based Distance Vector Routing (SDVR), based on ant colony optimization, is proposed to support delay, jitter and energy constraints for Quality of Service in Mobile Ad-hoc Networks.
Abstract: Quality of Service support for Mobile Ad-hoc Networks is a challenging task due to dynamic topology and limited resources. The main purpose of QoS routing is to find a feasible path that has sufficient resources to satisfy the constraints. A fundamental problem in QoS routing is to find a path between a source and destination that satisfies two or more end-to-end QoS constraints. In this paper, a novel QoS routing algorithm called Swarm-based Distance Vector Routing (SDVR), based on ant colony optimization, is proposed to support delay, jitter and energy constraints. The simulation results of SDVR are compared with those of the reactive routing protocol Ad-hoc On-demand Distance Vector (AODV) routing. SDVR produces better performance than AODV in terms of packet delivery ratio, throughput, end-to-end delay, energy and jitter.

Journal ArticleDOI
TL;DR: A block cipher built from a large key matrix of size n×n and a plaintext matrix of n rows and two columns; the cryptanalysis carried out clearly indicates that the cipher cannot be broken by any cryptanalytic attack.
Abstract: In this research, we have developed a block cipher by taking a large key matrix of size n×n and a plaintext matrix containing n rows and two columns. In this cipher, the plaintext column vectors, operated on by the key matrix, are thoroughly interlaced at each stage of the iteration. As a typical example, we have taken the key in the form of an 8×8 matrix and the plaintext in the form of an 8×2 matrix. Here the key is 384 binary bits in size and the plaintext is 112 binary bits. The cryptanalysis carried out in this research clearly indicates that the cipher cannot be broken by any cryptanalytic attack.
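The key-matrix idea is in the classical Hill-cipher family. The Python below is a minimal Hill-style sketch of one matrix encryption step (C = K·P mod 256); the authors' iterative interlacing stages, exact moduli and bit sizes are not reproduced.

```python
# Hill-style sketch of one matrix block-encryption step, C = K @ P mod 256.
# Key generation must ensure K is invertible mod 256 so decryption works;
# the paper's interlacing iterations are omitted.
import numpy as np

rng = np.random.default_rng(1)
K = rng.integers(0, 256, size=(8, 8))     # 8x8 key matrix (illustrative)
P = rng.integers(0, 128, size=(8, 2))     # 8x2 plaintext block

C = (K @ P) % 256                         # ciphertext block
```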

Journal ArticleDOI
TL;DR: A system for recognizing offline handwritten Tamil characters using a Support Vector Machine (SVM) achieves a very good recognition rate of 87.4% on a totally unconstrained handwritten Tamil character database.
Abstract: This study describes a system for recognizing offline handwritten Tamil characters using a Support Vector Machine (SVM). Data samples are collected from different writers on A4-sized documents. They are scanned using a flatbed scanner at a resolution of 300 dpi and stored as greyscale images. Various preprocessing operations are performed on the digitized image to enhance its quality. The preprocessed image, of arbitrary size, is normalized to a uniform size. Pixel densities are calculated for different zones of the image and these values are used as the features of a character. These features are used to train and test the support vector machine, which is tested here for the first time for recognizing handwritten Tamil characters. The recognition results are tested for three standard sizes: 32×32, 48×48 and 64×64. Pixel densities are calculated for various zones, and also for overlapping zones, of the 64×64 image. The best results are obtained for the 64×64 normalized image with overlapping windows. The system is trained on 106 different characters and test results are given for 34 different Tamil characters. With the simple feature of pixel density, the system achieves a very good recognition rate of 87.4% on a totally unconstrained handwritten Tamil character database.
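The zone-density feature is easy to make concrete. The Python sketch below assumes a binarized 64×64 character array and non-overlapping 8×8 zones feeding a scikit-learn SVM; the paper's overlapping-window variant follows the same idea with shifted zones.

```python
# Zone pixel-density features for a 64x64 binary character image.
import numpy as np
from sklearn.svm import SVC

def zone_densities(img: np.ndarray, zone: int = 8) -> np.ndarray:
    h, w = img.shape                          # expected 64x64, values in {0, 1}
    feats = [img[r:r + zone, c:c + zone].mean()   # fraction of ink pixels
             for r in range(0, h, zone)
             for c in range(0, w, zone)]
    return np.array(feats)                    # 64 features for a 64x64 input

# X = np.stack([zone_densities(im) for im in images]); clf = SVC().fit(X, y)
```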

Journal ArticleDOI
TL;DR: The novelty of this paper lies in the application of neural network control algorithms, such as model reference control and Nonlinear Autoregressive-Moving Average (NARMA)-L2 control, to generate switching signals for the series compensator of the UPQC system.
Abstract: Power quality is an important measure of the performance of an electrical power system. This paper discusses the topology, the control strategies using artificial-intelligence-based controllers and the performance of a unified power quality conditioner (UPQC) for power quality improvement. The UPQC is an integration of shunt and series compensation to limit harmonic contamination to within 5%, the limit imposed by the IEEE-519 standard. The novelty of this paper lies in the application of neural network control algorithms, such as model reference control (MRC) and Nonlinear Autoregressive-Moving Average (NARMA)-L2 control, to generate switching signals for the series compensator of the UPQC system. The entire system has been modeled using the MATLAB 7.0 toolbox. Simulation results demonstrate the applicability of the MRC and NARMA-L2 controllers for the control of the UPQC.

Journal ArticleDOI
TL;DR: A method of texture classification based on long linear patterns is proposed, using the sum of occurrences of grain components of textures for feature extraction; the results indicate how the classification of textures is affected by different long linear patterns.
Abstract: The present paper proposes a method of texture classification based on long linear patterns. Linear patterns of long size are bright features defined by morphological properties: linearity, connectivity, width and a specific Gaussian-like profile whose curvature varies smoothly along the crest line. The most significant information of a texture often appears in the occurrence of grain components, which is why the present paper uses the sum of occurrences of grain components for feature extraction. The features are constructed from different combinations of long linear patterns with different orientations and offer a better discriminating strategy for texture classification. Further, the distance function computed from the sum of occurrences of grain components of textures is expected to enhance the class separability power. The class separability power of these features is investigated in classification experiments with arbitrarily chosen texture images taken from the Brodatz album. The experimental results indicate good performance and show how the classification of textures is affected by different long linear patterns.

Journal ArticleDOI
TL;DR: The approach implemented in this paper is hybrid: the wavelet transform and neural networks are used together to form a system with improved performance.
Abstract: In speaker identification systems, a database is constructed from the speech samples of known speakers. The approach implemented in this paper is hybrid: the wavelet transform and neural networks are used together to form a system with improved performance. Features are extracted by applying a discrete wavelet transform (DWT), while a neural network (NN) is used for formulating the system database and for handling the task of decision making. The neural network is trained using the feature vectors as inputs. A criterion depending on both the false acceptance ratio (FAR) and the false rejection ratio (FRR) is used to evaluate the system performance. To test the proposed system, a set of 25 male and female speakers of various ages was used. Results of admitting the members of this set to a secure system were computed and presented. The evaluation parameters obtained are FAR = 14.5% and FRR = 24.5%.
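The DWT feature-extraction front end can be sketched briefly. The Python below assumes the PyWavelets package, a 3-level db4 decomposition and log sub-band energies as the feature vector; the paper's decomposition depth and the NN back end are not specified here.

```python
# DWT sub-band energy features for a speech sample (illustrative settings).
import numpy as np
import pywt

def dwt_features(signal: np.ndarray, wavelet: str = 'db4', level: int = 3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # One log-energy value per sub-band: [cA_L, cD_L, ..., cD_1].
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])
```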

Journal ArticleDOI
TL;DR: The SVD transformation with a Naive Bayes scheme outperforms all other approaches and shows better results than the existing approach (LSA) used by some open-source code repositories, e.g. Sourceforge.
Abstract: Reuse repository managers manage reusable software components in different categories and need to find the category of each reusable software component. In this paper, we have used different pure and hybrid approaches to find the domain relevancy of a component to a particular domain. The Probabilistic Latent Semantic Analysis (PLSA) approach, LSA, the Singular Value Decomposition (SVD) technique, the Semi-Discrete Matrix Decomposition (SDD) technique and the Naive Bayes approach, purely as well as in hybrid form, are evaluated to determine the domain relevancy of software components. This exploits the fact that Feature Vector (FV) codes can be seen as documents containing terms, the identifiers present in the components, so text modeling methods that capture co-occurrence information in low-dimensional spaces can be used. The FV code representation of clusters or domains is used to find the domain relevancy of the software components. PLSA provides better results than LSA retrieval techniques in terms of precision and recall, but its time complexity is too high. The SVD transformation with a Naive Bayes scheme outperforms all other approaches and shows better results than the existing approach (LSA) used by some open-source code repositories, e.g. Sourceforge. The DR value determined is close to that of the manual analysis previously performed by programmers/repository managers. Hence, the tool can also be utilized for the automatic categorization of software components, and this kind of automation may improve the productivity and quality of software development.
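The winning hybrid, SVD followed by Naive Bayes, maps onto standard library components. The scikit-learn sketch below assumes component identifiers serialized as text documents `component_docs` with `domain_labels`, and an illustrative 50-dimensional concept space; it is not the authors' tool.

```python
# SVD-plus-Naive-Bayes hybrid for categorizing software components.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

clf = Pipeline([
    ('terms', CountVectorizer()),            # identifiers as document terms
    ('svd', TruncatedSVD(n_components=50)),  # low-dimensional concept space
    ('nb', GaussianNB()),                    # Naive Bayes over SVD features
])
# clf.fit(component_docs, domain_labels); clf.predict(new_component_docs)
```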

Journal ArticleDOI
TL;DR: Secure routing using an antnet mechanism and mutual authentication using Elliptic Curve Cryptography (ECC) are proposed to meet the requirements of preventing DoS attacks on data traffic while providing high speed, low energy overhead and scalability for future development.
Abstract: Secure end-to-end route discovery in decentralized Mobile Ad-hoc Networks (MANETs) has to meet the requirements of preventing DoS attacks on data traffic, should be adaptive and fault tolerant and must offer high speed, low energy overhead and scalability for future development. In this research, secure routing using an antnet mechanism and mutual authentication using Elliptic Curve Cryptography (ECC) are proposed to meet these requirements. The common perception of public key cryptography is that it is not well suited to ad-hoc networks because it is complex and slow. Against this popular belief, this research implements Elliptic Curve Cryptography, a public key cryptography scheme. ECC provides a level of security similar to conventional integer-based public-key algorithms, but with much shorter keys. Because of the shorter keys, ECC algorithms run faster, require less space and consume less energy. These advantages make ECC a better choice of public key cryptography, especially for resource-constrained systems like MANETs. Using the antnet routing algorithm, the most trusted route is selected for data transfer, and each Mobile Node (MN) in the MANET maintains the trust values of its one-hop neighbors. The mutual authentication between source and destination is done by master key exchange using Elliptic Curve Cryptography (ECC).
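The ECC master-key exchange at the heart of the mutual authentication corresponds to standard elliptic-curve Diffie-Hellman. The Python sketch below uses the `cryptography` package; the curve choice, the HKDF parameters and the surrounding antnet protocol are illustrative assumptions.

```python
# ECDH master-key agreement between two MANET nodes (illustrative curve/KDF).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Source and destination each generate an EC key pair.
src_key = ec.generate_private_key(ec.SECP256R1())
dst_key = ec.generate_private_key(ec.SECP256R1())

# Each side derives the same shared secret from the peer's public key.
shared_src = src_key.exchange(ec.ECDH(), dst_key.public_key())
shared_dst = dst_key.exchange(ec.ECDH(), src_key.public_key())
assert shared_src == shared_dst

# Derive a symmetric master key from the shared secret.
master = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
              info=b'manet-auth').derive(shared_src)
```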

Journal ArticleDOI
TL;DR: A system to determine the location of a mobile terminal or a handheld PDA in high-speed, low-cost wireless networks by using the wireless communications infrastructure; the results reveal that an accuracy of 0.05 m can be achieved.
Abstract: To date, many positioning systems are available to determine or track a user's location; the three main categories are the Global Positioning System (GPS), wide-area location systems and indoor positioning systems. GPS has its limitations (poor signals) in indoor or urban use, while wide-area location systems depend on cellular networks. For indoor positioning, many approaches such as infrared sensing, radio frequency and ultrasonics have been proposed; each of these methods has its own advantages and disadvantages. Considering cost-effectiveness, speed and accuracy, recent interest has been growing in using wireless technology. A Wireless Local-Area Network (WLAN) based positioning system has some distinct advantages, such as low cost and wider area coverage. This research proposes a system to determine the location of a mobile terminal or a handheld PDA in high-speed, low-cost wireless networks by using the wireless communications infrastructure. The experimental setup used the indoor wireless facility of an auditorium, where signals from three Access Points (APs) were recorded to train a position determination model to calculate and map a position. A grid model was applied and the resulting positions of a client were compared. A handheld PDA equipped with application software was the client device. An accuracy assessment was performed to identify the distance errors, and the average distance error was found to be lowest for the grid model. The results of the experiments reveal that an accuracy of 0.05 m can be achieved.
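A grid-based model of this kind reduces, in its simplest form, to signal-space fingerprint matching. The numpy sketch below assumes three APs, hand-picked training fingerprints on a small grid and nearest-neighbour matching; the paper's trained model and measured data are not reproduced.

```python
# Grid-based Wi-Fi fingerprinting: match live RSS to the nearest grid point.
import numpy as np

# RSS fingerprints (dBm) from 3 APs at known grid positions (x, y in metres).
train_rss = np.array([[-40, -62, -70], [-55, -48, -66], [-60, -65, -45]])
train_pos = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])

def locate(rss, k=1):
    # Nearest neighbour(s) in signal space; average their coordinates.
    d = np.linalg.norm(train_rss - np.asarray(rss), axis=1)
    nearest = np.argsort(d)[:k]
    return train_pos[nearest].mean(axis=0)

print(locate([-42, -60, -69]))   # -> close to grid point (0, 0)
```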

Journal ArticleDOI
TL;DR: If three key technologies, namely biometrics, smart cards and PKI, were implemented together, they could deliver a robust and trusted identification and authentication infrastructure that may provide the foundation for e-government and e-commerce initiatives, as it addresses the need for strong user authentication of virtual identities.
Abstract: This article looks at one of the evolving crimes of the digital age: identity theft. It argues and explains that if three key technologies, namely biometrics, smart cards and PKI, were implemented together, they could deliver a robust and trusted identification and authentication infrastructure. The article concludes that such an infrastructure may provide the foundation for e-government and e-commerce initiatives, as it addresses the need for strong user authentication of virtual identities.

Journal ArticleDOI
TL;DR: A computer-aided pathological speech therapy program, based on speech models such as the hidden Markov model and artificial intelligence networks, helps persons suffering from language pathologies follow a correction learning process with different interactive feedbacks, aiming to evaluate the degree of evolution of the illness or the therapy.
Abstract: This article concerns a computer-aided pathological speech therapy program, based on speech models such as the hidden Markov model and artificial intelligence networks, designed to help persons suffering from language pathologies follow a correction learning process with different interactive feedbacks, aiming to evaluate the degree of evolution of the illness or the therapy. We dealt with Arabic occlusive sigmatism, the inability to pronounce [s] or [∫], as a first approach. The results obtained are satisfying, and the therapy program is prepared for autonomous use by patients and for deeper analysis and verification.