
Showing papers in "Journal of Computer Science in 2009"


Journal ArticleDOI
TL;DR: The results presented here show that NN can be effectively employed in radar classification applications; the proposed network is compared to the K Nearest Neighbor classifier.
Abstract: Problem statement: This study unveils the potential and utilization of Neural Networks (NN) in radar applications for target classification. The radar system under test is a special one of its kind, known as Forward Scattering Radar (FSR). In this study the target is a ground vehicle represented by typical public road transport. The features from the raw radar signal were extracted manually prior to the classification process using the Neural Network (NN). Features given to the proposed network model were identified through radar theoretical analysis. A Multi-Layer Perceptron (MLP) back-propagation neural network trained with three back-propagation algorithms was implemented and analyzed. In the NN classifier, the unknown target is presented to the network trained on the known targets to obtain an accurate output. Approach: Two types of classification were analyzed. The first was to classify the exact type of vehicle, for which four vehicle types were selected. The second was to group vehicles into their categories. The proposed NN architecture was compared to the K Nearest Neighbor (KNN) classifier and the performance was evaluated. Results: Based on the results, the proposed NN provides a higher percentage of successful classification than the KNN classifier. Conclusion/Recommendation: The results presented here show that NN can be effectively employed in radar classification applications.

61 citations
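A minimal illustrative sketch (in Python with scikit-learn) of the kind of MLP-versus-KNN comparison described above; the feature matrix, class labels and network size are placeholders rather than the study's FSR data or settings.

# Illustrative sketch (not the authors' code): comparing an MLP back-propagation
# classifier against KNN on hand-extracted radar features, as the abstract describes.
# The synthetic feature matrix below is a placeholder for the real FSR features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))          # 6 hypothetical features per radar signature
y = rng.integers(0, 4, size=400)       # 4 vehicle classes, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

print("MLP accuracy:", mlp.score(X_te, y_te))
print("KNN accuracy:", knn.score(X_te, y_te))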


Journal ArticleDOI
TL;DR: An iris recognition system that produces very low error rates was successfully designed and used the discrete cosine transform for feature extraction and artificial neural networks for classification.
Abstract: Problem statement: The study presented an efficient Iris recognition system. Approach: The design used the discrete cosine transform for feature extraction and artificial neural networks for classification. The iris images used in this system were obtained from the CASIA database. Results: A robust system for iris recognition was developed. Conclusion: An iris recognition system that produces very low error rates was successfully designed.

56 citations


Journal ArticleDOI
TL;DR: This study proposed a new framework for an image steganography system that hides the digital text of a secret message in the pixels of an image in such a manner that the human visual system cannot distinguish between the original and the stego-image, while a specialized reader machine can easily recover the message.
Abstract: Problem statement: Steganography hides the very existence of a message so that, if successful, it generally attracts no suspicion at all. Using steganography, information can be hidden in carriers such as images, audio files, text files, videos and data transmissions. In this study, we proposed a new framework for an image steganography system to hide the digital text of a secret message. Approach: The main idea is to use a sufficient number of bits from each pixel in an image (7 bits in this study) to map them to the 26 alphabetic English characters ('a'…'z') together with some special characters that are most often used in writing a secret message. The main goal of this method, as with any steganography technique, is to hide the text of a secret message in the pixels of the image in such a manner that the human visual system is not able to distinguish between the original and the stego-image, while the message can easily be recovered by a specialized reader machine. Results: The method was implemented on different (long and short) messages and images. The carrier images used in the experiments showed no discernible change. Conclusion: The recorded experimental results showed that the proposed method can be used effectively in the field of steganography.

53 citations
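A minimal sketch of the pixel-level idea described above, assuming each character of the secret message is written into the 7 low-order bits of one grayscale pixel; the paper's exact character mapping and framework details are not reproduced.

# Minimal sketch, under the assumption that each message character is stored in the
# 7 low-order bits of one grayscale pixel. The paper's exact character-to-bit mapping
# is not specified here.
import numpy as np

def embed(image: np.ndarray, message: str) -> np.ndarray:
    flat = image.flatten().astype(np.uint8)
    if len(message) > flat.size:
        raise ValueError("message too long for this cover image")
    stego = flat.copy()
    for i, ch in enumerate(message):
        code = ord(ch) & 0x7F                    # 7-bit ASCII code of the character
        stego[i] = (stego[i] & 0x80) | code      # keep the top bit, replace the low 7
    return stego.reshape(image.shape)

def extract(stego: np.ndarray, length: int) -> str:
    flat = stego.flatten()
    return "".join(chr(int(p) & 0x7F) for p in flat[:length])

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, "secret message")
print(extract(stego, len("secret message")))     # -> "secret message"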


Journal ArticleDOI
TL;DR: The system successfully designs and implements a neural network that meets the stated requirements; the resulting system is able to recognize Arabic numerals written manually by users.
Abstract: Problem statement: Handwritten number recognition is a challenging problem that researchers have been investigating for a long time, especially in recent years. Many fields are concerned with numbers, for example checks in banks or recognizing numbers on car plates, which is where the subject of digit recognition appears. A system for recognizing isolated digits may serve as an approach for dealing with such applications; in other words, it lets the computer understand Arabic numerals written manually by users and process them accordingly. Scientists and engineers with interests in image processing and pattern recognition have developed various approaches to deal with handwritten number recognition problems, such as minimum distance, decision trees and statistics. Approach: The main objective of our system was to recognize isolated Arabic digits as they appear in different applications. Different users have their own handwriting styles, and the main challenge is to let the computer system understand these different styles and recognize them as standard writing. Results: We presented a system for dealing with this problem. The system starts by acquiring an image containing digits; the image is digitized using optical devices and, after applying some enhancements and modifications to the digits within the image, it is recognized using a feed-forward back-propagation algorithm. The studies were conducted on the Arabic handwritten digits of 10 independent writers who contributed a total of 1300 isolated Arabic digits, divided into two data sets: 1000 digits for training and 300 for testing. The overall accuracy achieved with this system was 95% on the test data set. Conclusion: We developed a system for Arabic handwritten digit recognition and chose a segmentation method that fits our demands. The system successfully designs and implements a neural network that meets these demands, after which it is able to recognize Arabic numerals written manually by users.

51 citations


Journal ArticleDOI
TL;DR: Extracting and selecting statistical features from handwritten Arabic letters, their main bodies and their secondary components provided feature subsets that give higher recognition accuracies compared to the subsets of the whole letters alone.
Abstract: Problem statement: Offline recognition of handwritten Arabic text awaits accurate recognition solutions. Most of the Arabic letters have secondary components that are important in recognizing these letters. However, these components have large writing variations. We targeted enhancing the feature extraction stage in recognizing handwritten Arabic text. Approach: In this study, we proposed a novel feature extraction approach for handwritten Arabic letters. Pre-segmented letters were first partitioned into main body and secondary components. Then moment features were extracted from the whole letter as well as from the main body and the secondary components. Using a multi-objective genetic algorithm, efficient feature subsets were selected. Finally, various feature subsets were evaluated according to their classification error using an SVM classifier. Results: The proposed approach improved the classification error in all cases studied. For example, the improvements of 20-feature subsets of normalized central moments and Zernike moments were 15 and 10%, respectively. Conclusion/Recommendations: Extracting and selecting statistical features from handwritten Arabic letters, their main bodies and their secondary components provided feature subsets that give higher recognition accuracies compared to the subsets of the whole letters alone.

50 citations


Journal ArticleDOI
TL;DR: The proposed algorithm outperformed both static and adaptive Huffman algorithms, in terms of compression ratio and was well suited to embedding in sensor nodes for compressed data communication.
Abstract: Problem statement: Efficient utilization of energy has been a core area of research in wireless sensor networks. Sensor nodes deployed in a network are battery operated. As batteries cannot be recharged frequently in the field, energy optimization becomes paramount in prolonging the battery life and, consequently, the network lifetime. The communication module accounts for a major part of the energy expenditure of a sensor node. Hence, data compression methods that reduce the number of bits to be transmitted by the communication module will significantly reduce the energy requirement and increase the lifetime of the sensor node. The objective of the study was the design of an efficient data compression algorithm specifically suited to wireless sensor networks. Approach: In this investigation, the natural correlation in typical wireless sensor network data was exploited and a modified Huffman algorithm suited to wireless sensor networks was designed. Results: The performance of the modified adaptive Huffman algorithm was analyzed and compared with the static and adaptive Huffman algorithms. The results indicated a better compression ratio. Conclusion: The proposed algorithm outperformed both static and adaptive Huffman algorithms in terms of compression ratio and was well suited to embedding in sensor nodes for compressed data communication.

44 citations
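For illustration, a plain static Huffman coder over quantized sensor readings in Python; the paper's modified adaptive variant is not reproduced, this only shows the baseline kind of coding against which it is compared.

# Sketch of plain static Huffman coding over quantized sensor readings.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return {symbol: bitstring} for the given iterable of symbols."""
    freq = Counter(symbols)
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prefix 0 on the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # prefix 1 on the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return {s: code for s, code in heap[0][2:]}

readings = [21, 21, 22, 21, 23, 22, 21, 21, 24, 22]   # e.g., temperature samples
codes = huffman_code(readings)
compressed_bits = sum(len(codes[r]) for r in readings)
print(codes, compressed_bits, "bits vs", 8 * len(readings), "bits uncompressed")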


Journal ArticleDOI
TL;DR: A fragile image authentication system with tamper localization in the wavelet domain is proposed; the secret data to be embedded is a logo, and the scheme can be used in insurance, forensics departments and other applications.
Abstract: Problem statement: In recent years, as digital media have gained wider popularity, their security-related issues have become a greater concern. A method for authenticating and assuring the integrity of the image is required. Image authentication is possible by embedding a layer of the authentication signature into the digital image using a digital watermark. In some applications tamper localization is also required. Approach: In this study, we proposed a fragile image authentication system with tamper localization in the wavelet domain. In this scheme, the secret data to be embedded is a logo. The watermark was generated by repeating the logo image so that the size of the watermark matches the size of the HH sub-band of the integer wavelet transform. To provide an additional level of security, the generated watermark was scrambled using a shared secret key. The integer Haar wavelet transform was applied to obtain the wavelet coefficients. The watermark was embedded into the coefficients using odd-even mapping. Results: Experimental results demonstrated that the proposed scheme detected and localized tampering at the pixel level. The scheme was tested with images of various sizes and tampering of various sizes, and provided good results for tampering ranging from a single pixel to a block of pixels. Conclusion: Since watermarking was done in the wavelet domain, conventional watermarking attacks were not possible. Tamper localization was achieved at the pixel level. The watermarked image's quality was maintained while providing pixel-level tampering accuracy. The proposed scheme can be used in insurance and forensics departments.

40 citations
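A small sketch of the odd-even mapping step alone, assuming the HH-subband coefficients of the integer Haar transform have already been computed; the logo generation, scrambling key and the transform itself are omitted.

# Sketch of odd-even mapping: each watermark bit is written into one HH-subband
# coefficient by forcing its parity (even = 0, odd = 1).
import numpy as np

def embed_bits(hh: np.ndarray, bits: np.ndarray) -> np.ndarray:
    out = hh.flatten().astype(np.int64)
    for i, b in enumerate(bits.flatten()):
        if (out[i] & 1) != int(b):        # parity disagrees with the watermark bit
            out[i] += 1                    # nudge the coefficient to flip its parity
    return out.reshape(hh.shape)

def extract_bits(hh: np.ndarray) -> np.ndarray:
    return (hh.astype(np.int64) & 1).astype(np.uint8)

hh = np.random.randint(-30, 30, size=(4, 4))
watermark = np.random.randint(0, 2, size=(4, 4))
marked = embed_bits(hh, watermark)
assert np.array_equal(extract_bits(marked), watermark)
# tampering a coefficient flips its parity, so the mismatch localizes the change
print("tampered positions:", np.argwhere(extract_bits(marked + np.eye(4, dtype=int)) != watermark))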


Journal ArticleDOI
TL;DR: A cane that acquires SIFT features for an object or site from a sequence of live images using the suggested approach performs very satisfactorily, giving a wide range of indoor navigation that may also be used outdoors.
Abstract: Problem statement: Image-based methods are a new approach for solving navigation problems for visually impaired people. Approach: The study introduced a new approach for an electronic cane for blind people in which the environment is represented as a weighted topological graph, where each node contains images taken at some poses in the workspace, instead of building a metric (3D) model of the environment. Results: Weights are computed between the already stored images and the real scene of the environment, taking into account considerations such as sessions. The system advises the blind person on the right direction for indoor navigation depending on the weights and the session, where a mono camera held on the cane provides information about what is in front of the visually impaired person. Conclusion: A cane that acquires SIFT features for an object or site from a sequence of live images using the suggested approach is very satisfactory. The session and weight mechanisms speed up the system and give a wide range of indoor navigation that may also be used outdoors. Experimental results demonstrated good performance of the proposed method; the identification of different scenes for the blind person is done by constructing the weighted visual environment graph for the system. The proposed scheme uses SIFT features to represent the objects and the sites.

36 citations


Journal ArticleDOI
TL;DR: Several fault and load disturbance simulation results are presented to stress the effectiveness of the proposed TCSC controller in a multi-machine power system and show that the proposed intelligent controls improve the dynamic performance of the TCSC devices and the associated power network.
Abstract: This study applies a neural-network-based optimal TCSC controller for damping oscillations. The optimal neural network controller is related to model-reference adaptive control; the controller network is developed based on the recursive "pseudo-linear regression" method. Problem statement: The optimal NN controller is designed to damp out the low-frequency local and inter-area oscillations of a large power system. Approach: Two multilayer-perceptron neural networks are used in the design: the identifier/model network to identify the dynamics of the power system and the controller network to provide optimal damping. By applying this controller to the TCSC devices, the damping of inter-area modes of oscillation in a multi-machine power system is handled properly. Results: The effectiveness of the proposed optimal controller is demonstrated on two power system problems. The first case involves TCSC supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case addresses a complex system to provide a very good solution to the oscillation damping control problem in the Southern Malaysian Peninsular Power Grid. Conclusion: Finally, several fault and load disturbance simulation results are presented to stress the effectiveness of the proposed TCSC controller in a multi-machine power system and show that the proposed intelligent controls improve the dynamic performance of the TCSC devices and the associated power network.

36 citations


Journal ArticleDOI
TL;DR: The results of this study showed that MKS-SSVM was effective for diabetes disease diagnosis, which is very promising compared to previously reported results.
Abstract: Problem statement: Research on the Smooth Support Vector Machine (SSVM) is an active field in data mining. Many researchers have developed methods to improve the accuracy of the results. This study proposed a new SSVM for classification problems, called the Multiple Knot Spline SSVM (MKS-SSVM). To evaluate the effectiveness of our method, we carried out an experiment on the Pima Indian diabetes dataset. The accuracy of previously reported results on this dataset was still under 80%. Approach: First, the theory of MKS-SSVM was presented. Then, the application of MKS-SSVM to diabetes disease diagnosis and a comparison with SSVM were given. Results: Compared to the SSVM, the proposed MKS-SSVM showed better performance in classifying diabetes disease diagnosis, with an accuracy of 93.2%. Conclusion: The results of this study showed that MKS-SSVM was effective for diabetes disease diagnosis, which is very promising compared to previously reported results.

35 citations


Journal ArticleDOI
TL;DR: The proposed method involved inscribing the text in the document in an arbitrary polygon and deriving the baseline from the polygon's centroid; its suitability for documents with different fonts and different resolutions was demonstrated.
Abstract: Problem statement: Skew detection and correction is the first step in document analysis and understanding. Correcting a skewed scanned document image is very important, because it has a direct effect on the reliability and efficiency of the segmentation and feature extraction stages. Noise and variation in document resolution or type are still the two main challenges facing Arabic skew detection and correction methods. Approach: The proposed method involved inscribing the text in the document in an arbitrary polygon and deriving the baseline from the polygon's centroid. Results: The proposed method was applied to 150 different scanned Arabic documents from different sources such as journals, textbooks and newspapers, in addition to handwritten documents, with different resolutions and different fonts, and an accuracy of 87% was obtained. Conclusion: The proposed method was efficient, simple and fast; it was not affected by noise and its suitability for documents with different fonts and documents with different resolutions was demonstrated.

Journal ArticleDOI
TL;DR: The experiments showed that the incorporation of fuzzy logic with swarm intelligence could play an important role in the selection process of the most important sentences to be included in the final summary of automatic text summarization systems.
Abstract: Problem statement: The aim of automatic text summarization systems is to select the most relevant information from an abundance of text sources. The rapid daily growth of data on the Internet makes achieving this aim a big challenge. Approach: In this study, we incorporated fuzzy logic with swarm intelligence, so that risks, uncertainty, ambiguity and imprecise values in choosing the feature weights (scores) could be flexibly tolerated. The weights obtained from the swarm experiment were used to adjust the text feature scores, and the adjusted feature scores were then used as inputs to the fuzzy inference system to produce the final sentence score. The sentences were ranked in descending order based on their scores and the top n sentences were selected as the final summary. Results: The experiments showed that the incorporation of fuzzy logic with swarm intelligence could play an important role in selecting the most important sentences to be included in the final summary. The results also showed that the proposed method performed well, outperforming the swarm model and the benchmark methods. Conclusion: Incorporating more than one technique for sentence scoring proved to be an effective mechanism. PSO was employed to produce the text feature weights, the purpose being to treat the text features fairly according to their importance and to differentiate between more and less important features. The fuzzy inference system was employed to determine the final sentence score, on which the decision was made whether to include the sentence in the summary or not.
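A toy sketch of the sentence-selection stage: per-sentence feature scores are adjusted with weights standing in for the PSO output and the top-n sentences are kept; the fuzzy inference step is replaced here by a plain weighted sum purely for illustration.

# Toy sketch of sentence scoring and selection; the weights are fixed constants
# standing in for the swarm (PSO) output, and a weighted sum stands in for the
# fuzzy inference system.
import numpy as np

feature_scores = np.array([          # rows: sentences; cols: e.g. title-word,
    [0.9, 0.2, 0.5],                 # sentence-length, keyword features
    [0.1, 0.8, 0.3],
    [0.6, 0.7, 0.9],
    [0.2, 0.1, 0.4],
])
pso_weights = np.array([0.5, 0.2, 0.3])     # assumed output of the swarm stage

final_scores = feature_scores @ pso_weights
top_n = 2
summary_idx = np.argsort(final_scores)[::-1][:top_n]
print("sentences selected for the summary:", sorted(summary_idx.tolist()))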

Journal ArticleDOI
TL;DR: This study describes the concepts, tool and methodology used for the evaluation of different frameworks by the Analytic Hierarchy Process (AHP) and finds that AHP is a fairly good analysis tool.
Abstract: Problem statement: Command, Control, Communications, Computers and Intelligence (C4I) systems provide situational awareness of the operational environment, support decision making and direct the operative environment. These systems have been used by various agencies such as defense, police, investigation, road, rail, airports and oil and gas related departments. However, the increased use of C4I systems has made them more important and attractive, and consequently interest in the design and development of C4I systems has grown among researchers. Many defense industry frameworks are available, but the problem is the suitable selection of a framework for the design and development of a C4I system. Approach: This study described the concepts, tool and methodology used for the evaluation and analysis of different frameworks using the Analytic Hierarchy Process (AHP). Results: We compared different defense industry frameworks, namely the Department of Defense Architecture Framework (DODAF), the Ministry of Defense Architecture Framework (MODAF) and the NATO Architecture Framework (NAF), and found that AHP is a fairly good analysis tool. Conclusion: Different defense industry frameworks such as DODAF, MODAF and NAF were evaluated and compared using AHP.
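An illustrative AHP step in Python: priority weights for three candidate frameworks are derived from a pairwise comparison matrix via the principal eigenvector, with a consistency check; the judgement values are invented for the example, not taken from the study.

# Illustrative AHP priority computation; the pairwise judgements are made up.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # pairwise judgements among e.g. DODAF, MODAF, NAF
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print("priorities:", np.round(w, 3), "consistency ratio:", round(cr, 3))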

Journal ArticleDOI
TL;DR: In order to design the reversible ALU of a crypto-processor, reversible Carry Save Adder using Modified TSG (MTSG) gates and architecture of Montgomery multiplier were proposed and showed that modified designs perform better than the existing ones in terms of number of gates, number of garbage outputs and quantum cost.
Abstract: Problem Statement: Arithmetic Logic Unit (ALU) of a crypto-processor and microchips leak information through power consumption. Although the cryptographic protocols are secured against mathematical attacks, the attackers can break the encryption by measuring the energy consumption. Approach: To thwart attacks, this study proposed the use of reversible logic for designing the ALU of a crypto-processor. Ideally, reversible circuits do not dissipate any energy. If reversible circuits are used, then the attacker would not be able to analyze the power consumption. In order to design the reversible ALU of a crypto-processor, reversible Carry Save Adder (CSA) using Modified TSG (MTSG) gates and architecture of Montgomery multiplier were proposed. For reversible implementation of Montgomery multiplier, efficient reversible multiplexers and sequential circuits such as reversible registers and shift registers were presented. Results: This study showed that modified designs perform better than the existing ones in terms of number of gates, number of garbage outputs and quantum cost. Lower bounds of the proposed designs were established by providing relevant theorems and lemmas. Conclusion: The application of reversible circuit is suitable to the field of hardware cryptography.

Journal ArticleDOI
TL;DR: It is concluded that it is possible to minimize the supply chain cost by maintaining the optimal stock levels predicted from the inventory analysis, which makes inventory management more effective and efficient and thereby enhances customer service levels.
Abstract: Problem statement: Today, inventory management is considered an important field in supply chain management. Once efficient and effective management of inventory is carried out throughout the supply chain, the service provided to the customer is ultimately enhanced. Hence, to ensure minimal cost for the supply chain, determining the level of inventory to be held at various points in the supply chain is unavoidable. Minimizing the total supply chain cost refers to the reduction of holding and shortage cost in the entire supply chain. Efficient inventory management is a complex process which entails managing the inventory across the whole supply chain so that the final solution is an optimal one. In other words, the stock level at each member of the supply chain should correspond to the minimum total supply chain cost. The dynamic nature of the excess stock level and shortage level over all the periods is a serious issue when implementation is considered. In addition, consideration of multiple products leads to a very complex inventory management process, and the complexity of the problem increases when more distribution centers and agents are involved. Approach: In the present research, the issues of inventory management were addressed and a novel approach based on a genetic algorithm was proposed, in which the most probable excess stock level and shortage level required for inventory optimization in the supply chain are distinctively determined so as to achieve the minimum total supply chain cost. Results: The analysis identified the inventory levels that made a remarkable contribution to the increase of the supply chain cost, and with the aid of these levels we predicted the optimal inventory levels for all the supply chain members. Conclusion: We concluded that it is possible to minimize the supply chain cost by maintaining the optimal stock levels predicted from the inventory analysis. This makes inventory management more effective and efficient, thereby enhancing customer service levels.
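A very small genetic-algorithm sketch in the spirit of the approach above: each chromosome is a vector of stock levels, one per supply-chain member, and the fitness is a made-up holding-plus-shortage cost to be minimized; the cost model and parameters are placeholders, not the paper's.

# Toy GA: evolve stock levels that minimize holding + shortage cost (placeholder model).
import numpy as np

rng = np.random.default_rng(1)
demand = np.array([120.0, 80.0, 60.0])            # assumed demand at 3 members
hold_cost, short_cost = 1.0, 4.0

def cost(stock):
    over = np.maximum(stock - demand, 0.0)
    under = np.maximum(demand - stock, 0.0)
    return hold_cost * over.sum() + short_cost * under.sum()

pop = rng.uniform(0, 200, size=(40, 3))           # initial population of stock plans
for _ in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]       # truncation selection
    children = (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)]) / 2
    children += rng.normal(0, 5, size=children.shape)      # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([cost(ind) for ind in pop])]
print("best stock levels:", np.round(best, 1), "cost:", round(cost(best), 1))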

Journal ArticleDOI
TL;DR: The proposed system is based on microcontroller, Bluetooth and Java technology, realizing the idea of an intelligent car that uses a personal mobile phone as a remote interface and is able to control some of the car accessories by mobile phone.
Abstract: Problem statement: Car users expect more and more accessories to be available in their cars, but the available accessories must be managed manually by the driver and are not properly managed by a smart system. All these accessories can be controlled by the user manually using different standalone controllers. Besides, the controllers themselves use RF technology, which does not exist in mobile devices. So there is a lack of a comprehensive and integrated system to manage, control and monitor all the accessories inside the vehicle using a personal mobile phone. The objective is the design and development of an integrated system to manage and control all kinds of in-vehicle accessories, improving the efficiency and functionality of in-vehicle communications for car users. Approach: The proposed system was based on microcontroller, Bluetooth and Java technology in order to achieve the idea of an intelligent car with the ability to use a personal mobile phone as a remote interface. The development strategy for this innovation includes two phases: (1) a Java-based application platform designed and developed for smart phones and PDAs; (2) hardware design and implementation of the receiver side, a compatible smart system managing and interconnecting all inside accessories based on monitoring and controlling mechanisms over Bluetooth. Results: The designed system included hardware and software, and the completed prototype was tested successfully on real vehicles. During the testing stage, the components and devices were connected and implemented on the vehicle, and the user, by installing the system interface on a mobile phone, was able to monitor and manage the vehicle accessories; the efficiency, adaptability and range of functionality of the system were proven with various car accessories. Conclusion: This study involved the design of a new system to decrease the hot temperature inside a car that affects the health of the driver, and the driver is able to control some of the car accessories by mobile phone. Once the car was equipped with the Bluetooth module and control system, the car accessories could be connected with the microcontroller and controlled by the mobile application.

Journal ArticleDOI
TL;DR: The designed algorithms help the proposed system hide and retrieve information (a data file) within unused area 2 of any executable file (exe file), preventing the hidden information from being observed by antivirus systems while the exe file still functions as usual after the hiding process.
Abstract: Problem statement: Executable files are among the most important files in operating systems and in most systems designed by developers (programmers/software engineers); hiding information in these files is the basic goal of this study, because most users of any system cannot alter or modify the content of these files. There are many challenges in hiding data in the unused area 2 within executable files: the dependence of the hidden information's size on the size of the cover file, differences in the size of the file before and after the hiding process, availability of the cover file after the hiding process so that it still performs normally, and detection by antivirus software as a result of changes made to the file. Approach: The system was designed around a release mechanism that consists of two functions: the first is hiding the information in the unused area 2 of the PE file (exe file), through the execution of four processes (specify the cover file, specify the information file, encrypt the information and hide the information); the second function is extraction of the hidden information through three processes (specify the stego file, extract the information and decrypt the information). Results: The programs were coded in the Java computer language and implemented on a Pentium PC. The designed algorithms were intended to help the proposed system hide and retrieve information (a data file) within unused area 2 of any executable file (exe file). Conclusion: The size of the hidden data depends on the size of the unused area 2 within the cover file, which is equal to 20% of the size of the exe file before the hiding process. Most antivirus systems do not allow direct writes into an executable file, so the approach of the proposed system is to prevent the hidden information from being observed by these systems, and the exe file still functions as usual after the hiding process.

Journal ArticleDOI
TL;DR: This study presented the development of an Arabic part-of-speech tagger that can be used for analyzing and annotating traditional Arabic texts, especially the Quran text, by developing and using an appropriate tagger.
Abstract: Problem statement: This study presented the development of an Arabic part-of-speech tagger that can be used for analyzing and annotating traditional Arabic texts, especially the Quran text. Approach: It is part of a project related to the computerization of the Holy Quran. One of the main objectives of this project was to build a textual corpus of the Holy Quran. Results: Since an appropriate textual version of the Holy Quran was prepared and morphologically analyzed in other stages of this project, we focused in this work on its annotation by developing and using an appropriate tagger. The developed tagger employed an approach that combines morphological analysis with Hidden Markov Models (HMMs) based on the Arabic sentence structure. The morphological analysis is used to reduce the size of the tag lexicon by segmenting Arabic words into their prefixes, stems and suffixes; this is due to the fact that Arabic is a derivational language. On the other hand, the HMM is used to represent the Arabic sentence structure in order to take the linguistic combinations into account. For these purposes, an appropriate tagging system has been proposed to represent the main Arabic parts of speech in a hierarchical manner, allowing easy expansion whenever it is needed. Each tag in this system is used to represent a possible state of the HMM and the transitions between tags (states) are governed by the syntax of the sentence. A corpus of traditional texts, extracted from books of the third century (Hijri), was manually morphologically analyzed and tagged using our developed tagset. Conclusion/Recommendations: This corpus was then used for training and testing the model. Experiments conducted on this dataset gave a recognition rate of about 96%, which is very promising given the size of the data tagged and used for training so far. Since our Holy Quran corpus is still under revision, we did not make significant experiments on it. However, preliminary tests conducted on the seven verses of Al-Fatiha showed an encouraging accuracy rate.
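A tiny Viterbi decoding sketch for HMM tagging, with a made-up three-tag model standing in for the hierarchical Arabic tagset; the real transition and emission probabilities come from the annotated corpus, not from these numbers.

# Viterbi decoding over a toy HMM with three placeholder tags.
import numpy as np

tags = ["NOUN", "VERB", "PART"]
start = np.log(np.array([0.5, 0.3, 0.2]))
trans = np.log(np.array([[0.4, 0.4, 0.2],      # P(next tag | current tag)
                         [0.5, 0.2, 0.3],
                         [0.6, 0.3, 0.1]]))
# emission log-probabilities for an example 4-word sentence, one row per word
emit = np.log(np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.1, 0.2, 0.7],
                        [0.7, 0.2, 0.1]]))

n_words, n_tags = emit.shape
v = np.full((n_words, n_tags), -np.inf)
back = np.zeros((n_words, n_tags), dtype=int)
v[0] = start + emit[0]
for t in range(1, n_words):
    for j in range(n_tags):
        scores = v[t - 1] + trans[:, j]
        back[t, j] = np.argmax(scores)
        v[t, j] = scores[back[t, j]] + emit[t, j]

path = [int(np.argmax(v[-1]))]                 # best final tag, then follow backpointers
for t in range(n_words - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
print([tags[i] for i in reversed(path)])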

Journal ArticleDOI
TL;DR: The results of the evaluation of the genetic-based feature construction algorithm showed that the data summarization results can be improved by constructing features using the Cluster Entropy (CE) genetic-based feature construction algorithm.
Abstract: Problem statement: The importance of input representation has been recognized already in machine learning. Feature construction is one of the methods used to generate relevant features for learning data. This study addressed the question whether or not the descriptive accuracy of the DARA algorithm benefits from the feature construction process. In other words, this paper discusses the application of genetic algorithm to optimize the feature construction process to generate input data for the data summarization method called Dynamic Aggregation of Relational Attributes (DARA). Approach: The DARA algorithm was designed to summarize data stored in the non-target tables by clustering them into groups, where multiple records stored in non-target tables correspond to a single record stored in a target table. Here, feature construction methods are applied in order to improve the descriptive accuracy of the DARA algorithm. Since, the study addressed the question whether or not the descriptive accuracy of the DARA algorithm benefits from the feature construction process, the involved task includes solving the problem of constructing a relevant set of features for the DARA algorithm by using a genetic-based algorithm. Results: It is shown in the experimental results that the quality of summarized data is directly influenced by the methods used to create patterns that represent records in the (n×p) TF-IDF weighted frequency matrix. The results of the evaluation of the genetic-based feature construction algorithm showed that the data summarization results can be improved by constructing features by using the Cluster Entropy (CE) genetic-based feature construction algorithm. Conclusion: This study showed that the data summarization results can be improved by constructing features by using the cluster entropy genetic-based feature construction algorithm.
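A sketch of the DARA-style input: the attribute values of non-target records linked to one target record are bagged into a "document", turned into a TF-IDF weighted frequency matrix and clustered; the toy records and the plain k-means step are illustrative, and the genetic feature-construction stage is not reproduced.

# Toy TF-IDF + clustering step standing in for the DARA summarization input.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# each string bags together the attribute values of the records for one target row
record_bags = [
    "red small metal red",
    "red metal small",
    "blue large plastic",
    "blue plastic large blue",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(record_bags)          # the (n x p) weighted frequency matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignment per target record:", labels.tolist())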

Journal ArticleDOI
TL;DR: A very light-weight, robust and reliable model for service discovery in wireless and mobile networks is proposed, taking into account the limited resources to which mobile units are subjected; it significantly reduces the cost of message overhead and gives the best delay values when compared with strategies well known in the literature.
Abstract: Problem statement: In mobile ad hoc networks, devices do not rely on a fixed infrastructure and thus have to be self-organizing. This gives rise to various challenges for network applications. Existing service discovery protocols fall short of accommodating the complexities of the ad hoc environment. The performance of distributed service discovery architectures that rely on a virtual backbone for locating and registering available services appears very promising in terms of average delay, but in terms of message overhead they are the most heavy-weight. In this research we propose a very light-weight, robust and reliable model for service discovery in wireless and mobile networks, taking into account the limited resources to which mobile units are subjected. Approach: Three processes are involved in service discovery protocols using a virtual dynamic backbone for mobile ad hoc networks: registration, discovery and consistency maintenance. More specifically, the model analytically and realistically differentiates stable from unstable nodes in the network in order to form a subset of nodes constituting a relatively stable virtual backbone (BB). Results: Overall, the results acquired were very satisfactory and meet the performance objectives of effectiveness, especially in terms of network load. A notable reduction of almost 80% in message signaling was observed in the network. This criterion distinguishes our proposal and corroborates its light-weight characteristic. On the other hand, the results showed a reasonable mean time delay for the requests initiated by the clients. Conclusion: Extensive simulation results confirm the efficiency and the light-weight characteristic of our approach in significantly reducing the cost of message overhead, in addition to giving the best delay values when compared with strategies well known in the literature.

Journal ArticleDOI
TL;DR: This study presented a set of comments and suggestions to improve the ISO 9126 and highlighted the weaknesses of the cross-references between the two ISO standards.
Abstract: Problem statement: The International Organization for Standardization (ISO) has published a set of international standards related to software engineering, such as ISO 12207 and ISO 9126. However, there is a set of cross-references between these two standards. Approach: ISO 9126 on software product quality and ISO 12207 on software life cycle processes were analysed to investigate the relationships between them and to make a mapping from the ISO 9126 quality characteristics to the ISO 12207 activities and vice versa. Results: This study presented a set of comments and suggestions to improve ISO 9126. Conclusion: The weaknesses of the cross-references between the two ISO standards were highlighted. In addition, this study provided a number of comments and suggestions to be taken into account in the next version of the ISO 9126 international standard.

Journal ArticleDOI
TL;DR: The results indicated that theoretical bases can enhance the efficiency and performance of an automatic programming system, leading to an increase in system productivity and allowing effort to be concentrated on problem specification only.
Abstract: Problem statement: The goal of an automatic programming system is to create, in an automated way, a computer program that enables a computer to solve a problem. It is difficult to build an automatic programming system: such systems require carefully designed specification languages and an intimate knowledge base. The aims were to determine the relevance of mathematical system theory to the problems of automatic programming and to find an automatic programming methodology in which a computer program is evolved to solve a problem using only the problem's input-output specifications. Approach: Problem behavior was described as a finite state automaton based on its meaning, and the problem's input-output specifications were described in a theoretical manner based on its input and output trajectory information; a program was then evolved to solve the problem. Different implementation languages can be used without significantly affecting the existing problem specification. The evolutionary process adapts the ant colony optimization algorithm to find a good finite state automaton that efficiently satisfies the input-output specifications. Results: By moving from state to state, each ant incrementally constructs a sub-solution in an iterative process. The algorithm converged to the optimal final solution by accumulating the most effective sub-solutions; the main difficulty appeared in solving problems with little input-output specification. Fixed and dynamic input-output specifications were used to mimic the chaotic behavior of the real world. Conclusion: These results indicated that theoretical bases can enhance the efficiency and performance of an automatic programming system, leading to an increase in system productivity and allowing effort to be concentrated on problem specification only. The collective behavior emerging from the interaction of the different ants also proved effective in solving problems; finally, for dynamic input-output specifications, chaos theory, especially the "butterfly effect", can be used to control the sensitivity to the initial configuration of the trajectory information.

Journal ArticleDOI
TL;DR: By increasing the value of C (the constant C, the first parameter in the proposed practical approach to edge detection), the approach can be used to obtain the best and clearest edge map, which can be used later in image segmentation and object extraction.
Abstract: Problem statement: A practical approach to detecting an edge map was proposed. Approach: The methodology of this approach was presented and tested in order to select the best value of the operator used to smooth the image and compute the gradient, and the threshold value used to convert the gray gradient to a binary edge map; practical values of the threshold and the edge operator coefficient were investigated, and these values were used to calculate the gradient in order to obtain a better edge map. Results: As the value of C (the constant C, the first parameter in our practical approach to edge detection) increases, the range of t narrows and, at the same time, the value of the suitable t increases toward 1. Conclusion: This approach can be used to obtain the best and clearest edge map, which can be used later in image segmentation and object extraction.
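A small sketch of the smooth / gradient / threshold pipeline discussed above; the smoothing parameter and threshold t below play the role of the operator coefficient and threshold, with values chosen only for illustration.

# Smooth, compute the gradient magnitude, then threshold it into a binary edge map.
import numpy as np
from scipy import ndimage

def edge_map(image: np.ndarray, sigma: float = 1.0, t: float = 0.2) -> np.ndarray:
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    grad = np.hypot(gx, gy)
    grad /= grad.max() if grad.max() > 0 else 1.0   # normalize gradient to [0, 1]
    return (grad >= t).astype(np.uint8)             # binary edge map

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                                # a bright square on a dark background
print("edge pixels found:", int(edge_map(img).sum()))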

Journal ArticleDOI
TL;DR: The proposed system showed more promising results than the individual face or ear biometrics investigated in the experiments and demonstrated that combining face and ear is a good technique because it offers high accuracy and security.
Abstract: Problem statement: This study combined face and ear algorithms as an application of human identification, developing a multimodal biometric system for the detection and identification of human faces and ears using eigenfaces and eigenears. Approach: The proposed system used the extracted face and ear images to develop the respective feature spaces via the PCA algorithm, called eigenfaces and eigenears, respectively. The proposed system showed more promising results than the individual face or ear biometrics investigated in the experiments. Results: The final result was then used to affirm the person as genuine or an impostor. The system was tested on several databases and gave an overall accuracy of 92.24%, with an FAR of 10% and an FRR of 6.1%. Conclusion: The results show that combining face and ear is a good technique because it offers high accuracy and security.
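A minimal eigenface/eigenear-style sketch: separate PCA feature spaces are built for face and ear images, the projections are concatenated and a gallery match is made by nearest neighbour; random arrays stand in for the real image databases.

# PCA feature spaces for two modalities, concatenated for a simple gallery match.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_people = 20
faces = rng.normal(size=(n_people, 32 * 32))      # flattened face images (gallery)
ears = rng.normal(size=(n_people, 16 * 16))       # flattened ear images (gallery)

face_pca = PCA(n_components=10).fit(faces)
ear_pca = PCA(n_components=10).fit(ears)
gallery = np.hstack([face_pca.transform(faces), ear_pca.transform(ears)])

# probe: person 7 with a little noise on both modalities
probe = np.hstack([face_pca.transform(faces[7:8] + 0.1 * rng.normal(size=(1, 1024))),
                   ear_pca.transform(ears[7:8] + 0.1 * rng.normal(size=(1, 256)))])
match = int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))
print("claimed identity:", match)                 # expected: 7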

Journal ArticleDOI
TL;DR: Development of a safety-critical system based on the proposed software safety model significantly enhanced the safe operation of the overall system.
Abstract: Software for safety-critical systems has to deal with the hazards identified by safety analysis in order to make the system safe, risk-free and fail-safe. Software safety is a composite of many factors. Problem statement: Existing software quality models like McCall’s and Boehm’s and ISO 9126 were inadequate in addressing the software safety issues of real time safety-critical embedded systems. At present there does not exist any standard framework that comprehensively addresses the Factors, Criteria and Metrics (FCM) approach of the quality models in respect of software safety. Approach: We proposed a new model for software safety based on the McCall’s software quality model that specifically identifies the criteria corresponding to software safety in safety critical applications. The criteria in the proposed software safety model pertains to system hazard analysis, completeness of requirements, identification of software-related safety-critical requirements, safety-constraints based design, run-time issues management and software safety-critical testing. Results: This model was applied to a prototype safety-critical software-based Railroad Crossing Control System (RCCS). The results showed that all critical operations were safe and risk-free, capable of handling contingency situations. Conclusion: Development of a safety-critical system based on our proposed software safety model significantly enhanced the safe operation of the overall system.

Journal ArticleDOI
TL;DR: The new factor that was considered namely, the job ratio can reduce the job turnaround time by submitting jobs in batches rather than submitting the jobs one by one.
Abstract: Problem statement: Meta-scheduling has become very important due to the increased number of submitted jobs for execution. Approach: We considered the job type in the scheduling decision that was not considered previously. Each job can be categorized into two types namely, data-intensive and computational-intensive in a specific ratio. Job ratio reflected the exact level of the job type in two specific numbers in the form of ratio and was computed to match the appropriate sites for the jobs in order to decrease the job turnaround time. Moreover, the number of jobs in the queue was considered in the batch decision to ensure server-load balancing. Results: The new factor that we considered namely, the job ratio can reduce the job turnaround time by submitting jobs in batches rather than submitting the jobs one by one. Conclusion: Our proposed system can be implemented in any middleware to provide job scheduling service.

Journal ArticleDOI
TL;DR: A framework based on the Hartung technique, which depends on spread spectrum communication in the discrete cosine transform (DCT) domain, is proposed; it is applicable not only to MPEG-2 video, but also to other DCT-coded videos such as MPEG-1, H.261 and H.263.
Abstract: Problem statement: Digital watermarks have become recognized as a solution for protecting the copyright of digital multimedia. For a watermark to be useful, it must be perceptually invisible and robust against any possible attack and image processing by those who seek to pirate the material; researchers have considered various attacks such as JPEG compression, geometric distortions and noising. Approach: We proposed a framework based on the Hartung technique, which depends on spread spectrum communication in the discrete cosine transform (DCT) domain. Results: For the experimental results, various attacks were considered: JPEG compression, geometric distortions and noising. The results showed good performance of the proposed method. Conclusion: The presented technique is applicable not only to MPEG-2 video, but also to other DCT-coded videos such as MPEG-1, H.261 and H.263. Future study could improve the DCT approach and compare it with existing methods, or investigate the discrete wavelet transform, which is relatively new and has useful properties for image processing applications.

Journal ArticleDOI
TL;DR: A new color model for digital images can be used to separate low and high frequencies in the image without losing any information, while decreasing the computational time of various image-processing operations.
Abstract: Problem statement: A new color model for digital images is discussed; this model can be used to separate low and high frequencies in the image without losing any information. Approach: A comparative study between different color models (RGB, HSI) applied to very large microscopic image analysis and the proposed model was presented. Such analysis of different color models is needed in order to carry out successful detection, and therefore classification, of different Regions of Interest (ROIs) within the image. Results: This, in turn, allows both distinguishing possible ROIs and retrieving their proper color for further ROI analysis. Such analysis is not commonly done in many biomedical applications that deal with color images. Another important aspect is the computational cost of the different processing algorithms according to the color model. The proposed model took these aspects into consideration and the experimental results showed the advantages of the proposed model compared with the HSI model, decreasing the computational time of various image-processing operations. Conclusion: The proposed model can be used in different applications such as separating low and high frequencies in the image.

Journal ArticleDOI
TL;DR: The numerical results showed that the number of iterations required by the proposed algorithm to converge was less than for both the standard CG and NN algorithms.
Abstract: Problem statement: The Conjugate Gradient (CG) algorithm, which is usually used for solving nonlinear functions, is presented and combined with the modified Back Propagation (BP) algorithm, yielding a new fast training multilayer algorithm. Approach: This study consisted of determining new search directions by exploiting the information calculated by gradient descent as well as the previous search direction. The proposed algorithm improved the training efficiency of the BP algorithm by adaptively modifying the initial search direction. Results: The performance of the proposed algorithm was demonstrated by comparing it with the Neural Network (NN) algorithm on the chosen test functions. Conclusion: The numerical results showed that the number of iterations required by the proposed algorithm to converge was less than for both the standard CG and NN algorithms. The proposed algorithm improved the training efficiency of BP-NN algorithms by adaptively modifying the initial search direction.
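A sketch of the search-direction idea: the new direction mixes the current gradient with the previous direction (a Polak-Ribiere conjugate-gradient update), shown on a tiny least-squares problem rather than on a full MLP, so it is not the paper's algorithm.

# Conjugate-gradient style updates on a small quadratic (least-squares) loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

H = 2 * X.T @ X / len(y)                 # Hessian of the mean squared error
grad = lambda w: 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
g_prev = grad(w)
d = -g_prev                              # first direction: steepest descent
for _ in range(10):
    if np.linalg.norm(g_prev) < 1e-8:
        break
    alpha = -(g_prev @ d) / (d @ H @ d)  # exact line search for this quadratic loss
    w = w + alpha * d
    g = grad(w)
    beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))   # Polak-Ribiere coefficient
    d = -g + beta * d                    # new direction mixes gradient and old direction
    g_prev = g
print("recovered weights:", np.round(w, 2))   # close to [1, -2, 0.5]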

Journal ArticleDOI
TL;DR: The results proved that even a person with shallow knowledge of both artificial intelligence and laser processing can train the experimental data sets loaded into the GUI, then test and optimize ANFIS variables to make a comparative analysis; the findings are expected to benefit precision machining industries by reducing their down time and cost.
Abstract: Problem statement: The power of Artificial Intelligence (AI) becomes more authoritative when the system is programmed to cater to the needs of complex applications. MATLAB 2007b, integrating an artificial intelligence system with a Graphical User Interface (GUI), has reduced researchers' fear-to-model factor due to unfamiliarity and a phobia of producing program code. Approach: In this study, we present how a GUI was developed in MATLAB to model the laser machining process using an Adaptive Network-based Fuzzy Inference System (ANFIS). The laser cutting machine is widely known for having the largest number of controllable parameters among advanced machine tools, and hence it is more difficult to engineer the process toward the desired responses: surface roughness and kerf width. Mastering both laser processing and ANFIS programming is a difficult task for most researchers, especially for difficult-to-model processes. Therefore, a new approach was ventured, where a GUI was developed in MATLAB integrating the ANFIS variables to model the laser processing phenomenon, so that the numeric and graphical output can be easily printed to interpret the results. Results: To investigate the characteristics and effect of the ANFIS variables, the error was analyzed via the Root Mean Square Error (RMSE) and Average Percentage Error (APE). The RMSE values were then compared among the various trained variables and settings to finalize the best ANFIS predictive model. The results were very promising and proved that even a person with shallow knowledge of both artificial intelligence and laser processing can train the experimental data sets loaded into the GUI, then test and optimize ANFIS variables to make a comparative analysis. Conclusion: The details of the modeling work, with the prediction accuracy according to variable combinations, are presented in another paper. The findings are expected to benefit precision machining industries by reducing their down time and cost compared to the traditional trial-and-error method.
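A quick sketch (in Python rather than the MATLAB GUI the abstract describes) of the two error measures used to compare the trained variants; the arrays are placeholder measured versus predicted response values (e.g., surface roughness), not data from the study.

# RMSE and average percentage error between measured and predicted responses.
import numpy as np

measured = np.array([1.20, 0.95, 1.40, 1.10])
predicted = np.array([1.15, 1.00, 1.35, 1.18])

rmse = np.sqrt(np.mean((measured - predicted) ** 2))
ape = np.mean(np.abs((measured - predicted) / measured)) * 100   # average % error
print(f"RMSE = {rmse:.3f}, APE = {ape:.2f}%")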