
Showing papers in "International Journal of Advancements in Computing Technology in 2011"


Journal ArticleDOI
TL;DR: The characteristics of Mobile Ad hoc Networks (MANETs) and their routing protocols are described, and a mobile ad hoc network consisting of sets of mobile wireless nodes and one fixed wireless server is designed using OPNET Modeler 14.5.
Abstract: This paper first describes the characteristics of Mobile Ad hoc Networks (MANETs) and their routing protocols; second, a mobile ad hoc network (MANET) consisting of sets of mobile wireless nodes (25, 50, 75, and 100) and one fixed wireless server is designed using OPNET Modeler 14.5. The performance of this network under different routing protocols is analyzed using three metrics: delay, network load and throughput. A comparative analysis of these protocols is carried out, and the conclusion shows which routing protocol is the best one for mobile ad hoc networks.

50 citations


Journal ArticleDOI
TL;DR: An improvement is made to the conventional confusion-diffusion architecture by introducing a diffusion mechanism into the confusion stage through a lightweight bit-level permutation algorithm, and hence the overall security of the image cryptosystem is promoted.
Abstract: Confidentiality is an important issue when digital images are transmitted over public networks, and encryption is the most useful technique employed for this purpose. Image encryption differs somewhat from text encryption due to inherent features of images, such as bulk data capacity and high correlation among adjacent pixels, which are generally difficult to handle with conventional algorithms. Recently, chaos-based encryption has suggested a new and efficient way to deal with the intractable problems of fast and highly secure image encryption. This paper proposes a chaos-based cryptosystem with a novel permutation process for secure and efficient image protection. To improve the security of the confusion module, an improvement is made to the conventional confusion-diffusion architecture by introducing a diffusion mechanism into the confusion stage through a lightweight bit-level permutation algorithm. As a result, the diffusion performance is significantly enhanced, and hence the overall security of the image cryptosystem is promoted. Results of various types of analysis indicate that the proposed scheme provides a good balance between security and speed and is hence suitable for real-time secure image communication applications.
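
The confusion stage described above rests on a chaos-driven permutation of pixel positions. As a hedged illustration (not the paper's exact bit-level algorithm), a minimal sketch of a logistic-map-driven pixel permutation, with the key parameters `x0` and `r` chosen here purely for illustration:

```python
def logistic_sequence(x0, r, n):
    # Iterate the logistic map x -> r*x*(1-x); r close to 4 gives chaotic orbits.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_permute(pixels, x0=0.3141, r=3.99):
    # Sorting index positions by the chaotic sequence yields a key-dependent
    # permutation; (x0, r) act as the secret key.
    seq = logistic_sequence(x0, r, len(pixels))
    order = sorted(range(len(pixels)), key=seq.__getitem__)
    return [pixels[i] for i in order], order

def chaotic_unpermute(scrambled, order):
    # Invert the permutation: scrambled[k] came from position order[k].
    restored = [None] * len(scrambled)
    for k, src in enumerate(order):
        restored[src] = scrambled[k]
    return restored
```

A receiver holding the same `(x0, r)` regenerates `order` and inverts the scrambling; the paper's scheme additionally injects diffusion during confusion, which this sketch omits.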

40 citations


Journal ArticleDOI
TL;DR: Theoretical analysis shows that the algorithm based on randomized perturbation techniques and secure multiparty computation not only protects users' privacy but also preserves recommendation accuracy.
Abstract: With the evolution of the Internet, collaborative filtering techniques are becoming increasingly popular in E-commerce recommender systems. Such techniques recommend items to users by employing similar users' preference data. People use recommender systems to cope with information overload. Although collaborative filtering systems are widely used by E-commerce sites, they fail to protect users' privacy. Since many users might decide to give false information because of privacy concerns, collecting high-quality data from users is not an easy task, and collaborative filtering systems using such data might produce inaccurate recommendations. To preserve privacy in collaborative filtering recommender systems, this paper presents a collaborative filtering algorithm based on randomized perturbation techniques and secure multiparty computation. The randomized perturbation techniques are used in the course of user data collection and can still generate recommendations with decent accuracy. Secure multiparty computation protects privacy by distributing user profiles between multiple repositories and exchanging only the subset of profile data that is useful for the recommendation. Theoretical analysis shows that the algorithm not only protects users' privacy but also preserves accuracy.
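
The randomized perturbation idea can be illustrated with a minimal sketch, assuming zero-mean uniform noise is added to each rating before disclosure (the noise distribution and range are illustrative, not taken from the paper):

```python
import random

def perturb_ratings(ratings, noise_range=1.0, seed=7):
    # Each user adds independent zero-mean uniform noise before disclosing
    # a rating, masking the individual value.
    rng = random.Random(seed)
    return [r + rng.uniform(-noise_range, noise_range) for r in ratings]

def estimated_mean(perturbed):
    # Zero-mean noise cancels in aggregate, so the sample mean of the
    # perturbed data still estimates the true mean.
    return sum(perturbed) / len(perturbed)
```

Individual disclosed values are off by up to `noise_range`, yet aggregates used in similarity computations remain close to their true values; this is the accuracy/privacy trade-off the abstract refers to.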

39 citations


Journal ArticleDOI
TL;DR: This paper presents SQL injection attack types as well as current techniques that can detect or prevent these attacks, and evaluates those techniques.
Abstract: SQL injection is a type of attack in which the attacker adds Structured Query Language code to a web form input box to gain access to, or make changes in, data. A SQL injection vulnerability allows an attacker to pass commands directly to a web application's underlying database and destroy functionality or confidentiality. Researchers have proposed different tools to detect and prevent this vulnerability. In this paper we present SQL injection attack types and the current techniques that can detect or prevent these attacks. Finally, we evaluate these techniques.
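
The contrast between string concatenation and parameterized queries, which most surveyed prevention techniques enforce in some form, can be sketched with Python's built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # VULNERABLE: user input is concatenated into the SQL text, so an input
    # like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(conn, name):
    # SAFE: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

With the payload `x' OR '1'='1`, the unsafe version returns every row in the table while the safe version returns none, because the placeholder never re-parses the input as SQL.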

37 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that the improved GPSR protocol performs better when packet delivery rate, average end-to-end delay and average throughput are used as the performance metrics, and is better suited to VANETs in the urban simulation scenario.
Abstract: The rapid movement of vehicles, which results in frequent changes in vehicle position and speed, as well as inaccuracies in predicting drivers' direction at intersections, usually leads to wrong packet-forwarding decisions in VANETs. Consequently, a routing protocol that incorporates vehicle density, moving direction and speed into GPSR when making packet-forwarding decisions is hereby proposed. Key data structures were designed, and MOVE was used to construct a typical grid-map urban simulation scenario. The protocol was simulated using NS-2 under this scenario and compared with the AODV and GPSR protocols. Our experimental results indicate that the improved GPSR protocol performs better when packet delivery rate, average end-to-end delay and average throughput are used as the performance metrics, and is better suited to VANETs in the urban simulation scenario.
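
The forwarding decision the abstract describes (greedy progress toward the destination, adjusted by heading and speed) can be sketched as a weighted score; the weights and the speed normalization below are illustrative assumptions, not the paper's tuned values:

```python
import math

def next_hop(current, dest, neighbors, w_dist=0.6, w_dir=0.3, w_speed=0.1):
    # Pick a forwarding neighbor by a weighted score: progress toward the
    # destination, heading alignment, and a mild penalty for high speed.
    # Each neighbor is (name, (x, y), heading_vector, speed).
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    base = dist(current, dest)
    best, best_score = None, float("-inf")
    for name, pos, heading, speed in neighbors:
        progress = (base - dist(pos, dest)) / base          # closer is better
        to_dest = (dest[0] - pos[0], dest[1] - pos[1])
        norm = math.hypot(*to_dest) * math.hypot(*heading) or 1.0
        align = (to_dest[0] * heading[0] + to_dest[1] * heading[1]) / norm
        score = w_dist * progress + w_dir * align - w_speed * speed / 30.0
        if score > best_score:
            best, best_score = name, score
    return best
```

Plain GPSR uses only the `progress` term; the direction and speed terms are what the improved protocol adds to avoid handing packets to vehicles about to leave the route.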

31 citations


Journal ArticleDOI
TL;DR: In this study, the Software Quality Assurance (SQA) audit technique is applied to determine whether or not the required standards and procedures within the requirements specifications phase are being followed closely.
Abstract: Software Requirements Specifications (SRS), or software requirements, are basically an organization's interpretation of a customer's system requirements and dependencies at a given point in time. Good-quality SRS leads to a good-quality software product, and it is widely known that companies pay much less to fix problems or defects found very early in any software development life cycle (SDLC). In this study, the Software Quality Assurance (SQA) audit technique is applied to determine whether or not the required standards and procedures within the requirements specifications phase are being followed closely. The proposed online SRS quality analysis system ensures that software requirements are, among other things, complete, consistent, correct, modifiable, ranked, traceable, unambiguous, and understandable. The system interacts with the developer through a series of question-and-answer sessions and requests the developer to go through a checklist that corresponds to the list of desirable characteristics for SRS. The Case-Based Reasoning (CBR) technique is used to evaluate the requirements quality by referring to previously stored software requirements quality analysis cases (past experiences). CBR is an AI technique that reasons by remembering previously experienced cases; it helps make the SRS quality analysis process more efficient. An executable prototype is developed to demonstrate several selected features and results of the proposed SRS quality analysis system.
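
The CBR retrieval step can be sketched as nearest-neighbor matching over checklist answers; the similarity measure and case fields below are illustrative, not the system's actual representation:

```python
def checklist_similarity(a, b):
    # Fraction of checklist answers on which two cases agree.
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def retrieve_most_similar(new_case, case_base):
    # Nearest-neighbor retrieval: reuse the stored quality verdict of the
    # most similar past analysis case.
    return max(case_base,
               key=lambda past: checklist_similarity(new_case, past["answers"]))
```

In a full CBR cycle the retrieved verdict would then be adapted to the new SRS and the revised case stored back into the case base.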

27 citations



Journal ArticleDOI
TL;DR: In this article, an evaluation of wireless LAN performance is proposed to address the main parameters that act as WLAN performance indicators, and methods to improve WLAN connection performance are then proposed, using OPNET Modeler 9.1, an advanced network simulator.
Abstract: A Wireless Local Area Network (WLAN) is a type of Local Area Network (LAN) that uses high-frequency radio waves rather than wires to communicate and transmit data [1]. It is a flexible data communication system implemented as an extension to, or as an alternative for, a wired LAN. It gives large companies the option to connect their current wired networks to the wireless network without any problems and gives users the option to use any kind of application regardless of its source or vendor [2]. Data sharing is sometimes slow, and WLAN performance is a key factor in the spread and usage of such technologies; hence it is necessary to identify the quality of the wireless connection and how to improve communication through the WLAN. Therefore, this paper evaluates and improves wireless LAN performance by addressing the main parameters that act as WLAN performance indicators and then proposing methods to improve WLAN connection performance, using OPNET Modeler 9.1, an advanced network simulator.

26 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed smart home system based on ZigBee technology is reliable enough for practical application.
Abstract: In this paper, current technologies that can be used in the wireless home networking area are reviewed and compared. A solution for a smart home system based on ZigBee technology is then put forward and discussed. The hardware design of the ZigBee wireless sensor network, based on the CC2430 chip, and of the embedded home gateway, based on the S3C2440, is also given in detail. The software workflows of the coordinator and router are discussed, and the user-defined frame format is further described. Experimental results show that the smart home solution mentioned above can reliably serve practical applications.

26 citations


Journal ArticleDOI
TL;DR: The proposed framework not only solves the data fusion and storage problems in transport priority schemes and the emergency response scheme, but also has higher reliability and flexibility compared with the traditional city intelligent transportation system.
Abstract: Due to the development and extension of cities, we propose an extended city intelligent transportation system based on new concepts: transport priority schemes and an emergency response scheme. In this paper, we propose an extended collaborative traffic information collection, fusion and storage framework for a city intelligent transportation system based on wireless sensor technology. The proposed framework not only solves the data fusion and storage problems in transport priority schemes and the emergency response scheme, but also offers higher reliability and flexibility compared with the traditional city intelligent transportation system.

23 citations


Journal ArticleDOI
TL;DR: The advantages of the proposed P2P-based RFID reader system are: 1) extended reading distance; 2) reduced reading collisions; and 3) improved reading speed.
Abstract: With the development of the Internet of Things (IoT), Radio Frequency Identification (RFID) has become a hot topic in recent years. However, the traditional RFID reader system has some drawbacks: the effective reading distance of the reader is very small, and the reading speed of the reader is very slow. In this paper, we propose a novel RFID reader system framework based on a Peer-to-Peer (P2P) network. A P2P overlay network is a distributed system by nature, without any hierarchical organization or centralized control. In our proposed system, every reader is a peer in the P2P network. The advantages of the proposed P2P-based RFID reader system are: 1) it extends the reading distance; 2) it decreases reading collisions; and 3) it improves the reading speed.


Journal ArticleDOI
TL;DR: Non-dominated sorting genetic algorithms, popular adaptations of the simple GA, are used to solve the multi-objective optimization problems that arise in robust control design.
Abstract: A genetic algorithm (GA) for the class of multi-objective optimization problems that appears in the design of robust controllers is presented in this paper. The design of a robust controller is a trade-off among competing objectives such as disturbance rejection, reference tracking, stability against un-modeled dynamics, moderate control effort and so on. However, general methodologies for solving this class of design problems are not easily encountered in the literature because of the complexity of the resulting multi-objective problems. In this paper, popular adaptations of the simple GA, non-dominated sorting genetic algorithms, are used to solve robust control design problems. The structure and operators of this algorithm have been developed specifically for control design problems. The performance of the algorithm is evaluated by solving several test cases and is also compared with the standard algorithms used for the multi-objective design of robust controllers.
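
The core of non-dominated sorting is the Pareto dominance test used to rank candidate controllers; a minimal sketch, assuming all objectives are minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives minimized).
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_front(points):
    # First front of non-dominated sorting: the points that no other
    # point in the population dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In an NSGA-style algorithm the population is repeatedly partitioned into such fronts, and selection pressure favors earlier fronts, so the search converges toward the trade-off surface between, e.g., disturbance rejection and control effort.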

Journal ArticleDOI
TL;DR: The numerical results and statistical analysis show that the proposed approach is capable of selecting a subset of predictive genes from a large noisy data set, and can capture the correlated structure in the data.
Abstract: Microarray data are highly redundant and noisy, and most genes are believed to be uninformative with respect to the studied classes, as only a fraction of genes may present distinct profiles for different classes of samples. This paper proposes a novel hybrid framework (NHF) for the classification of high-dimensional microarray data, which combines information gain (IG), F-score, a genetic algorithm (GA), particle swarm optimization (PSO) and support vector machines (SVM). In order to identify a subset of informative genes embedded in a large dataset contaminated with high-dimensional noise, the proposed method is divided into three stages. In the first stage, IG is used to construct a ranking list of features, and only the top 10% of features on the ranking list are provided to the second stage. In the second stage, PSO performs the feature selection task in combination with SVM, with the F-score forming part of the PSO objective function. The feature subsets are filtered according to the ranking list from the first stage, and the results are supplied to the initialization of the GA; both the SVM parameter optimization and the feature selection are dynamically executed by PSO. In the third stage, the GA initializes its population from the results of the second stage, and an optimal feature selection result is obtained using the GA integrated with SVM; both the SVM parameter optimization and the feature selection are dynamically performed by the GA. The performance of the proposed method was compared with that of PSO-based, GA-based, ant colony optimization (ACO)-based and simulated annealing (SA)-based methods on five benchmark data sets: leukemia, colon, breast cancer, lung carcinoma and brain cancer. The numerical results and statistical analysis show that the proposed approach is capable of selecting a subset of predictive genes from a large noisy data set and can capture the correlated structure in the data. In addition, NHF performs significantly better than the other methods in terms of prediction accuracy with a smaller subset of features.
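
The first-stage ranking follows the standard information-gain definition IG(F) = H(Y) - H(Y|F); a minimal sketch for discrete features:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy H(Y) of a label sequence, in bits.
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    # IG(F) = H(Y) - sum over feature values v of P(F=v) * H(Y | F=v).
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond
```

Genes are sorted by this score and only the top 10% survive to the PSO stage; in practice continuous expression values would first be discretized, a step omitted here.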

Journal ArticleDOI
TL;DR: The results show that some preprocessing methods are stable and relatively effective, while the extended discernibility matrix method is not very effective in dealing with incomplete data.
Abstract: Rough set based rule induction approaches have been studied intensively during the past few years. However, the classical rough set model cannot deal with incomplete data sets. There are two main categories of approaches to this problem: preprocessing methods and extensions of the rough set model. This paper focuses on the comparison of three strategies for dealing with incomplete data, comprising three preprocessing methods and one extended discernibility matrix method. These methods differ only in how they build the discernibility matrix; they share the same rule induction method. The results show that some preprocessing methods are stable and relatively effective, while the extended discernibility matrix method is not very effective in dealing with incomplete data.
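
One common way extended rough set models handle missing values when building the discernibility matrix is a tolerance relation in which a missing value is considered compatible with any value; a minimal sketch (the `*` marker for a missing attribute is illustrative):

```python
def tolerant_match(a, b, missing="*"):
    # Tolerance relation: two objects are indiscernible if every attribute
    # pair agrees or at least one side is missing.
    return all(x == y or x == missing or y == missing for x, y in zip(a, b))
```

Under this relation, an object with missing attributes falls into the tolerance class of every object it could possibly equal, which is exactly what makes the extended matrix more permissive than the preprocessing approaches compared in the paper.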




Journal ArticleDOI
TL;DR: This paper proposes Internet traffic classification based on fuzzy kernel K-Means clustering, which removes the clustering algorithm's dependence on the form of the sample distribution and can classify Internet traffic with high accuracy.
Abstract: Internet traffic classification based on flow statistics using machine learning methods has attracted great attention. To overcome the drawback that the fuzzy K-Means clustering algorithm cannot meet the requirements of Internet traffic classification, we propose Internet traffic classification based on fuzzy kernel K-Means clustering. This method removes the clustering algorithm's dependence on the form of the sample distribution. Experimental results illustrate that this method can eliminate the influence of the shape of the sample space on clustering accuracy and can classify Internet traffic with high accuracy.
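
The kernel trick this method relies on replaces the Euclidean distance in the fuzzy membership update with a distance computed in the implicit feature space; a minimal sketch using an RBF kernel (the kernel choice and parameters are illustrative):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_distance_sq(x, c, gamma=0.5):
    # Squared distance in the implicit feature space:
    # ||phi(x)-phi(c)||^2 = K(x,x) + K(c,c) - 2K(x,c) = 2 - 2K(x,c) for RBF.
    return 2.0 - 2.0 * rbf_kernel(x, c, gamma)

def fuzzy_memberships(x, centers, m=2.0, gamma=0.5):
    # Standard fuzzy c-means membership update, with the kernel distance
    # replacing the Euclidean distance.
    d = [max(kernel_distance_sq(x, c, gamma), 1e-12) for c in centers]
    return [1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1.0))
                      for j in range(len(centers)))
            for i in range(len(centers))]
```

Because the kernel distance is nonlinear in the input space, clusters no longer have to be spherical, which is how the method escapes the sample-distribution dependence of plain fuzzy K-Means.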

Journal ArticleDOI
TL;DR: This paper investigates the finite-time control problem for networked control systems (NCSs) with network-induced delay, with a particular linear transformation introduced to convert the original time-delay system into a delay-free form.
Abstract: This paper investigates the finite-time control problem for networked control systems (NCSs) with network-induced delay. A particular linear transformation is introduced to convert the original time-delay system into a delay-free form. Based on finite-time stability theory combined with linear matrix inequality (LMI) techniques, a sufficient condition that ensures finite-time performance of networked control systems is derived. A numerical example illustrates the feasibility of the developed approaches.

Journal ArticleDOI
TL;DR: The system deploys Citrix XenApp application servers to achieve application virtualization, and a distributed software-resource-sharing platform aimed mainly at small and medium-sized companies has been developed.
Abstract: The concept of cloud manufacturing is introduced and the importance of software resource sharing is elaborated. In a cloud manufacturing system, a server-based architecture is the best way to address the application-software resource requirements of small and medium-sized enterprises (SMEs), and mature virtualization technology provides strong technical support for it. The system deploys Citrix XenApp application servers to achieve application virtualization. On this basis, a distributed software-resource-sharing platform aimed mainly at small and medium-sized companies has been developed.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new, fast algorithm, a Boolean Algebra Compress technique for association rule mining (B-Compress), which adopts three major ideas: first, compress the data; second, greatly reduce the number of database scans; third, reduce the file size.
Abstract: Data mining refers to extracting, or "mining," knowledge from large amounts of data, and association rule mining is one technique for knowledge discovery. Association rule learning is a popular and well-researched method for discovering interesting relations between variables in large databases, and one of the most famous association rule learning algorithms is Apriori. The drawback of the Apriori algorithm is that the database must be scanned once for each round of candidate generation. Many research papers have been published trying to reduce the amount of time needed to read data from the database. In this paper, we propose a new, fast algorithm: a Boolean Algebra Compress technique for association rule mining (B-Compress). This algorithm adopts three major ideas: first, compress the data; second, greatly reduce the number of database scans; third, reduce the file size. The B-Compress construction method achieves roughly ten times better mining efficiency in execution time than the Apriori algorithm.
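
The compression idea can be illustrated with item bitmaps: once each item's occurrences are packed into one integer, the support of any candidate itemset is a bitwise AND plus a popcount, with no further database scans. A hedged sketch of this general Boolean-bitmap technique (not necessarily B-Compress's exact encoding):

```python
def to_bitmaps(transactions, items):
    # One integer bitmap per item: bit t is set iff transaction t
    # contains the item. This is the single pass over the raw database.
    bitmaps = {}
    for item in items:
        bits = 0
        for t, transaction in enumerate(transactions):
            if item in transaction:
                bits |= 1 << t
        bitmaps[item] = bits
    return bitmaps

def support(bitmaps, itemset, n_transactions):
    # Support of an itemset = popcount of the AND of its item bitmaps;
    # candidate counting never rescans the database.
    bits = ~0
    for item in itemset:
        bits &= bitmaps[item]
    mask = (1 << n_transactions) - 1
    return bin(bits & mask).count("1") / n_transactions
```

Apriori would reread the transactions for every candidate generation round; here each round is pure bit arithmetic on the compressed representation.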

Journal ArticleDOI
TL;DR: A new object-oriented method for classifying high-spatial-resolution remote sensing images is proposed, including spectral statistics, image segmentation, feature space optimization and fuzzy classification.
Abstract: Urban areas have rich spectral characteristics in high-spatial-resolution remote sensing images, and object-oriented technology has extensive application in thematic information extraction and image classification. Building on the extraction characteristics of object-oriented technology, this paper proposes a new object-oriented method for classifying high-spatial-resolution remote sensing images, including spectral statistics, image segmentation, feature space optimization and fuzzy classification. Taking a SPOT5 image and a Google Earth image as examples, the classification result was tested. The experimental results show that the object-oriented method offers high precision, few misclassifications and good-quality classified images.


Journal ArticleDOI
TL;DR: A survey on Fuzzy Logic applications for Knowledge Discovery (KD), focusing on Information Retrieval (IR) and Information Extraction (IE), which introduces a wide variety of FL applications using the two main existing approaches: applications based on the Vector Space Model (VSM) and applications based on ontologies.
Abstract: In this paper, we present a survey on Fuzzy Logic (FL) applications for Knowledge Discovery (KD), focusing on Information Retrieval (IR) and Information Extraction (IE). KD has been widely used for the search of information in vague, imprecise and noisy environments, and Computational Intelligence, mainly Fuzzy Logic, emerges as an ideal tool for IR and IE systems. We introduce a wide variety of FL applications, using the two main existing approaches: applications based on the Vector Space Model (VSM) and applications based on ontologies. For VSM applications, we split the applications into those related to queries, clustering, user profiles and hierarchical relations. We also consider ontology applications, later focusing on the semantic web. All these applications are, however, closely related.

Journal ArticleDOI
TL;DR: The physical limitations of the BFWA channels over different values of M and the performance difference between the two types of detection (coherent and non-coherent) are shown.
Abstract: In this paper we examine broadband fixed wireless access (BFWA) systems with coherent and non-coherent detection using M-ary PSK with Gray coding. A closed form for the exact symbol error rate and bit error rate (BER) of M-ary PSK is presented. We show through analysis the physical limitations of the BFWA channels over different values of M (2, 4, 8, 16, 32) and the performance difference between the two types of detection (coherent and non-coherent).
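
For reference, the standard textbook expressions for coherent M-ary PSK over an AWGN channel (these are the widely known forms, not necessarily the exact closed form derived in the paper):

```latex
P_b = Q\left(\sqrt{2\gamma_b}\right) \quad (M = 2,\ \text{exact}),
\qquad
P_s \approx 2\,Q\left(\sqrt{2\gamma_s}\,\sin\frac{\pi}{M}\right) \quad (M > 2),
\qquad
P_b \approx \frac{P_s}{\log_2 M} \quad \text{(Gray coding)}
```

where $\gamma_s$ and $\gamma_b$ are the per-symbol and per-bit SNR and $Q(\cdot)$ is the Gaussian tail function. The Gray-coding relation holds because adjacent constellation points, the dominant error events, differ in only one bit.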

Journal ArticleDOI
TL;DR: A hybrid particle swarm algorithm is proposed to minimize the makespan of the job-shop scheduling problem, a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem.
Abstract: In this paper, a hybrid particle swarm algorithm is proposed to minimize the makespan of the job-shop scheduling problem, a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. The new algorithm is based on the principle of particle swarm optimization (PSO). As an evolutionary algorithm, PSO combines a coarse global search capability (through neighboring experience) with local search ability. Simulated annealing (SA), as a neighborhood search algorithm, has strong local search ability and can accept worse solutions with a certain probability to avoid becoming trapped in a local optimum. Three neighborhood SA algorithms are designed and combined with PSO (the combination is called HPSO): for each best solution a particle finds, SA is performed on it to find its best neighboring solution. The effectiveness and efficiency of HPSO are demonstrated by applying it to 43 benchmark job-shop scheduling problems. Comparison with other researchers' results indicates that HPSO is a viable and effective approach for the job-shop scheduling problem.
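
The probabilistic escape from local optima that the abstract attributes to SA is the Metropolis acceptance rule; a minimal sketch, where `delta` is the makespan increase of the neighboring schedule:

```python
import math
import random

def sa_accept(delta, temperature, rng):
    # Metropolis rule: always accept improvements; accept a worse neighbor
    # with probability exp(-delta/T), which lets the search escape
    # local optima while the temperature is still high.
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

In HPSO this rule would be applied to neighbors of each particle's best schedule, with the temperature lowered over iterations so the search gradually becomes purely greedy.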

Journal ArticleDOI
TL;DR: The proposed scheme has better adaptability, stronger robustness and set-point tracking performance for the complex and nonlinear time-varying greenhouse climate control system, and may provide a valuable reference to formulate environmental control strategies for actual application in greenhouse production.
Abstract: This paper presents a model reference adaptive PD control scheme based on an RBF neural network for the greenhouse climate control problem. A model of nonlinear conservation laws of enthalpy and matter between the numerous system variables affecting the greenhouse climate is used to validate the proposed control scheme. Compared with the conventional adaptive PD control scheme based on an RBF neural network, the proposed scheme has better adaptability, stronger robustness and set-point tracking performance for the complex, nonlinear, time-varying greenhouse climate control system, and it may provide a valuable reference for formulating environmental control strategies for actual application in greenhouse production.
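
The adaptive term supplied by the RBF network is a weighted sum of Gaussian basis functions; a minimal sketch of the forward pass (the centers, widths and weights are illustrative; in the scheme they are adapted online):

```python
import math

def rbf_output(x, centers, widths, weights):
    # y = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2)):
    # the network output added to the PD law as the adaptive term.
    y = 0.0
    for c, s, w in zip(centers, widths, weights):
        dist_sq = sum((a - b) ** 2 for a, b in zip(x, c))
        y += w * math.exp(-dist_sq / (2.0 * s * s))
    return y
```

The adaptation law (not sketched here) adjusts `weights` from the model-reference tracking error, which is what gives the scheme its robustness to the time-varying greenhouse dynamics.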

Journal ArticleDOI
TL;DR: An efficient distributed clustering algorithm based on FCM is presented, incorporating neighborhood sensor spatial information into the FCM algorithm (FCMS) to exploit the fact that sensor data correlation increases with decreasing spatial separation.
Abstract: In practical applications, wireless sensor networks (WSNs) generate massive data streams containing spatial information and sensor measurements; moreover, the energy source of the sensors is usually limited. Therefore, minimizing sensor energy expenditure and consequently extending the network lifetime is the major challenge in WSNs. This paper presents an efficient distributed clustering algorithm based on FCM, incorporating neighborhood sensor spatial information into the FCM algorithm (FCMS) to exploit the fact that sensor data correlation increases with decreasing spatial separation. FCMS overcomes the disadvantages of the known fuzzy c-means algorithms and at the same time enhances clustering performance. Its major characteristic is the use of a fuzzy local (both spatial and sensing-measurement) similarity measure, aiming to partition the sensor data into a set of spatial regions with similar sensing measurements. FCMS is initialized by the subtractive clustering algorithm, from which the number of clusters and the cluster centers are passed to the FCMS algorithm. The distributed FCMS (DFCMS) forms clusters of sensor nodes sensing similar values and transmits features from the created local clusters, as opposed to raw data per sensor node as in a central clustering algorithm. Thus, DFCMS can significantly reduce the number of transmissions, which results in energy savings. Simulations reveal that the DFCMS algorithm is effective and efficient, using significantly less energy than the central FCM algorithm.
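
Two ideas from the abstract can be sketched directly: a fused dissimilarity mixing spatial separation with sensed-value difference, and transmitting a compact cluster feature instead of raw per-node readings (the field names and the mixing weight `alpha` are illustrative assumptions):

```python
def combined_dissimilarity(node_a, node_b, alpha=0.5):
    # Fused measure: a convex mix of spatial separation and sensed-value
    # difference, reflecting that nearby sensors tend to report similar values.
    (xa, ya, va), (xb, yb, vb) = node_a, node_b
    spatial = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    sensing = abs(va - vb)
    return alpha * spatial + (1 - alpha) * sensing

def cluster_summary(readings):
    # Transmit a compact feature (count, mean, min, max) per local cluster
    # instead of raw per-node readings: fewer radio transmissions, so
    # lower energy expenditure.
    n = len(readings)
    return {"count": n, "mean": sum(readings) / n,
            "min": min(readings), "max": max(readings)}
```

A cluster head would aggregate its members' readings with `cluster_summary` and forward only that record, which is the transmission reduction the simulations credit for DFCMS's energy savings.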