
Showing papers in "International Review on Computers and Software in 2013"


Journal ArticleDOI
TL;DR: This paper proposes algorithms to perform load balancing and admission control using multipath routing in Wireless Mesh Networks (WMNs) and describes how this process is triggered upon the arrival of a new traffic request.
Abstract: In wireless mesh networks, admission control and congestion mitigation are essential techniques alongside routing. Multipath routing can be used for efficiently allocating resources and balancing the load. In this paper, we propose algorithms to perform load balancing and admission control using multipath routing in Wireless Mesh Networks (WMNs). The admission control process is triggered upon the arrival of a new traffic request, which is accepted or rejected by comparing the requested bandwidth with the available bandwidth. When congestion is observed on any of the links along the primary path, load balancing is performed and the traffic is distributed along the least loaded alternate paths.
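
A minimal sketch of the bandwidth-comparison admission test and the congestion-triggered redistribution described above, not the authors' implementation; the path dictionaries, field names and the 0.9 congestion threshold are illustrative assumptions.

```python
# Hypothetical sketch: admission control and load balancing over multiple paths,
# assuming each path exposes an available-bandwidth estimate and per-link utilization.

def admit(requested_bw, paths):
    """Accept the flow if some path can carry the requested bandwidth."""
    feasible = [p for p in paths if p["available_bw"] >= requested_bw]
    if not feasible:
        return None  # reject: no path has enough residual bandwidth
    # Prefer the least loaded feasible path as the primary path.
    return max(feasible, key=lambda p: p["available_bw"])

def rebalance(primary, alternates, congestion_threshold=0.9):
    """If any link on the primary path is congested, shift traffic to the
    least loaded alternate paths."""
    if max(primary["link_utilization"]) < congestion_threshold:
        return [primary]  # no congestion: keep single-path forwarding
    # Distribute traffic along alternates ordered by residual bandwidth.
    return sorted(alternates, key=lambda p: p["available_bw"], reverse=True)

paths = [
    {"id": 1, "available_bw": 4.0, "link_utilization": [0.35, 0.60]},
    {"id": 2, "available_bw": 6.5, "link_utilization": [0.95, 0.50]},
]
primary = admit(3.0, paths)
alternates = [p for p in paths if p is not primary]
print("primary path:", primary["id"])
print("paths after rebalancing:", [p["id"] for p in rebalance(primary, alternates)])
```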

38 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to present a new mining technique that uses fuzzy multiple regression analysis with fuzzy concepts to manage the risks in a software project and to reduce risk through software process improvement.
Abstract: Regardless of how much effort we put into the success of software projects, many software projects still have a very high failure rate. Risk is not always avoidable, but it is controllable. The aim of this paper is to present a new mining technique that uses fuzzy multiple regression analysis with fuzzy concepts to manage the risks in a software project and to reduce risk through software process improvement. The top ten software risk factors in the design phase and thirty risk management techniques were presented to respondents. The results show that all risks in software projects were important from the software project manager's perspective, whereas all risk management techniques are used most of the time, and often. These mining tests were performed using fuzzy multiple regression analysis to compare the risk management techniques against each software risk factor and determine whether they are effective in mitigating its occurrence, using the Statistical Package for the Social Sciences (SPSS), MATLAB 7.12.0 (R2011a) and Wolfram Mathematica 9.0 for manipulating and analyzing the data set. All of the top ten software risk factors were mitigated by risk management techniques except Risk 3, "Developing the Wrong User Interface". The risk management techniques that mitigate each software risk factor are summarized in Table XV. The study has been conducted on a group of software project managers. Successful project risk management will greatly improve the probability of project success.

18 citations


Journal ArticleDOI
TL;DR: This research concerns the application of a vision algorithm to identify the operations of a system in order to control the decision making concerning job and work-piece recognition that is to be made during system operation in real time.
Abstract: This research concerns the application of a vision algorithm to identify the operations of a system in order to control the decision making concerning job and work-piece recognition that is to be made during system operation in real time. The paper stresses the vision algorithm used, which mainly focuses on shape-matching properties to identify defects occurring on the product. A new supervised defect detection approach to detect a class of defects in a gluing application is proposed. The creation of a region of interest over the important region of the object is discussed. Gaussian smoothing for improved image processing and template matching for differentiating between the reference and tested images are proposed. This scheme provides high computational savings and results in a high defect detection recognition rate. The defects are broadly classified into three classes: 1) gap defect; 2) bumper defect; 3) bubble defect. Each detected defect provides information on height (z-coordinate), length (y-coordinate) and width (x-coordinate). This information is gathered from the proposed two-camera vision system for conducting the 3D transformation. The gathered information is used in a new correction technique known as Correction of Defect (CoD), where a rejected object is altered to reduce the number of rejected objects produced by the system.
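
A rough sketch of the Gaussian-smoothing and template-matching step using OpenCV; the file names, kernel size and the 0.8 similarity threshold are assumptions rather than values from the paper.

```python
# Illustrative sketch: compare a reference (defect-free) glue-bead template
# against a test ROI using normalized cross-correlation after smoothing.
import cv2

# The template must be no larger than the test ROI for matchTemplate to work.
reference = cv2.imread("reference_roi.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test_roi.png", cv2.IMREAD_GRAYSCALE)

# Gaussian smoothing reduces sensor noise before matching.
reference = cv2.GaussianBlur(reference, (5, 5), 0)
test = cv2.GaussianBlur(test, (5, 5), 0)

# Normalized cross-correlation between the reference shape and the test ROI.
result = cv2.matchTemplate(test, reference, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

# A low correlation peak suggests the tested region deviates from the
# reference shape, i.e. a potential gap/bumper/bubble defect.
print("match score:", max_score, "at", max_loc)
print("defect suspected" if max_score < 0.8 else "region looks nominal")
```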

16 citations


Journal ArticleDOI
TL;DR: This paper analyzes a new password-based user authentication scheme in hierarchical wireless sensor networks, shows that it has some pitfalls and can be made more efficient by optimizing the authentication process, and then proposes an enhanced scheme to overcome the inherent weaknesses.
Abstract: Recently, Das et al. proposed a new password-based user authentication scheme in hierarchical wireless sensor networks and claimed that the scheme is secure against different kinds of attacks. However, in this paper, we analyze their scheme and show that it has some pitfalls and can be made more efficient by optimizing the authentication process. We then propose an enhanced scheme to overcome the inherent weaknesses. The enhanced scheme is more secure and efficient in WSNs.

16 citations


Journal ArticleDOI
TL;DR: A fault detection scheme, based on the information redundancy, for the AES, that allows a trade-off between the hardware overhead and the security of the AES and is implemented on Xilinx Virtex-5 FPGA.
Abstract: Fault injection attacks are powerful cryptanalysis techniques against the Advanced Encryption Standard (AES) algorithm. These attacks are based on injecting faults into the structure of the AES to obtain confidential information. To protect the AES implementation against these attacks, a number of countermeasures have been proposed. In this paper, we propose a fault detection scheme, based on information redundancy, for the AES. We discuss the strengths and the weaknesses of this scheme against fault attacks. Moreover, we conduct a comparative study of fault detection schemes from the literature in terms of fault detection capabilities and implementation cost. The simulation results show that the fault coverage reaches 99.998% for the proposed scheme. Moreover, the proposed detection scheme has been implemented on a Xilinx Virtex-5 FPGA. Its fault coverage, area overhead, throughput and frequency degradation have been compared, and it is shown that the proposed scheme allows a trade-off between the hardware overhead and the security of the AES.
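
As a toy illustration of the information-redundancy idea, applied here only to the linear AddRoundKey step and not the full scheme evaluated in the paper, a predicted parity can be checked against the parity of the computed output:

```python
# Toy sketch: for AddRoundKey, parity(state XOR key) = parity(state) XOR parity(key),
# so a mismatch between the predicted and observed parity signals an injected fault.
# This is a didactic fragment only, not the scheme from the paper.

def parity(byte: int) -> int:
    return bin(byte).count("1") & 1

def add_round_key_with_check(state, key, inject_fault=False):
    predicted = [parity(s) ^ parity(k) for s, k in zip(state, key)]
    out = [s ^ k for s, k in zip(state, key)]
    if inject_fault:
        out[0] ^= 0x01  # flip one bit to emulate a fault injection
    observed = [parity(b) for b in out]
    return out, predicted != observed  # True means a fault was detected

state = [0x32, 0x88, 0x31, 0xE0]
key = [0x2B, 0x28, 0xAB, 0x09]
_, detected_clean = add_round_key_with_check(state, key)
_, detected_faulty = add_round_key_with_check(state, key, inject_fault=True)
print("fault detected (clean run):", detected_clean)    # False
print("fault detected (faulted run):", detected_faulty)  # True
```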

14 citations


Journal ArticleDOI
TL;DR: An optimal radius algorithm and a hybrid Particle Swarm Optimization (PSO) algorithm for wireless sensor networks; simulation results show that the proposed algorithm extends the lifetime of the network by reducing the number of dead nodes when compared to the basic PSO and LEACH algorithms.
Abstract: A wireless sensor network (WSN) consists of sensor nodes which are spatially distributed for monitoring physical or environmental applications. In these networks the nodes have a power source that is difficult to replace, so energy conservation is an important factor in prolonging the lifetime of the network. In this paper two clustering-based algorithms for WSNs, a heuristic and an evolutionary algorithm, are discussed. This paper proposes an optimal radius algorithm and a hybrid Particle Swarm Optimization (PSO) algorithm for wireless sensor networks. These proposed algorithms select the cluster head so as to increase the lifetime of the WSN. The simulation results show that the proposed algorithm extends the lifetime of the network by reducing the number of dead nodes when compared to the basic PSO and LEACH algorithms. It also has better throughput and higher residual energy when compared to LEACH and PSO.
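
A generic PSO sketch for cluster-head placement, assuming a fitness that mixes member-to-head distance with residual energy; the constants, node model and fitness weighting are illustrative and not the paper's exact formulation.

```python
# Hypothetical sketch: each particle encodes the 2-D positions of K tentative
# cluster heads; the fitness penalizes long member-to-head distances and
# low residual energy at the node nearest each tentative head.
import random, math

nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0)} for _ in range(50)]
K, SWARM, ITERS = 3, 20, 60

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fitness(heads):
    avg_d = sum(min(dist(n["pos"], h) for h in heads) for n in nodes) / len(nodes)
    energy_pen = sum(1.0 / min(nodes, key=lambda n: dist(n["pos"], h))["energy"]
                     for h in heads)
    return avg_d + energy_pen  # lower is better

def random_particle():
    return [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(K)]

particles = [random_particle() for _ in range(SWARM)]
velocities = [[(0.0, 0.0)] * K for _ in range(SWARM)]
pbest = list(particles)
gbest = min(particles, key=fitness)

for _ in range(ITERS):
    for i, p in enumerate(particles):
        new_p, new_v = [], []
        for d in range(K):
            vx, vy = velocities[i][d]
            r1, r2 = random.random(), random.random()
            # Standard PSO velocity update: inertia + cognitive + social terms.
            vx = 0.7 * vx + 1.5 * r1 * (pbest[i][d][0] - p[d][0]) + 1.5 * r2 * (gbest[d][0] - p[d][0])
            vy = 0.7 * vy + 1.5 * r1 * (pbest[i][d][1] - p[d][1]) + 1.5 * r2 * (gbest[d][1] - p[d][1])
            new_v.append((vx, vy))
            new_p.append((p[d][0] + vx, p[d][1] + vy))
        particles[i], velocities[i] = new_p, new_v
        if fitness(new_p) < fitness(pbest[i]):
            pbest[i] = new_p
        if fitness(new_p) < fitness(gbest):
            gbest = new_p

print("best tentative cluster-head positions:", gbest)
```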

12 citations


Journal ArticleDOI
TL;DR: This work presents an automated method to classify the carotid artery abnormalities by determining an appropriate Region of Interest (ROI), extracting the texture features, and calculating the Intima-Media Thickness (IMT) of the Common Carotid Artery.
Abstract: Cardio Vascular Disease (CVD) is the most fatal disease in the world in terms of deaths. Cardio vascular disease, associated with stroke and heart attack, is mainly caused by the increase in calcium deposition in the carotid artery. The Intima-Media Thickness (IMT) of the Common Carotid Artery (CCA) is widely used as an early indicator of CVD. The risk of CVD varies with age group, and this can be categorized based on the texture pattern of the carotid artery image. This work presents an automated method to classify carotid artery abnormalities by determining an appropriate Region of Interest (ROI), extracting the texture features, and calculating the IMT. The ultrasound specimen image is acquired, intensity normalized, pre-processed to remove speckle noise and then segmented. The texture analysis of the segmented images is done using AM-FM techniques. The instantaneous amplitude and frequency values of each image specimen are obtained and quantized. They are then compared with the standard texture pattern to identify whether the artery is normal or abnormal. Simulation results show significant texture differences between the higher-risk age group of >60 years and the lower-risk age groups of <50 and 50-60. The level of CVD is detected by measuring the IMT. The overall process aims at implementing a fully automated system which helps avoid human errors while measuring these values. The measurement technique is described in detail, highlighting the advantages compared to other methods and reporting the experimental results. Finally, the intrinsic accuracy of the system is estimated by an analytical approach. The system also decreases inter-reader bias, potentially making it applicable for use in cardiovascular risk assessment.
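
Once the two wall interfaces are available, the IMT calculation itself reduces to a boundary-distance average; the sketch below uses synthetic boundary rows and an assumed pixel spacing, with the upstream speckle filtering, segmentation and AM-FM texture analysis omitted.

```python
# Simplified sketch: given the detected lumen-intima and media-adventitia
# boundaries (one row index per column of the ROI), the IMT is the mean
# vertical distance converted to millimetres. Values here are synthetic.
import numpy as np

pixel_spacing_mm = 0.06            # assumed axial resolution of the scanner
columns = 200
lumen_intima = 120 + np.random.randint(-1, 2, size=columns)      # row indices
media_adventitia = 132 + np.random.randint(-1, 2, size=columns)  # row indices

imt_pixels = media_adventitia - lumen_intima
imt_mm = imt_pixels.mean() * pixel_spacing_mm

print(f"mean IMT: {imt_mm:.2f} mm")
# 0.9 mm is only an illustrative cut-off, not the study's classification rule.
print("thickened wall suspected" if imt_mm > 0.9 else "within assumed normal range")
```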

12 citations


Journal ArticleDOI
TL;DR: An optimized implementation of H264/AVC video encoder on a single core among the six cores of TMS320C6472 DSP for Common Intermediate Format (CIF) (352x288) resolution is presented in order to move afterwards to a multicore implementation for standard and high definitions (SD,HD).
Abstract: Real-time H264/AVC high definition video encoding represents a challenging workload for most existing programmable processors. New programmable processor technologies such as the Graphics Processing Unit (GPU) and the multicore Digital Signal Processor (DSP) offer a very promising solution to overcome these constraints. In this paper, an optimized implementation of the H264/AVC video encoder on a single core among the six cores of the TMS320C6472 DSP for Common Intermediate Format (CIF) (352x288) resolution is presented, in order to move afterwards to a multicore implementation for standard and high definitions (SD, HD). Algorithmic optimization is applied to the intra prediction module to reduce the computational time. Furthermore, based on the DSP architectural features, various structural and hardware optimizations are adopted to minimize external memory access. The parallelism between CPU processing and data transfers is fully exploited using an Enhanced Direct Memory Access (EDMA) controller. Experimental results show that the whole set of proposed optimizations, on a single core running at 700 MHz for CIF resolution, improves the encoding speed by up to 42.91%. They allow reaching real-time encoding at 25 f/s without inducing any Peak Signal to Noise Ratio (PSNR) degradation or bit-rate increase, and make it possible to achieve a real-time implementation for SD and HD resolutions when exploiting multicore features.

8 citations


Journal ArticleDOI
TL;DR: This paper proposes a modified Harmony Search Algorithm (HSA) for cluster head selection in WSNs, which is seen to provide better performance than direct transmission, the fundamental clustering protocol Low Energy Adaptive Clustering Hierarchy (LEACH), and the basic Harmony Search Algorithm (HSA).
Abstract: In a wireless sensor network (WSN) the sensors are spread over a particular area for monitoring certain events, as in environmental monitoring, medical monitoring, surveillance, security applications and many others. The main concern is the lifetime of the network, which depends on the battery or energy unit of the sensor nodes. Many algorithms have been developed to overcome this problem, and clustering is one of the most fundamental and efficient among them. The work reported herein investigates energy efficient algorithms for WSNs. This paper proposes a modified Harmony Search Algorithm (HSA) for cluster head selection in WSNs, which is seen to provide better performance than direct transmission, the fundamental clustering protocol Low Energy Adaptive Clustering Hierarchy (LEACH), and the basic Harmony Search Algorithm (HSA). The performance metrics network lifetime, throughput and total energy consumption have been analysed and compared for the above-mentioned algorithms.
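
For reference, a compact sketch of the basic Harmony Search loop (harmony memory, HMCR, PAR and replacement of the worst harmony); the objective, dimension and parameter values are placeholders, and the paper's modification of HSA is not reproduced here.

```python
# Generic Harmony Search sketch. In the cluster-head setting the vector
# would encode the chosen heads; here it is simply a real vector to minimize.
import random

HMS, HMCR, PAR, BW, ITERS, DIM = 10, 0.9, 0.3, 0.05, 500, 5

def fitness(x):                       # placeholder objective to minimize
    return sum(v * v for v in x)

memory = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(HMS)]

for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:                     # memory consideration
            value = random.choice(memory)[d]
            if random.random() < PAR:                  # pitch adjustment
                value += random.uniform(-BW, BW)
        else:                                          # random selection
            value = random.uniform(-1, 1)
        new.append(value)
    worst = max(range(HMS), key=lambda i: fitness(memory[i]))
    if fitness(new) < fitness(memory[worst]):          # replace worst harmony
        memory[worst] = new

best = min(memory, key=fitness)
print("best harmony:", [round(v, 3) for v in best], "fitness:", round(fitness(best), 5))
```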

8 citations


Journal ArticleDOI
TL;DR: Basic principles of ARSA are presented, together with its use in examples for an embedded system, an e-learning management system and an information system.
Abstract: The success of software systems depends on their ability to respond to changing conditions: repairing discovered errors and extending the system with new services, i.e. solving current problems and new requirements. A successful response to them depends not only on the skills and knowledge of the team that is responsible for these changes in the system, but also on the software itself. A good feature of systems is the ability to adapt to new conditions, or at least to provide enough of the necessary knowledge to help, automatically or interactively, to implement successfully all needed changes in the system. This ability can be realised by integrating critical knowledge into an executable auto-reflexive software architecture (ARSA). ARSA can include a layer of knowledge containing suitable UML models from the analysis and design processes. The paper presents the basic principles of ARSA and its use in examples for an embedded system, an e-learning management system and an information system.

7 citations


Journal ArticleDOI
TL;DR: Signature verification requires storing templates in the database, which threatens the security of the system: templates may be stolen or replayed in a template playback attack that may give an attacker invalid access to the system.
Abstract: Handwritten signature biometrics are considered a noninvasive and nonintrusive process by the majority of users. Furthermore, the signature has a high legal value for document authentication and is relied on by both commercial transactions and governmental institutions. Signature verification requires storing templates in the database, which threatens the security of the system: templates may be stolen or replayed in a template playback attack that may give an attacker invalid access to the system. Moreover, an individual cannot use his/her signature with two or more applications, otherwise a cross-matching problem will occur. The aforementioned problems can be avoided by using biometric template protection techniques for the online signature, which are reviewed and discussed in this paper considering both protection and verification. Furthermore, the discussion of verification covers capture devices, pre-processing, feature extraction and classification methods.

Journal ArticleDOI
TL;DR: A reference ontology called HERO ontology, which stands for “Higher Education Reference Ontology” is described, which is projected to be a reusable and generalisable resource of academic knowledge which can be filtered to meet the needs of any knowledge-based application that requires structural information.
Abstract: Most ontologies are application ontologies designed for specific applications. A reference ontology is able to contribute significantly to resolving, or at least reducing, the issue of ontology application specificity and hence increasing ontology reusability. Particularly considering the higher education domain, we think that a reference ontology dedicated to this knowledge area can be regarded as a valuable tool for researchers and institutional employees interested in analyzing the system of higher education as a whole. This paper describes a reference ontology called HERO ontology, which stands for "Higher Education Reference Ontology". We explain the HERO ontology building process from requirements specification to ontology evaluation using the NeOn methodology. HERO ontology is projected to be a reusable and generalisable resource of academic knowledge which can be filtered to meet the needs of any knowledge-based application that requires structural information. It is distinct from application ontologies in that it is not intended as an end-user application and does not target the needs of any particular user group.

Journal ArticleDOI
TL;DR: A high-level modeling method based on MDA is applied to generate an MVC2 web model for an e-commerce web application, which can effectively simplify the development process with less time and cost.
Abstract: Model transformations are increasingly gaining attention in software design and development. Model transformation plays a key role in the Object Management Group (OMG) Model Driven Architecture (MDA) initiative. In this paper, we apply a high-level modeling method based on MDA to generate an MVC2 web model for an e-commerce web application. This idea is based on the fact that the source metamodel is composed of two metamodels: the class diagram and the activity diagram. In order to model the internal logic of a complex operation and accurately determine the input JSP page of an Action class and all ingredients of this class, we begin by identifying the operations in order to establish the activity diagram for each operation. After modeling, we implement the transformation rules. These rules are expressed in the ATL transformation language. In our algorithm, we consider only the operations belonging to the two diagrams already cited; practically, we transform only the operations that have an activity diagram and belong to the class diagram. The generated MVC2 web model can effectively simplify the development process with less time and cost.

Journal ArticleDOI
TL;DR: A swarm-based defense technique for the denial of sleep attack is proposed which proves to be efficient in detecting the faulty channel and consumes less energy, since the information about all the attackers can be obtained using ants.
Abstract: In Wireless Sensor Networks (WSNs), the denial of sleep attack consumes a large amount of energy, which leads to depletion of battery power. This consumption of power makes the nodes more susceptible to vulnerabilities and hence to denial of service through denial of sleep. If a large percentage of network nodes, or a few critical nodes, are attacked in this way, the network lifetime can be reduced severely. In order to overcome the denial of sleep attack, in this paper we propose a swarm-based defense technique. Initially an anomaly detection model is developed which determines the affected traffic between the nodes, and based on this the frequency hopping technique is initiated. Ant agents of swarm intelligence are applied on each channel to collect the communication frequency and the frequency hopping time. Based upon the frequency hopping time the faulty channel is identified, and when the administrator node gets this information it deletes the faulty channel. From our simulation results, we show that this technique is efficient in detecting the faulty channel and consumes less energy, since the information about all the attackers can be obtained using ants.
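
A simplified sketch of the detection-then-hop chain described above, with the ant-agent frequency collection abstracted into per-channel traffic statistics; the channel numbers, rates and threshold are assumptions.

```python
# Hypothetical sketch: flag channels whose traffic rate deviates strongly from
# the learned baseline (possible denial-of-sleep flooding), then hop to the
# quietest non-blacklisted channel.
CHANNELS = [11, 12, 13, 14]

baseline_rate = {ch: 20.0 for ch in CHANNELS}                # packets/s when idle
observed_rate = {11: 21.0, 12: 19.5, 13: 180.0, 14: 22.0}    # channel 13 flooded

def detect_anomalies(threshold=3.0):
    """Channels whose observed rate exceeds threshold x baseline are suspect."""
    return [ch for ch in CHANNELS
            if observed_rate[ch] > threshold * baseline_rate[ch]]

def hop(current, blacklist):
    """Move to the least busy channel that is not blacklisted."""
    candidates = [ch for ch in CHANNELS if ch not in blacklist]
    return min(candidates, key=lambda ch: observed_rate[ch])

blacklist = detect_anomalies()
print("suspected denial-of-sleep channels:", blacklist)
print("hopping from 13 to", hop(13, blacklist))
```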

Journal ArticleDOI
TL;DR: A method for prognosis of primary open-angle glaucoma (POAG) using the mathematical apparatus of Markov processes is developed; the prognosis was made unmistakably in 16 surveyed patients.
Abstract: A method for the prognosis of primary open-angle glaucoma (POAG) using the mathematical apparatus of Markov processes is developed in this article. The mathematical apparatus of Markov processes with discrete states and discrete time was used to describe the course of glaucoma. According to the clinical approbation of the proposed method, the prognosis was made unmistakably in 16 surveyed patients. The prognosis was confirmed in 82% of cases. The proposed method significantly improves the prognosis of POAG development. The introduction of this method for POAG prognosis into ophthalmology practice allows improving the quality of medical service for patients.
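
The prognosis step amounts to propagating a state distribution through a transition matrix; the sketch below uses illustrative states and probabilities, not the clinically fitted values from the study.

```python
# Discrete-time, discrete-state Markov model of disease course (illustrative values).
import numpy as np

states = ["stable", "moderate progression", "severe progression"]
# P[i][j] = probability of moving from state i to state j in one follow-up interval.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.00, 0.05, 0.95]])

def prognosis(initial_state, steps):
    """Distribution over states after a number of follow-up intervals."""
    dist = np.zeros(len(states))
    dist[states.index(initial_state)] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

for step in (1, 3, 5):
    probs = prognosis("moderate progression", step)
    print(step, {s: round(p, 3) for s, p in zip(states, probs)})
```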

Journal ArticleDOI
TL;DR: A high-level technique based on JET2 to generate the code of an e-commerce web application, namely a PC online shop, based on the combination of the UML class diagram and the UML activity diagram.
Abstract: Code generation isn't a new concept. It's been around for a while and has been gaining popularity with the model-driven development (MDD) movement as a way to increase productivity. In this paper, we apply a high-level technique based on JET2 to generate the code of an e-commerce web application, namely a PC online shop. This technique is based on the combination of the UML class diagram and the UML activity diagram. In the transformation algorithm, we consider only the operations belonging to the two diagrams already combined; practically, we transform only the operations that have an activity diagram and belong to the class diagram. In this technique, we begin by writing the transformation rules in the ATL transformation language in order to generate the MVC2 web model. In the second step, we use the generated model as an input file for JET2 to generate the code of the e-commerce web application. A case study is also presented to illustrate this proposal.

Journal ArticleDOI
TL;DR: This paper asserts that the difficulty in creating meaningful component models originates from the use of UML diagrams; furthermore, it proposes an alternative flow-based diagramming methodology for software architecture development, shown to be viable through a case study of an actual system.
Abstract: Information system architecture handles requirements of information and systems to support a business by describing the structural and behavioral capabilities of the system. Software architecture is defined in terms of computational components and the interactions among those components. In this context, component-based development is considered the most promising way of developing information systems. Typically, UML 2 component diagrams are used as an architecture-level artifact for modeling business and technical software architectures. Nevertheless, UML notation in and of itself is insufficient for creating meaningful component models. This paper asserts that this difficulty in creating meaningful component models originates from the use of UML diagrams; furthermore, the paper proposes an alternative flow-based diagramming methodology for software architecture development. The new concept presented in the paper is the development of a hierarchy of components on the basis of this flow-based representation. The viability of this idea is demonstrated through a case study of an actual system.

Journal ArticleDOI
TL;DR: The proposed Enhanced Optimized Polymorphic Hybrid Multicast Routing Protocol (EOPHMR) saves more energy than the proactive, reactive and hybrid modes; a performance comparison of results on energy consumption and saved energy for the proactive, reactive, hybrid and EOPHMR protocols is presented.
Abstract: An existing protocol is modified to improve the energy metric. Clustering is performed in a mobile ad hoc network (MANET) with two approaches: static clustering and dynamic clustering. The goal is to achieve an improved energy lifetime, which results from dynamic clustering for the specified metric. The protocol in proactive, reactive and hybrid modes with static clustering has higher energy consumption. The proposed Enhanced Optimized Polymorphic Hybrid Multicast Routing Protocol (EOPHMR) saves more energy than the proactive, reactive and hybrid modes. A performance comparison of results on energy consumption and saved energy for the proactive, reactive, hybrid and EOPHMR protocols is presented.

Journal ArticleDOI
TL;DR: This paper presents a fully automated method for the segmentation of cerebrospinal fluid and internal brain nuclei from T1-weighted MRI head scans; the comparative analysis shows that the proposed methodology yields better segmentation results than some other existing techniques such as FAST, SPM5, a k-nearest neighbor (k-NN) classifier, and a conventional k-NN.
Abstract: Brain tissue segmentation on structural Magnetic Resonance Imaging (MRI) has received considerable attention. Quantitative analysis of MR images of the brain is of interest in order to study the aging brain in epidemiological studies, to better understand how diseases affect the brain, and to support diagnosis in clinical practice. Manual quantitative analysis of brain imaging data is a tedious and time-consuming procedure, prone to observer variability. Therefore, there is large interest in automatic analysis of MR brain imaging data, especially segmentation of Cerebrospinal Fluid (CSF), Gray Matter (GM) and White Matter (WM). This paper presents a fully automated method for the segmentation of cerebrospinal fluid and internal brain nuclei from T1-weighted MRI head scans. The proposed methodology performs intensity-based thresholding to obtain the boundaries between gray matter, white matter, cerebrospinal fluid and other tissue. Combined with preprocessing techniques and incorporating mathematical morphology, we first perform the extraction of the brain cortex. Subsequently, the cerebrospinal fluid is segmented by using an orthogonal polynomial transform. Finally, the gray matter and the white matter regions in the MRI are segmented based on the intensity values. Experimental results show that the proposed method achieves reasonably good segmentation. The comparative analysis shows that the proposed methodology yields better segmentation results than some other existing techniques such as FAST, SPM5, a k-nearest neighbor (k-NN) classifier, and a conventional k-NN.
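
A compact sketch of the intensity-thresholding and morphological-cleanup steps on a single slice; the thresholds and structuring element are placeholders, and the skull stripping and orthogonal polynomial transform stages of the paper's pipeline are omitted.

```python
# Illustrative sketch: split a normalized T1 slice into CSF/GM/WM candidate
# masks by intensity, then remove small speckle-like regions with opening.
import numpy as np
from scipy import ndimage

slice_img = np.random.rand(256, 256)        # stand-in for a normalized T1 slice

csf_mask = slice_img < 0.35                            # darkest voxels -> CSF candidate
gm_mask = (slice_img >= 0.35) & (slice_img < 0.65)     # mid intensities  -> GM candidate
wm_mask = slice_img >= 0.65                            # brightest voxels -> WM candidate

# Morphological opening removes small false-positive specks.
structure = np.ones((3, 3), dtype=bool)
csf_mask = ndimage.binary_opening(csf_mask, structure=structure)
gm_mask = ndimage.binary_opening(gm_mask, structure=structure)
wm_mask = ndimage.binary_opening(wm_mask, structure=structure)

for name, mask in [("CSF", csf_mask), ("GM", gm_mask), ("WM", wm_mask)]:
    print(name, "voxels:", int(mask.sum()))
```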

Journal ArticleDOI
TL;DR: The problem of barrier synchronization between parallel processes in VLSI-based mesh-connected multicomputers is under consideration and hardware synchronization methods (HSM) in modern microprocessing systems are reviewed.
Abstract: The problem of barrier synchronization between parallel processes in VLSI-based mesh-connected multicomputers is under consideration. This paper introduces and reviews hardware synchronization methods (HSM) in modern microprocessor systems.
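
The paper reviews hardware mechanisms; as a point of reference, the software analogue below illustrates the guarantee a barrier provides, namely that no process proceeds past the synchronization point until all have reached it.

```python
# Software barrier analogue of the hardware mechanisms reviewed in the paper.
import threading, time, random

N_WORKERS = 4
barrier = threading.Barrier(N_WORKERS)

def worker(rank):
    time.sleep(random.random())                    # unequal compute phases
    print(f"process {rank} reached the barrier")
    barrier.wait()                                 # block until all N_WORKERS arrive
    print(f"process {rank} passed the barrier")

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```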

Journal ArticleDOI
TL;DR: A novel clustering algorithm called HDED (hybrid distributed, energy-efficient, and dual-homed clustering algorithm), derived from DED (distributed, energy-efficient, and dual-homed clustering), which introduces changes to this protocol to increase its performance.
Abstract: Wireless sensor networks are composed of a large number of small battery-powered sensors distributed in an environment. They are responsible for monitoring and transmitting its physical characteristics. These networks require robust wireless communication protocols that are energy efficient. Thus, it is a challenge for the self-organization protocols to provide network survivability and redundancy features. In this paper, we present a novel clustering algorithm called HDED (hybrid distributed, energy-efficient, and dual-homed clustering algorithm), derived from DED (distributed, energy-efficient, and dual-homed clustering), which introduces changes to this protocol to increase its performance. Better coverage, energy efficiency, minimum traffic from nodes to the base station and balanced energy consumption are the main features of HDED for improving the lifetime of the WSN. Simulation results confirm that HDED is more effective in prolonging the network lifetime, as well as in improving throughput, than DED and EDED.

Journal ArticleDOI
TL;DR: HGATS-BPC successfully optimized and enhanced the performance of the BPC in terms of accuracy and outperformed the traditional BPC and previous methodologies by obtaining more accurate results, but with a high cost in computational time.
Abstract: In this research our goal is to develop a hybrid approach for optimizing and enhancing back-propagation classifier (BPC) performance using a Memetic Algorithm (a Genetic Algorithm with Tabu local search); thus, the Memetic Algorithm is used to tune the parameters of the BPC. The proposed hybrid approach (HGATS-BPC) was tested on fish image recognition, where the pattern of interest (the fish object) in the image is recognized based on features extracted from the color signature. The histogram technique and the Gray Level Co-occurrence Matrix (GLCM) were used to extract 20 features from fish images based on the color signature. The general BPC has several issues to be addressed by the Memetic Algorithm, such as slow speed and a tendency to fall into local minima. In our study we used 800 fish images from 20 different fish families; each family has a different number of fish types. These images are divided into two datasets: 560 training images and 240 testing images. HGATS-BPC successfully optimized and enhanced the performance of the BPC in terms of accuracy and outperformed the traditional BPC and previous methodologies by obtaining more accurate results, but with a high cost in computational time compared to the BPC. The overall accuracy obtained by the BPC was 85%, while the HGATS-BPC obtained 94% on the 800 fish images. Finally, the HGATS-BPC classifier is able to classify the given fish images into poisonous or non-poisonous fish, and to classify the poisonous and non-poisonous fish into their families.

Journal ArticleDOI
TL;DR: A storage scheme that can process tracking queries and path-oriented queries efficiently on an RDBMS, together with a method that translates the queries to SQL queries; the proposed method is implemented and the results are compared against the conventional RFID path encoding scheme.
Abstract: RFID technology is an important process in supply chain management. In order to know the movements of products in the supply chain, an RFID tag is attached to each product. If the product with an RFID tag moves or stays near the detection region, RFID readers will detect the RFID tag's information. As the data volume increases, the processing of such RFID data becomes complex, and hence storing and retrieving the data becomes very challenging. In current RFID-related research, reducing the complexity of processing such RFID data remains a big challenge. Hence, to avoid such negative aspects, a new RFID data encoding scheme is proposed for supply chain management. To reduce the complexity of processing the RFID data, our proposed method utilizes an orthogonal transformation and an optimization model during raw data processing. Initially, the RFID data are processed by the orthogonal transformation, and the data are then optimally clustered by the well-known Genetic Algorithm (GA). After that, the clustered data are encoded by exploiting the path encoding and order encoding schemes. Based on the path encoding scheme and the order encoding scheme, we devise a storage scheme that can process tracking queries and path-oriented queries efficiently on an RDBMS. Finally, we propose a method that translates the queries to SQL queries. The proposed method is implemented, the queries are efficiently retrieved, and the results are compared against the conventional RFID path encoding scheme.
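
A hypothetical sketch of how path-encoded tag data can answer tracking and path-oriented queries through translated SQL, assuming a simple table tag_paths(tag_id, path, order_code) with paths stored as delimited location strings; the paper's orthogonal transformation and GA clustering steps are not shown.

```python
# Illustrative sketch: store encoded paths in an RDBMS and translate the two
# query types into SQL (exact lookup for tracking, prefix match for path queries).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_paths (tag_id TEXT, path TEXT, order_code INTEGER)")
conn.executemany("INSERT INTO tag_paths VALUES (?, ?, ?)", [
    ("T1", "factory.warehouse.store", 3),
    ("T2", "factory.warehouse", 2),
    ("T3", "factory.port.store", 3),
])

def tracking_query(tag_id):
    """Return the full movement path of one tag."""
    return conn.execute("SELECT path FROM tag_paths WHERE tag_id = ?",
                        (tag_id,)).fetchone()[0]

def path_oriented_query(prefix):
    """Return all tags whose path starts with the given location sequence."""
    return [row[0] for row in conn.execute(
        "SELECT tag_id FROM tag_paths WHERE path LIKE ?", (prefix + "%",))]

print(tracking_query("T1"))                       # factory.warehouse.store
print(path_oriented_query("factory.warehouse"))   # ['T1', 'T2']
```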

Journal ArticleDOI
TL;DR: An MDA-based model-driven approach to generate the GUI for mobile applications using UML, which has the advantage of providing a graphical way of designing under UML.
Abstract: Developing applications for mobile platforms is a compound task, due to the variability of mobile OSs and the number of different devices that need to be supported. The Model-Driven Architecture (MDA) approach could provide a possible solution by offering an automated way to generate a Graphical User Interface (GUI) for such applications. In this paper, we propose an MDA-based model-driven approach to generate the GUI for mobile applications. The adopted approach consists of four main steps: (i) modeling the GUI under UML; (ii) transforming the obtained diagrams to a simplified XMI schema; (iii) model-to-model transformation; and (iv) model-to-code generation. Our method has the advantage of providing a graphical way of designing under UML. Currently, the method has been implemented to support two platforms, Android and BlackBerry. The applicability of the approach is demonstrated via a case study that illustrates GUI code generation for mobile platforms.

Journal ArticleDOI
TL;DR: A Double Cluster Head Clustering Algorithm is presented whose basis is Particle Swarm Optimization (PSO), which is capable of extending the longevity of large sensor networks.
Abstract: Energy efficiency and optimization of network lifetime are the most important design criteria in Wireless Sensor Networks (WSNs). In this paper, a Double Cluster Head Clustering Algorithm based on Particle Swarm Optimization (PSO) is presented, which is capable of extending the longevity of large sensor networks. The Dual Cluster Head clustering technique using PSO (D-PSO) employs two cluster heads, denoted the Primary Cluster Head (PCH) and the Secondary Cluster Head (SCH). The PCH is responsible for data collection and data aggregation from the cluster member nodes, and the SCH is responsible for sending the aggregated data to the sink. This method aims at dividing the workload between the two cluster heads, which prevents a single cluster head from draining its energy and thereby extends the cluster head re-election cycle. This method balances the overall energy consumption and improves the network lifetime considerably. The key point in a swarm optimization technique is to keep an efficient balance between the exploration and exploitation abilities of the swarm. To realize this, a simple modified version of D-PSO is presented to achieve further improvement in the protocol performance. This protocol is then compared with various extended versions of the LEACH protocol, and the effect of all these protocols is studied collectively to optimize the network lifetime.
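
A back-of-the-envelope illustration of why splitting the cluster-head role helps, using the widely cited first-order radio energy model; the constants, packet size and distances are assumptions, not the paper's simulation settings.

```python
# Per-round energy of a single cluster head vs. the PCH/SCH split, under the
# first-order radio model (illustrative constants only).
E_ELEC = 50e-9        # J/bit, electronics energy (assumed)
E_AMP = 10e-12        # J/bit/m^2, free-space amplifier energy (assumed)
E_DA = 5e-9           # J/bit, data aggregation energy (assumed)
BITS, MEMBERS = 4000, 10
D_MEMBER_TO_SCH, D_SINK = 10.0, 90.0   # assumed distances in metres

def rx(bits):
    return E_ELEC * bits

def tx(bits, d):
    return E_ELEC * bits + E_AMP * bits * d * d

# Single cluster head: receives from all members, aggregates, sends to sink.
single_ch = MEMBERS * rx(BITS) + MEMBERS * E_DA * BITS + tx(BITS, D_SINK)

# Dual heads: PCH receives and aggregates, then forwards one packet to a
# nearby SCH, which alone pays the long-haul transmission to the sink.
pch = MEMBERS * rx(BITS) + MEMBERS * E_DA * BITS + tx(BITS, D_MEMBER_TO_SCH)
sch = rx(BITS) + tx(BITS, D_SINK)

print(f"single CH per round: {single_ch * 1e3:.3f} mJ")
print(f"PCH: {pch * 1e3:.3f} mJ, SCH: {sch * 1e3:.3f} mJ "
      f"(max per node: {max(pch, sch) * 1e3:.3f} mJ)")
```

Splitting the role lowers the worst-case per-node drain per round, which is what lengthens the re-election cycle described in the abstract.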

Journal ArticleDOI
TL;DR: All health sciences manuscripts should be tested through plagiarism detection system before accepting them for publications.
Abstract: There are many available algorithms for plagiarism detection in natural languages. Generally, these algorithms fall into two main categories: plagiarism detection algorithms based on fingerprints, and plagiarism detection algorithms based on content comparison, which comprise string matching and tree matching algorithms. Available plagiarism detection systems usually use specific types of detection algorithms, or a mixture of detection algorithms, to achieve effective (fast and accurate) detection systems. Based on rhetorical structure theory, a system for plagiarism detection in Arabic and English health sciences publications has been developed using the Bing search engine. In conclusion, all health sciences manuscripts should be tested through a plagiarism detection system before being accepted for publication.
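
A minimal fingerprint-style sketch of the first category mentioned above: each document is reduced to hashed word n-grams and the overlap ratio serves as a crude similarity score; real systems, including the one described here, add search-engine lookup and more robust matching.

```python
# Illustrative n-gram fingerprinting for plagiarism screening.
import hashlib

def fingerprints(text, n=3):
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return {hashlib.md5(g.encode()).hexdigest()[:8] for g in grams}

def similarity(suspect, source):
    fs, fo = fingerprints(suspect), fingerprints(source)
    return len(fs & fo) / max(1, len(fs))   # fraction of suspect n-grams found in source

original = "the patient cohort was screened for diabetes before the trial began"
suspect = "the patient cohort was screened for diabetes before enrolment started"
print(f"overlap ratio: {similarity(suspect, original):.2f}")
```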

Journal ArticleDOI
TL;DR: A novel routing protocol is proposed that integrates the Adaptive Fidelity Energy Conserving Algorithm with the Zone Routing Protocol; the per-node energy consumption and packet delivery ratio are compared using the Ns-2 simulation tool.
Abstract: Mobile Ad-hoc Networks are characterized by random movement of nodes and the absence of both a clock synchronization mechanism and infrastructure between them. Due to these unique characteristics, routing protocol design for MANETs is a major challenge. Compact batteries are the power supply for these mobile stations; hence the objective is to produce an energy efficient routing protocol without compromising on the quality of service. In this paper a novel routing protocol is proposed that integrates the Adaptive Fidelity Energy Conserving Algorithm with the Zone Routing Protocol. The per-node energy consumption and packet delivery ratio of both protocols are compared using the Ns-2 simulation tool.

Journal ArticleDOI
TL;DR: This work is one of the first attempts to critically analyze papers in which a dynamic data allocation algorithm is used in a distributed environment, and it suggests future research directions.
Abstract: The developments in database and networking technologies and the demand for increasing database sizes make distributed database systems (DDS) more important in modern society. Data allocation is a prominent issue in distributed database systems, as the performance of the system heavily depends on the data it accesses from different sites. Various algorithms have been proposed for data allocation in distributed database systems. The present paper explores the existing literature in which a dynamic data allocation algorithm is used in a distributed environment. A systematic study was applied to find as much literature as possible, and a total of 31 papers were found suitable after the defined search criteria. In order to derive useful findings from these papers, the data allocation approaches presented in them are evaluated based on various key parameters, viz. performance efficiency, implementation technique, validation, usability, comparative analysis, and extendibility. This work is one of the first attempts to critically analyze such papers and suggest future research directions. The intention of the proposed work is to give a score to each data allocation algorithm proposed by researchers based on the selected key parameters, but definitely not to criticize any research contribution by the authors.

Journal ArticleDOI
TL;DR: This paper discusses and criticizes the strategies and policies of load balancing in cloud computing environment, then it compares them using different parameters such as throughput, availability, resources utilization, scalability etc, and illustrates the multiple tools that have been used to implement and simulate these techniques.
Abstract: Nowadays cloud computing is considered the latest network infrastructure that supports large-scale decentralized computing. Its components rely on different design aspects of multiple mature network structures (cluster commodity, utility computing, virtualization, datacenters, and grid structure). The ultimate goal of cloud technology is to efficiently afford services to users using the Pay-for-Use concept, which implies cutting down the infrastructure costs when setting up a new business. It also makes the data that users need available and accessible in a way that does not require any kind of knowledge of the underlying infrastructure and meets the Service Level Agreement (SLA) requirements. One of the key problems in cloud computing is load balancing, since it should ensure fair and dynamic distribution of loads among all computation nodes and harness the minimum power consumption. Load balancing facilitates resource utilization, sustains the minimum response time and guarantees data availability. This paper first discusses and criticizes the strategies and policies of load balancing in a cloud computing environment, then compares them using different parameters such as throughput, availability, resource utilization, scalability, etc. It also illustrates the multiple tools that have been used to implement and simulate these techniques under various experiments and environments.

Journal ArticleDOI
TL;DR: The transmission mechanism of a sensor node is analyzed with modulation schemes and error control codes; the results show that 16-FSK with Golay codes in the AWGN channel and 16-QAM with Golay codes in the Rayleigh channel are more energy efficient than other combinations of modulation and coding techniques for energy efficient sensor data transmission.
Abstract: A wireless sensor network (WSN) consists of several sensor nodes that monitor physical or environmental conditions. The development of WSNs is motivated by several applications such as military surveillance, and industrial and consumer applications. In this paper the transmission mechanism of a sensor node is analyzed with different modulation schemes and error control codes, which are furthermore compared based on the channel conditions in a sensor network. The modulation schemes considered for this work are 16-PSK, 16-PAM, 16-QAM and 16-FSK, along with convolutional, Golay and RS codes, under both AWGN (Additive White Gaussian Noise) and Rayleigh fading channels. To maximize the lifetime of the WSN, an appropriate combination of modulation scheme and error control code is chosen for sensor data transmission. The results show that 16-FSK with Golay codes in the AWGN channel and 16-QAM with Golay codes in the Rayleigh channel are more energy efficient than the other combinations of modulation and coding techniques for energy efficient sensor data transmission.