
Showing papers in "Journal of Computer Science in 2012"


Journal ArticleDOI
TL;DR: A novel type of approach is outlined to be developed in the future, taking into account the generic components of a news story in order to generate a better summary.
Abstract: Problem statement: Text summarization ranges in nature from indicative summaries, which identify the topics of a document, to informative summaries, which are meant to give a concise description of the original document and an idea of what its whole content is about. Approach: Single-document summarization seems to capture both kinds of information well, but this has not been the case for multi-document summarization, where the overall quality of the informative summary is often lacking. Most existing methods tend to focus on sentence scoring, with less consideration given to the contextual information content across multiple documents. Results: In this study, a survey of multi-document summarization approaches is presented. We focus on four well-known approaches to multi-document summarization: the feature-based, cluster-based, graph-based and knowledge-based methods. The general ideas behind these methods are described. Conclusion: Besides the general ideas and concepts, we discuss the benefits and limitations of these methods. With the aim of enhancing multi-document summarization, specifically of news documents, a novel type of approach to be developed in the future is outlined, taking into account the generic components of a news story in order to generate a better summary.

61 citations


Journal ArticleDOI
TL;DR: The results show that the proposed machine learning approach, based on a neural network technique, is capable of recognizing named entities in Arabic texts.
Abstract: Problem statement: Named Entity Recognition (NER) is the task of identifying proper names as well as temporal and numeric expressions in open-domain text. NER can help improve the performance of various Natural Language Processing (NLP) applications such as Information Extraction (IE), Information Retrieval (IR) and Question Answering (QA). This study discusses Named Entity Recognition for Arabic (NERA). The motivation is the lack of resources for Arabic named entities and the goal of improving on the accuracy reached by previous NERA systems. Approach: The system is designed based on a neural network approach. A neural network automatically learns to recognize component patterns and make intelligent decisions based on available data, and it can also be applied to classify new information within large databases. The use of a machine learning approach to classify named entities in Arabic text based on a neural network technique is proposed; neural networks have performed successfully in many areas of artificial intelligence. The system involves three stages: the first stage is pre-processing, which cleans the collected data; the second converts Arabic letters to Roman alphabets; and the final stage applies the neural network to classify the collected data. Results: The accuracy of the system is 92%. The system is compared with a decision tree using the same data; the results showed that the neural network approach performed better than the decision tree. Conclusion: These results show that our technique is capable of recognizing named entities in Arabic texts.
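The three-stage pipeline described above (cleaning, romanization, neural classification) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy transliteration table, the character n-gram features and the use of scikit-learn's MLPClassifier are all assumptions for illustration only.

```python
# Hypothetical sketch of a three-stage NER pipeline: clean tokens,
# romanize Arabic letters, then classify with a small neural network.
# The transliteration table and features are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

ROMAN = {"م": "m", "ح": "h", "د": "d", "ع": "e", "ل": "l", "ي": "y"}

def clean(token):
    """Stage 1: keep only alphanumeric and Arabic-range characters."""
    return "".join(ch for ch in token if ch.isalnum() or "\u0600" <= ch <= "\u06FF")

def romanize(token):
    """Stage 2: map each Arabic letter to a Roman letter (toy table)."""
    return "".join(ROMAN.get(ch, ch) for ch in token)

# Toy training data: tokens labelled PERSON / LOCATION / OTHER.
tokens = ["محمد", "لندن", "كتب", "علي"]
labels = ["PER", "LOC", "O", "PER"]
romanized = [romanize(clean(t)) for t in tokens]

# Stage 3: character n-gram features fed to a small neural network.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(romanized, labels)
print(model.predict([romanize(clean("محمد"))]))
```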

51 citations


Journal ArticleDOI
TL;DR: The results show the potential of the SVM technique in classifying poems into various categories, whereas previous approaches focused only on classifying prose.
Abstract: Problem statement: Traditional Malay poetry, called pantun, is a form of art for expressing ideas, emotions and feelings in rhyming lines. Malay poetry usually has a broad and deep meaning, making it difficult to interpret. Moreover, few efforts have addressed automatic classification of literary text such as poetry. Approach: This research concerns the classification of Malay pantun using Support Vector Machines (SVM). SVMs with Radial Basis Function (RBF) and linear kernels are implemented to classify pantun by theme, as well as to distinguish poetry from non-poetry. A total of 1500 pantun divided into 10 themes, together with 214 Malaysian folklore documents, are used as the training and testing datasets. We used tf-idf features for both classification experiments and an additional shape feature for the poetry versus non-poetry experiment alone. Results: The results of each experiment showed that the linear kernel achieved a better average accuracy than the RBF kernel. Conclusion: The results show the potential of the SVM technique in classifying poems into various categories, whereas previous approaches focused only on classifying prose.
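The tf-idf plus SVM setup with the two kernels compared in the abstract can be sketched as below. The toy corpus, labels and scikit-learn pipeline are assumptions standing in for the pantun dataset, not the authors' code.

```python
# Illustrative sketch (not the authors' code): tf-idf features with
# linear vs RBF SVM kernels for classifying short Malay texts by theme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy corpus standing in for pantun verses grouped by theme.
texts = ["air di dalam telaga", "budi baik dikenang jua",
         "pisang emas dibawa belayar", "hutang emas boleh dibayar"]
themes = ["nature", "values", "nature", "values"]

for kernel in ("linear", "rbf"):
    clf = make_pipeline(TfidfVectorizer(), SVC(kernel=kernel))
    scores = cross_val_score(clf, texts, themes, cv=2)
    print(kernel, scores.mean())
```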

50 citations


Journal ArticleDOI
TL;DR: The proposed iris localization, based on morphological (set-theoretic) operations that are well suited to shape detection, performs better than the previous method, as shown by results across different parameters.
Abstract: This study presents iris localization based on morphological (set-theoretic) operations, which are well suited to shape detection. Principal Component Analysis (PCA) is used for preprocessing, in which redundant and unwanted data are removed. Median filtering and adaptive thresholding are used to handle variations in lighting and noise. Features are extracted using the Wavelet Packet Transform (WPT). Finally, matching is performed using KNN. The proposed method performs better than the previous method, as shown by results across different parameters. The proposed algorithm was tested using the CASIA iris database (V1.0 and V3.0).

45 citations


Journal ArticleDOI
TL;DR: Simulation results showed that the WDM FSO system may be a good candidate to solve the last mile problem and can accommodate more than 16 channels; by introducing error correction coding or balanced detection, the transmission distance might be increased further.
Abstract: Problem statement: Wavelength-Division Multiplexing (WDM) is a promising technique for meeting the growing demand for increased bandwidth and various types of services in the optical access network. For wide area or metropolitan networks, fibers are deployed to provide huge bandwidth. In access networks, fiber-to-the-home will partially solve the last mile problem. However, in some environmentally sensitive areas such as housing estates, tower buildings and national parks, fiber deployment is not allowed, so Radio Frequency (RF) links are normally used instead. The incompatibility of RF and optical channels is now widely believed to be the limiting factor in efforts to further increase transport capabilities. Free Space Optical (FSO) communication is a technology that can address connectivity needs in optical networks, whether core, edge or access. Approach: In this project, the simulation software Optical System version 7 is used to simulate the design of WDM over an FSO transmission link. The losses considered in this design are geometric loss, transmitter and receiver loss and atmospheric attenuation, focusing on nonselective scattering during heavy rainfall in the Malaysian environment. Malaysian weather data are used to reflect conditions in tropical regions. Results: We present results for 16 WDM channels at 100 GHz channel spacing. The simulated results show that the system can support bit rates up to 2.5 Gbps over a 2.4 km distance. Conclusion: Simulation results showed that the WDM FSO system may be a good candidate to solve the last mile problem and can accommodate more than 16 channels. By introducing error correction coding or balanced detection, the transmission distance might be increased further.
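The kinds of losses listed above can be illustrated with a back-of-the-envelope link budget. The formulas are standard textbook FSO expressions and every numeric value is a placeholder, not a parameter from the paper's OptiSystem simulation or the Malaysian weather data.

```python
# Back-of-the-envelope FSO link budget sketch. The formulas are generic
# textbook expressions and the numbers are placeholders, not the paper's
# simulation parameters.
import math

P_tx_dBm   = 10.0      # assumed transmit power per WDM channel
L_km       = 2.4       # link length considered in the paper
atten_dBkm = 20.0      # assumed heavy-rain attenuation (placeholder)
d_tx_m     = 0.05      # assumed transmitter aperture diameter
d_rx_m     = 0.20      # assumed receiver aperture diameter
theta_rad  = 2e-3      # assumed beam divergence

# Geometric (beam-spreading) loss: fraction of the spread beam captured
# by the receiver aperture, expressed in dB.
beam_diameter = d_tx_m + theta_rad * L_km * 1000.0
geometric_loss_dB = -20.0 * math.log10(d_rx_m / beam_diameter)

atmospheric_loss_dB = atten_dBkm * L_km
system_loss_dB = 2.0   # assumed transmitter/receiver optics loss

P_rx_dBm = P_tx_dBm - geometric_loss_dB - atmospheric_loss_dB - system_loss_dB
print(f"received power = {P_rx_dBm:.1f} dBm")
```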

41 citations


Journal ArticleDOI
TL;DR: The proposed framework, Phishing Evolving Neural Fuzzy Framework (PENFF), has proved its ability to detect phishing emails by decreasing the error rate in the classification process.
Abstract: One of the internet attacks broadly used to financially deceive customers of banks and agencies is the unknown "zero-day" phishing email: a new phishing email that has not appeared in the old training dataset and is not included in any blacklist. Accordingly, the current paper seeks to detect and predict unknown "zero-day" phishing emails by providing a new framework called the Phishing Evolving Neural Fuzzy Framework (PENFF), which is based on an adaptive Evolving Fuzzy Neural Network (EFuNN). PENFF detects phishing email based on the level of similarity between email body features and URL features. The combined feature vector is processed by the EFuNN to create rules that help predict the phishing email value in online mode. The proposed framework has proved its ability to detect phishing emails by decreasing the error rate in the classification process, and the current approach is a highly compact framework. As performance indicators, the Root Mean Square Error (RMSE) and Non-Dimensional Error Index (NDEI) are 0.12 and 0.21 respectively, a low error rate compared with other approaches. Furthermore, the approach has learning capability with a small memory footprint.
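The two error measures quoted above can be computed as in the sketch below. The NDEI definition used here (RMSE divided by the standard deviation of the targets) is the usual one in evolving-fuzzy-neural-network work and is an assumption, not a formula quoted from the paper; the scores are toy values.

```python
# Sketch of the two error measures quoted in the abstract. NDEI is assumed
# to be RMSE normalised by the standard deviation of the targets.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def ndei(y_true, y_pred):
    return rmse(y_true, y_pred) / float(np.std(y_true))

# Toy phishing labels (1 = phishing, 0 = ham) and predicted similarity scores.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [0.9, 0.1, 0.7, 0.8, 0.3, 0.2]
print(rmse(y_true, y_pred), ndei(y_true, y_pred))
```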

40 citations


Journal ArticleDOI
TL;DR: In this study, an analysis of existing routing methodologies proposed for solving various routing issues in wireless communication is presented.
Abstract: Problem statement: In the past, the main focus of research in wireless networks has been to provide optimal routes between source and destination nodes. Wireless routing requires additional computational effort compared to wired routing in order to cope with wireless characteristics such as battery power constraints, frequent mobility and limited processing capability. Approach: Beyond the requirements of wired routing, wireless routing must also provide scalability, higher throughput, lower packet loss and QoS support. Wireless routing protocols can be categorized by routing update mechanism, by routing topology and by special resources such as energy awareness and location awareness. Results: Routing protocols for wireless networks are classified as proactive, reactive and hybrid; these protocols are discussed in this study together with experimental results. Conclusion: In this study, an analysis of existing routing methodologies proposed for solving various routing issues in wireless communication is presented.

36 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed congestion controlled adaptive multi-path routing protocol efficiently solves the problem of load balancing, network congestion and fault tolerance.
Abstract: Problem statement: Load balancing and network congestion are major problems in Mobile Ad-hoc Network (MANET) routing. Most existing routing protocols provide solutions to load balancing, congestion or fault tolerance individually. Approach: We propose a congestion-controlled adaptive multi-path routing protocol to achieve load balancing and avoid congestion in MANETs. The algorithm for finding multi-path routes computes fail-safe multiple paths, which provide all the intermediate nodes on the primary path with multiple routes to the destination. The fail-safe multiple paths include the nodes with the least load and the most battery power and residual energy. When the average load of a node along the route increases beyond a threshold, it distributes the traffic over disjoint multi-path routes to reduce the traffic load on the congested link. Results: The proposed work is implemented in NS2 and performance metrics such as throughput, packet delivery ratio, delay and overhead are measured and compared with an existing protocol. Conclusion/Recommendations: Simulation results show that the proposed algorithm efficiently addresses load balancing, network congestion and fault tolerance. The proposed algorithm can also be applied over any multipath routing protocol.
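The threshold rule described above (spill traffic onto disjoint paths once the primary path's load crosses a limit, favouring nodes with more residual energy) can be sketched as below. The threshold value, the energy-weighted split and all numbers are illustrative assumptions, not the paper's protocol logic.

```python
# Toy sketch of the load-threshold rule: when average load on the primary
# path exceeds a threshold, traffic is split across disjoint backup paths,
# weighted by residual energy. All values and the weighting are assumptions.
LOAD_THRESHOLD = 0.7

def split_traffic(primary_load, backup_paths):
    """backup_paths: list of (path_id, residual_energy) tuples."""
    if primary_load <= LOAD_THRESHOLD:
        return {"primary": 1.0}                 # keep everything on the primary path
    total_energy = sum(e for _, e in backup_paths)
    shares = {"primary": LOAD_THRESHOLD / primary_load}
    spill = 1.0 - shares["primary"]             # excess traffic to redistribute
    for path_id, energy in backup_paths:
        shares[path_id] = spill * energy / total_energy
    return shares

print(split_traffic(0.9, [("alt-1", 60.0), ("alt-2", 40.0)]))
```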

36 citations


Journal ArticleDOI
TL;DR: In this algorithm, the time required to break the encryption scheme is excessive because the key size is larger, and forward and backward security is ensured with neighborhood and message-specific keys during route discovery.
Abstract: Problem statement: A Mobile Ad-hoc Network is an infrastructureless network with no fixed trusted infrastructure. Packets may be dropped or intercepted by an eavesdropper during transmission, so an encryption method is required to send and receive packets securely. Approach: In this approach, the block and key sizes are increased to 256 bits; compared to the Rijndael algorithm, it is more secure and effective. To attain security goals such as authentication, integrity, non-repudiation and privacy, a secret key must be shared between the sender and receiver. For data communication, we use the MAC address when exchanging packets within an encrypted key exchange system. Results: For encryption, the Rijndael algorithm must process the whole data twice, whereas the proposed algorithm encrypts the whole data in a single pass. Encryption is performed with a neighborhood key and a message-specific key for enhanced security. Conclusion: In our algorithm, the time required to break the encryption scheme is excessive because the key size is larger. Security here is focused at the application level. Forward and backward security is ensured with neighborhood and message-specific keys during route discovery.

35 citations


Journal ArticleDOI
TL;DR: A cooperative algorithm based on PSO and k-means is presented; it has been used to cluster six datasets, and comparison with other algorithms shows that it has acceptable efficiency and robustness.
Abstract: Problem statement: Data clustering has been applied in many fields such as machine learning, data mining, wireless sensor networks and pattern recognition. One of the best-known clustering approaches is k-means, which has been used effectively in many clustering problems, but this algorithm has drawbacks such as convergence to local optima and sensitivity to initial points. Approach: Particle Swarm Optimization (PSO) is a swarm intelligence algorithm that is applied here to determine optimal cluster centers. In this study, a cooperative algorithm based on PSO and k-means is presented. Results: The proposed algorithm utilizes both the global search ability of PSO and the local search ability of k-means. The proposed algorithm, together with PSO, PSO with Contraction Factor (CF-PSO), k-means and the KPSO hybrid algorithm, has been used to cluster six datasets, and their efficiencies are compared with each other. Conclusion: Experimental results show that the proposed algorithm has acceptable efficiency and robustness.
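One plausible way such a PSO/k-means cooperation can work is sketched below: particles encode candidate sets of centroids, the within-cluster sum of squared errors serves as fitness, and the best particle seeds a final k-means refinement. The inertia and acceleration constants, the synthetic data and the exact division of labour are assumptions, not the authors' algorithm.

```python
# Minimal sketch of a PSO/k-means cooperation: particles encode candidate
# centroid sets, fitness is the within-cluster SSE, and the global best is
# refined by k-means. Parameters and data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
k, n_particles, n_iter, dim = 3, 10, 30, X.shape[1]
rng = np.random.default_rng(0)

def sse(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))

pos = rng.uniform(X.min(0), X.max(0), size=(n_particles, k, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([sse(p) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

# Local refinement: seed k-means with the PSO global best centroids.
refined = KMeans(n_clusters=k, init=gbest, n_init=1, random_state=0).fit(X)
print(sse(gbest), refined.inertia_)
```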

34 citations


Journal ArticleDOI
TL;DR: This study proposes a model for a Content Based Medical Image Retrieval System that uses texture features computed from the Gray Level Co-occurrence Matrix (GLCM), from which various statistical measures are derived, in order to increase the similarity between query and database images and improve retrieval performance while scaling to large databases.
Abstract: Problem statement: Recently, there has been huge progress in the collection of varied image databases in digital form. Most users find it difficult to search for and retrieve the required images in large collections, so an effective and efficient search engine tool is needed. In image retrieval systems, no methodology retrieves images from databases directly; instead, various visual features are used indirectly. In this system, the texture feature is extracted from each image, and only these feature representations are used in the retrieval process in order to retrieve exactly the desired images from the databases. Approach: The aim of this study is to construct an efficient image retrieval tool, "Content Based Medical Image Retrieval with Texture Content using Gray Level Co-occurrence Matrix (GLCM) and k-Means Clustering algorithms". This tool retrieves images based on their texture features and involves pre-processing, feature extraction, classification and retrieval steps. Its main features are the use of the GLCM for extracting the texture pattern of the image and the k-means clustering algorithm for image classification, which improves retrieval efficiency. The proposed image retrieval system consists of three stages: segmentation, texture feature extraction and clustering. In the segmentation stage, a preprocessing step segments the image into blocks; the texture feature extraction stage reduces the image region to be processed; and finally the extracted features are clustered using the k-means algorithm. The proposed system is employed as a domain-specific search engine for medical images such as CT scan, MRI scan and X-ray. Results: For retrieval efficiency, the conventional measures precision and recall were calculated using 1000 real-time medical images (100 in each category) from the MATLAB workspace database. For query images selected from the MATLAB Image Processing Toolbox workspace database, the proposed tool was tested and the precision and recall results are presented. The results indicate that the tool performs well across all 1000 real-time medical images, demonstrating the scalability of the system. Conclusion: This study proposed a model for a Content Based Medical Image Retrieval System using texture features computed from the Gray Level Co-occurrence Matrix (GLCM), from which various statistical measures were derived in order to increase the similarity between query and database images, improving retrieval performance while scaling to large databases.
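The GLCM-plus-k-means combination can be sketched in a few lines with scikit-image and scikit-learn. The four Haralick-style measures chosen here (contrast, homogeneity, energy, correlation) are typical GLCM statistics and, like the random image blocks, are assumptions rather than the paper's exact feature set.

```python
# Sketch of GLCM texture features plus k-means grouping, using scikit-image
# and scikit-learn. The four measures are typical choices, assumed rather
# than taken from the paper.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(img_u8):
    """img_u8: 2-D uint8 grayscale image block."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Toy "database": random blocks standing in for CT/MRI/X-ray image regions.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
features = np.array([glcm_features(img) for img in database])

# Cluster the database; at query time only the query's cluster is searched.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
query_vec = glcm_features(database[0])
print("search cluster:", km.predict([query_vec])[0])
```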

Journal ArticleDOI
TL;DR: D3SLR can be successfully implemented at critical points in the network as autonomous defense systems working independently to limit damage to the victim and also allows legitimate flows towards the target system with a higher degree of accuracy.
Abstract: Problem statement: The last decade has seen many prominent Distributed Denial of Service (DDoS) attacks on high-profile web servers. In this study, we deal with DDoS attacks by proposing a dynamic reactive defense system using adaptive Spin Lock Rate control (D3SLR). D3SLR identifies malicious traffic flowing towards a target system based on the volume of traffic directed at the victim machine. Approach: The proposed scheme uses a divide and conquer approach to identify the infected interface via which malicious traffic is received and selectively applies rate limiting based on the source of the traffic flow towards the victim and the type of packet, rather than collective rate limiting of all flow towards the victim. Results: Simulation results show that D3SLR detects the onset of attacks very early and reacts to the threat by rate limiting the malicious flow; the spin lock rate control adapts quickly to any changes in the rate of flow. Conclusion: D3SLR can be successfully implemented at critical points in the network as an autonomous defense system working independently to limit damage to the victim, while allowing legitimate flows towards the target system with a high degree of accuracy.

Journal ArticleDOI
TL;DR: The output of the simulation for the two-stage opamp shows that the PSO technique is an accurate and promising approach in determining the device sizes in an analog circuit.
Abstract: Problem statement: More and more products rely on analog circuits to improve speed and reduce power consumption. Analog circuit design plays an important role in VLSI implementation. Analog circuit synthesis may be the most challenging and time-consuming task, because it consists not only of topology and layout synthesis but also of component sizing. Approach: A Particle Swarm Optimization (PSO) technique is applied to the optimal design of analog circuits. Analog signal processing has many applications and widely uses OpAmp-based amplifiers, mixers, comparators and filters. Results: A two-stage opamp (Miller Operational Transconductance Amplifier, OTA) is considered for synthesis, subject to certain design specifications. Performance is evaluated with the Simulation Program with Integrated Circuit Emphasis (SPICE) circuit simulator until optimal transistor sizes are found. Conclusion: The simulation output for the two-stage opamp shows that the PSO technique is an accurate and promising approach for determining device sizes in an analog circuit.

Journal ArticleDOI
TL;DR: A novel turnaround-time utility scheduling approach is proposed, which focuses on both the high-priority and low-priority tasks that arrive for scheduling.
Abstract: Problem statement: One of the cloud service models, Infrastructure as a Service (IaaS), provides compute resources on demand for various applications such as parallel data processing. The compute resources offered in the cloud are extremely dynamic and probably heterogeneous. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution: particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution. However, current algorithms do not consider resource overload or underutilization during job execution. In this study, we focus on increasing the efficiency of scheduling for real-time cloud computing services. Approach: Our algorithm utilizes the turnaround-time utility efficiently by splitting it into a gain function and a loss function for a single task. The algorithm also assigns high priority to tasks that can complete early and lower priority to tasks facing abortion or deadline issues. Results: The algorithm has been implemented with both preemptive and non-preemptive methods. The experimental results show that it outperforms existing utility-based scheduling algorithms, and its performance is also compared between the preemptive and non-preemptive scheduling methods. Conclusion: Hence, a novel turnaround-time utility scheduling approach is proposed which focuses on both the high-priority and low-priority tasks that arrive for scheduling.
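The gain/loss split of the turnaround-time utility can be sketched as a simple time-dependent function used to rank waiting tasks. The functional forms and constants below are assumptions made purely for illustration; the paper does not specify them in the abstract.

```python
# Illustrative sketch of a turnaround-time utility split into a gain part
# (reward for early completion) and a loss part (penalty after the deadline).
# The functional forms and constants are assumptions, not the paper's.
def task_utility(turnaround, deadline, max_gain=100.0, loss_rate=5.0):
    if turnaround <= deadline:
        # Gain function: full reward at instant completion, decaying to 0 at the deadline.
        return max_gain * (1.0 - turnaround / deadline)
    # Loss function: penalty grows with how far the deadline was missed.
    return -loss_rate * (turnaround - deadline)

# Rank waiting tasks: high-priority tasks are those with the most utility to gain.
tasks = {"t1": (5.0, 20.0), "t2": (18.0, 20.0), "t3": (25.0, 20.0)}
for name, (tat, dl) in sorted(tasks.items(), key=lambda kv: -task_utility(*kv[1])):
    print(name, task_utility(tat, dl))
```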

Journal ArticleDOI
TL;DR: The performance of a question answering system in obtaining exact results can be improved by using a semantic search methodology to retrieve answers from an ontology model.
Abstract: Problem statement: Question Answering (QA) systems play an important role in current search engine optimization. Natural language processing techniques are commonly implemented in QA systems to interpret the user's question, and several steps are followed to convert questions into query form in order to obtain an exact answer. Approach: This paper surveys different types of question answering systems based on ontology and semantic web models with different query formats. For comparison, the types of input, the query processing method, the input and output format of each system and the performance metrics, along with their limitations, are analyzed and discussed. Our question answering architecture for automatic learning systems is used to overcome the difficulties raised by the different QA models. Results: The semantic search methodology is implemented using an RDF graph in the data structure domain and its performance is analyzed. Answers are retrieved from the ontology using a semantic search approach, and the question-to-query algorithm is evaluated in our system for performance analysis. Conclusion: The performance of a question answering system in obtaining exact results can be improved by using a semantic search methodology to retrieve answers from an ontology model. Our system successfully implements this technique and is also used intelligently for automatic learning.

Journal ArticleDOI
TL;DR: ACO with swap and 3-opt heuristic has the capability to tackle the Capacitated Vehicle Routing Problem with satisfactory solution quality and run time and is a viable alternative for solving the CVRP.
Abstract: Problem statement: The Capacitated Vehicle Routing Problem (CVRP) is a well-known combinatorial optimization problem concerned with the distribution of goods between a depot and customers. It is of economic importance to businesses, as approximately 10-20% of the final cost of goods is contributed by the transportation process. Approach: This problem was tackled using Ant Colony Optimization (ACO) combined with heuristic approaches that act as route improvement strategies. The proposed ACO utilizes the pheromone evaporation procedure of the standard ant algorithm with an evaporation rate that depends on the solutions found by the artificial ants. Results: Computational experiments were conducted on a benchmark data set, and the results show that combining two different heuristics in the ACO improved the ants' solutions more than ACO embedded with only one heuristic. Conclusion: ACO with swap and 3-opt heuristics has the capability to tackle the CVRP with satisfactory solution quality and run time. It is a viable alternative for solving the CVRP.
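The route-improvement side can be illustrated with a simple inter-route swap that respects vehicle capacity, as sketched below. The coordinates, demands and capacity are placeholder data, the cost model is plain Euclidean distance, and the ACO pheromone machinery and 3-opt move are omitted; this is not the authors' implementation.

```python
# Toy sketch of a "swap" route-improvement move used alongside ACO: try
# exchanging one customer between two routes and keep the move if it
# shortens total distance without violating capacity. Data are placeholders.
import math

coords = {0: (0, 0), 1: (2, 3), 2: (5, 1), 3: (6, 4), 4: (1, 6)}  # 0 = depot
demand = {1: 4, 2: 3, 3: 5, 4: 2}
capacity = 8

def dist(a, b):
    return math.dist(coords[a], coords[b])

def route_cost(route):
    path = [0] + route + [0]
    return sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def swap_improve(r1, r2):
    best = (route_cost(r1) + route_cost(r2), r1, r2)
    for i, a in enumerate(r1):
        for j, b in enumerate(r2):
            n1, n2 = r1[:i] + [b] + r1[i+1:], r2[:j] + [a] + r2[j+1:]
            if sum(demand[c] for c in n1) <= capacity and \
               sum(demand[c] for c in n2) <= capacity:
                cost = route_cost(n1) + route_cost(n2)
                if cost < best[0]:
                    best = (cost, n1, n2)
    return best

print(swap_improve([1, 3], [2, 4]))
```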

Journal ArticleDOI
TL;DR: For life-saving applications, thorough studies and tests should be conducted before WBANs can be widely applied to humans, particularly to address the challenges related to robust detection and classification techniques, increasing their accuracy and hence the confidence in applying them without physician intervention.
Abstract: Problem statement: A Wireless Body Area Sensor Network (WBASN) is a wireless network used for communication among sensor nodes operating on or inside the human body in order to monitor vital body parameters and movements. This study surveys the state of the art in Wireless Body Area Networks, discussing the major components of research in this area, including physiological sensing and preprocessing, WBASN communication techniques and data fusion for gathering data from sensors. In addition, data analysis and feedback are presented, including feature extraction, detection and classification of human-related phenomena. Approach: Comparative studies of the technologies and techniques used in such systems are provided, using qualitative comparisons and use-case analysis to give insight into potential uses for different techniques. Results and Conclusion: Wireless Sensor Network (WSN) technologies are considered one of the key research areas in computer science and the healthcare application industry. The sensor supply chain, the communication technologies used within the system and the resulting power consumption depend largely on the use case and the characteristics of the application. The authors conclude that for life-saving applications, thorough studies and tests should be conducted before WBANs can be widely applied to humans, particularly to address the challenges related to robust detection and classification techniques, increasing their accuracy and hence the confidence in applying them without physician intervention. Key words: Wireless body area sensor network, physiological sensing, data preprocessing, wireless sensor communications, data fusion, classification algorithms, Chronic Disease (CD)

Journal ArticleDOI
TL;DR: Based on the experiments, the NARX model with five predictor variables is the best model compared to BPNN, and treatment of missing data using mean and OLR approach produced comparable results for this case study.
Abstract: The Department of Irrigation and Drainage (DID) Malaysia and the Malaysian Meteorological Department (MMD) have measured flood characteristic benchmarks including water level, inundation area, peak inundation, peak discharge, volume of flow and duration of flooding. In terms of water levels, DID has introduced three categories of critical level stages, namely normal, alert and danger levels. One of the rivers monitored by DID that has reached the danger level is Sungai Dungun, located in Dungun district, Terengganu. The aim of this study is to find a suitable prediction model for water level, with monthly rainfall, evaporation rate, temperature and relative humidity from the same catchment at the Dungun River as input variables, using neural-network-based nonlinear time series regression methods, namely the Backpropagation Neural Network (BPNN) and Nonlinear Autoregressive model with Exogenous inputs (NARX) networks. Variable selection procedures are also developed to select significant explanatory variables. In addition, data pre-processing, such as treatment of missing data, was performed on the original data collected by DID and MMD. The methods are compared to obtain the best model for predicting the water level of the Dungun River. Based on the experiments, the NARX model with five predictor variables is the best model compared to BPNN. In addition, treatment of missing data using the mean and OLR approaches produced comparable results for this case study.
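A NARX-style setup can be approximated by feeding lagged water levels together with the exogenous inputs into a feed-forward network, as sketched below. The use of scikit-learn's MLPRegressor, the lag length of two, the synthetic data and the mean-imputation step are all assumptions made for illustration; they are not the paper's model or dataset.

```python
# Sketch of a NARX-style setup: lagged water levels plus exogenous inputs
# (rainfall, evaporation, temperature, humidity) feed a small neural network.
# MLPRegressor, the lag length and the synthetic data are assumptions; mean
# imputation stands in for the missing-data treatment mentioned above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lag = 120, 2
exog = rng.normal(size=(n, 4))                     # rainfall, evap., temp., humidity
level = np.cumsum(rng.normal(size=n)) + exog @ [0.5, -0.2, 0.1, 0.3]

exog[rng.random(exog.shape) < 0.05] = np.nan       # inject missing values
col_means = np.nanmean(exog, axis=0)
exog = np.where(np.isnan(exog), col_means, exog)   # mean imputation

# Inputs: level(t-1), level(t-2) and current exogenous variables; target: level(t).
X = np.column_stack([level[lag - 1:-1], level[lag - 2:-2], exog[lag:]])
y = level[lag:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X[:-12], y[:-12])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[-12:]) - y[-12:]) ** 2)))
```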

Journal ArticleDOI
TL;DR: A simple and computationally effective histogram modification algorithm is presented for contrast enhancement in low illumination environment that makes it easy to implement and use in real time systems.
Abstract: Problem statement: Image enhancement improves an image's appearance by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Histogram-based image enhancement techniques mainly equalize the histogram of the image and increase its dynamic range. Approach: Histogram Equalization is widely used in different forms to perform contrast enhancement in images. However, it can create side effects such as a washed-out appearance and false contouring due to the significant change in brightness. In order to overcome these problems, mean-brightness-preserving Histogram Equalization techniques have been proposed. Generally, these methods partition the histogram of the original image into sub-histograms and then independently equalize each sub-histogram. Results: Recent histogram-based techniques for contrast enhancement in low-illumination environments are compared, with experimental results collected on low-light images. Conclusion: The histogram modification algorithm is simple and computationally efficient, which makes it easy to implement and use in real-time systems.
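The sub-histogram idea described above can be illustrated with the generic brightness-preserving bi-histogram equalization scheme: split the histogram at the image mean and equalize each half separately. This is a minimal sketch of that generic scheme, not any of the specific algorithms the paper compares.

```python
# Sketch of the mean-preserving idea: split the histogram at the image mean
# and equalize each sub-histogram independently (the generic BBHE scheme),
# so global brightness shifts less than with plain equalization.
import numpy as np

def equalize_range(img, lo, hi):
    """Histogram-equalize pixels in [lo, hi] back into [lo, hi]."""
    mask = (img >= lo) & (img <= hi)
    hist, _ = np.histogram(img[mask], bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum() / max(hist.sum(), 1)
    out = img.copy().astype(np.float64)
    out[mask] = lo + cdf[img[mask] - lo] * (hi - lo)
    return out

def bbhe(img_u8):
    mean = int(img_u8.mean())
    lower = equalize_range(img_u8, 0, mean)
    upper = equalize_range(img_u8, mean + 1, 255)
    return np.where(img_u8 <= mean, lower, upper).astype(np.uint8)

dark = (np.random.default_rng(0).random((64, 64)) * 80).astype(np.uint8)
print(dark.mean(), bbhe(dark).mean())
```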

Journal ArticleDOI
TL;DR: This study proposes to investigate the performance of multimodal biometrics using palm print and fingerprint and shows an average improvement of 8.52% compared to using palmprint technique alone.
Abstract: Problem statement: A biometric is a unique, measurable physiological or behavioral characteristic of a person and finds extensive application in authentication and authorization. Fingerprint, palm print, iris and voice are some of the most widely used biometrics for personal identification. To reduce error rates and enhance the usability of biometric systems, multimodal biometric systems are used, in which more than one biometric characteristic is employed. Approach: In this study, we investigate the performance of multimodal biometrics using palm print and fingerprint. Features are extracted using the Discrete Cosine Transform (DCT) and attributes are selected using Information Gain (IG). Results and Conclusion: The proposed technique shows an average improvement of 8.52% compared to using the palm print technique alone. The processing time for verification does not increase compared to palm print techniques.
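The fusion idea can be sketched as below: low-frequency DCT coefficients from each modality are concatenated, the most informative attributes are kept, and a simple classifier identifies the subject. Synthetic arrays stand in for palm print and fingerprint images, mutual information approximates the information gain criterion, and the k-NN classifier and all parameters are assumptions, not the paper's pipeline.

```python
# Sketch of the fusion idea: low-frequency DCT coefficients from a palmprint
# and a fingerprint image are concatenated, the most informative attributes
# are kept (information gain approximated by mutual information) and the
# subject is classified. Synthetic arrays stand in for real biometric images.
import numpy as np
from scipy.fft import dctn
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dct_features(img, keep=8):
    return dctn(img, norm="ortho")[:keep, :keep].ravel()

rng = np.random.default_rng(0)
n_subjects, samples_each = 5, 6
X, y = [], []
for subject in range(n_subjects):
    palm_base, finger_base = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))
    for _ in range(samples_each):
        palm = palm_base + 0.3 * rng.normal(size=(32, 32))
        finger = finger_base + 0.3 * rng.normal(size=(32, 32))
        X.append(np.concatenate([dct_features(palm), dct_features(finger)]))
        y.append(subject)

clf = make_pipeline(SelectKBest(mutual_info_classif, k=32),
                    KNeighborsClassifier(n_neighbors=3))
clf.fit(X[:-5], y[:-5])
print("held-out predictions:", clf.predict(X[-5:]))
```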

Journal ArticleDOI
TL;DR: A fuzzy-based secure data aggregation technique is proposed which performs clustering and cluster head election, efficiently checks for malicious nodes based on system parameters and maintains a secure aggregation process in the network.
Abstract: Problem statement: Secure data aggregation is a challenging task in wireless sensor networks due to factors such as the higher complexity and greater overhead of cryptographic techniques. These issues need to be overcome with an efficient technique. Approach: We propose a fuzzy-based secure data aggregation technique with three phases. In the first phase, it performs clustering and cluster head election. In the second phase, within each cluster, the power consumed, the distance and the trust value are calculated for each member. In the third phase, based on these parameters, fuzzy logic is used to select the secure, non-faulty members for data aggregation. Finally, the aggregated data from the cluster heads are transmitted to the sink. Results: Simulation results show that our technique improves throughput and packet delivery ratio with reduced packet drop and lower energy consumption. Conclusion: The proposed technique efficiently checks for malicious nodes based on system parameters and maintains a secure aggregation process in the network.

Journal ArticleDOI
TL;DR: The voltage stability index Lmn can be useful for estimating the distance from the current operating point to the voltage collapse point, and the effectiveness of the analyzed methods is demonstrated through simulation studies on the IEEE 14-bus reliability test system.
Abstract: Problem statement: Estimating the loadability margin of the power system is essential in real-time voltage stability assessment. Voltage stability is currently one of the most important research areas in the field of electrical power systems. In power system operation, an unpredictable event is termed a contingency; it may be caused by a line outage and could lead to instability of the entire system. Voltage stability analysis and contingency analysis are performed on a power system by evaluating the derived voltage stability index. Approach: The voltage stability index Lmn can be used to estimate the distance from the current operating point to the voltage collapse point. The index can reveal the critical bus of a power system, indicate the stability of each line connected between two buses in an interconnected network, or evaluate the voltage stability margin of a system. Results: Flexible Alternating Current Transmission System (FACTS) devices have been proposed as an effective solution for controlling power flow and regulating bus voltage in electrical power systems, resulting in increased transfer capability, low system losses and improved stability. However, the extent to which FACTS devices perform well depends strongly on their location and parameters. The Unified Power Flow Controller (UPFC) is the most promising FACTS device for power flow control. Conclusion/Recommendations: The performance of this index is presented, and the effectiveness of the analyzed methods is demonstrated through simulation studies on the IEEE 14-bus reliability test system.

Journal ArticleDOI
TL;DR: Geographical Division Traceback, a novel approach for efficient IP traceback and DDoS defense, is proposed; it possesses several advantageous features, such as easy traversal to the attacker, and improves the efficiency of tracing the attacking system.
Abstract: Problem statement: Distributed Denial of Service (DDoS) is a serious threat to the internet that denies legitimate users access by blocking the service. DDoS attacks are among the most serious and threatening issues on the modern web because of their notorious harmfulness and the delays they cause in the availability of services to the intended users. Approach: In this study, we propose a novel approach, Geographical Division Traceback (GDT), for efficient IP traceback and DDoS defense. Results: Unlike traditional traceback methodologies, GDT provides a quick mechanism to identify the attacker with the help of a single packet, which imposes very little computational overhead on the routers, and the victim can also avoid receiving data from the same machine in the future. The proposed IP traceback mechanism utilizes geographical information to find the machine responsible for causing the delay: the IP packet carries details of the subspaces through which its path passes, which makes it possible to verify that the packet travels through the network within one of the subspaces, and the division into subspaces leads back to the source of the attack. Conclusion/Recommendations: This method possesses several advantageous features, such as easy traversal to the attacker, and improves the efficiency of tracing the attacking system.

Journal ArticleDOI
TL;DR: The Naive Bayes and decision tree classifiers show comparatively better performance as weak classifiers with AdaBoost and should be considered for building an IDS.
Abstract: Problem statement: Nowadays, the Internet plays an important role in communication between people. To ensure secure communication between two parties, we need a security system that detects attacks effectively. Network intrusion detection serves as a major system, working with other security systems to protect computer networks. Approach: In this article, an AdaBoost algorithm for network intrusion detection with a single weak classifier is proposed. Classifiers such as Bayes Net, Naive Bayes and decision trees are used as weak classifiers. A benchmark data set is used in the experiments to demonstrate that the boosting algorithm can greatly improve the classification accuracy of weak classification algorithms. Results: Our approach achieves a higher detection rate with low false alarm rates and is scalable to large data sets, resulting in an effective intrusion detection system. Conclusion: The Naive Bayes and decision tree classifiers show comparatively better performance as weak classifiers with AdaBoost and should be considered for building an IDS.
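The boosting-of-a-single-weak-classifier setup can be sketched with scikit-learn as below. A decision stump and Gaussian naive Bayes stand in for the weak classifiers named in the abstract, and synthetic imbalanced data stands in for the benchmark records; none of this is the authors' experimental code.

```python
# Sketch of boosting a weak learner for intrusion detection. A decision stump
# and Gaussian naive Bayes stand in for the weak classifiers; synthetic,
# imbalanced data stands in for the benchmark (e.g., KDD-style) records.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)   # imbalanced: few "attack" records

weak_learners = {
    "decision stump": DecisionTreeClassifier(max_depth=1),
    "naive Bayes": GaussianNB(),
}
for name, weak in weak_learners.items():
    # 'estimator=' is the scikit-learn >= 1.2 name (older versions: base_estimator=).
    booster = AdaBoostClassifier(estimator=weak, n_estimators=50, random_state=0)
    print(name, cross_val_score(booster, X, y, cv=3).mean())
```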

Journal ArticleDOI
TL;DR: NFC technology, which comprises three modes of operation and is internationally standardized, can be applied as a contactless means of simplifying transactions, content delivery and information sharing on a mobile platform.
Abstract: Problem statement: Near Field Communication (NFC) technology opens up exciting new usage scenarios for mobile-device-based platforms. Users of NFC-enabled devices can simply point or touch their devices to other NFC-enabled elements in the environment to communicate with them ("contactless"), making application and data usage easy and convenient. Approach: The study describes the characteristics and advantages NFC technology offers for the development of mobile airline ticketing. This scenario has the potential to replace conventional systems that are not gated and use paper tickets; in such a system, a transport application can today be loaded onto an NFC-enabled phone. To study such a case, Yogyakarta International Airport is taken as an example for discussion. Results: NFC technology, which comprises three modes of operation and is internationally standardized, can be applied as a contactless means of simplifying transactions, content delivery and information sharing on a mobile platform. Conclusion: The idea of an NFC application for mobile airline ticketing has been discussed for Yogyakarta International Airport.

Journal ArticleDOI
TL;DR: This scheme tries to obtain and prove that the data stored in the cloud is not modified by the provider, thereby ensuring the integrity of data and secure computation and uses the Merkle hash tree for checking the correctness of computations done by the cloud service provider.
Abstract: Cloud computing is an emerging computing paradigm in which information technology resources and capacities are provided as services over the internet. Users can remotely store their data in the cloud and thus be relieved of the burden of local data storage and maintenance, but they have no control over the remotely located data. This unique feature poses many security challenges, one of the most important being the integrity of data and computations. To ensure the correctness of users' data in the cloud, an effective scheme assuring the integrity of the data stored in the cloud is proposed. We aim to prove that the data stored in the cloud is not modified by the provider, thereby ensuring data integrity. To ensure secure computation, our scheme uses the Merkle hash tree to check the correctness of computations done by the cloud service provider. The algorithms are implemented using core Java and Java Remote Method Invocation (RMI) for client-server communication, in a private cloud environment set up with the Eucalyptus tool. This method assures data integrity and secure computation with reduced computational and storage overhead for the client.
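The Merkle-hash-tree integrity check can be illustrated by computing a root hash over data blocks and recomputing it after a block is modified, as in the minimal sketch below. SHA-256 via hashlib and the duplicate-last-node padding rule are assumptions for illustration (the paper's implementation is in Java), not the authors' exact construction.

```python
# Sketch of the Merkle-hash-tree integrity check: the client keeps only the
# root hash; any modification of a stored block by the provider changes the
# recomputed root. SHA-256 and the padding rule are assumed choices.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root_at_upload = merkle_root(blocks)

blocks[2] = b"tampered"                    # provider silently modifies a block
assert merkle_root(blocks) != root_at_upload
print("tampering detected: recomputed root differs from the stored root")
```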

Journal ArticleDOI
TL;DR: The results obtained show that TFRC or TCP should choose AODV as its routing protocol because it has less jitter, which is one of the critical performance metrics for multimedia applications.
Abstract: Problem statement: Although much effort has gone into studying the behaviour of TCP in MANETs, the behaviour of TFRC in MANETs remains unclear. The purpose of this research is twofold. First, we studied the behaviour of TFRC and TCP over AODV and DSR as the underlying routing protocols in terms of throughput, delay and jitter. The second objective was to identify whether MANET routing protocols have an impact on transport protocols. Approach: Network Simulator 2 (NS-2) was used to conduct all of the experiments, i.e., TFRC over AODV, TFRC over DSR, TCP over AODV and TCP over DSR. We created 30 nodes in a 1000×1000 m area and each node was assigned CBR traffic, a transport protocol and a routing protocol. In order to simulate node mobility, we implemented a Random Waypoint mobility model with varying speeds of 5, 10, 15 and 20 m/sec and a 10 sec pause time. Results: We observed that TFRC throughput increases almost 55% when using DSR as its routing protocol, but TCP throughput shows no significant difference across the underlying protocols. However, in terms of jitter and delay, both routing protocols, i.e., AODV and DSR, have an impact of more than 50% on TFRC and TCP. Conclusion/Recommendations: The results obtained also show that TFRC or TCP should choose AODV as its routing protocol because it has less jitter, which is one of the critical performance metrics for multimedia applications.

Journal ArticleDOI
TL;DR: This study addresses the problem of clustering a dynamic dataset whose size increases over time as more data are added.
Abstract: Problem statement: Clustering and visualizing high-dimensional dynamic data is a challenging problem. Most existing clustering algorithms are based on the static statistical relationships among data. Dynamic clustering is a mechanism to adapt and discover clusters in real-time environments. Many applications, such as incremental data mining in data warehousing and sensor networks, rely on dynamic data clustering algorithms. Approach: In this work, we present a density-based dynamic data clustering algorithm for clustering an incremental dataset and compare its performance with a full run of standard DBSCAN and Chameleon on the dynamic dataset. Most clustering algorithms perform well and give ideal results when evaluated with clustering accuracy, which is calculated from the original class labels and the calculated class labels; however, measuring performance with a cluster validation metric can give a different picture. Results: This study addresses the problem of clustering a dynamic dataset whose size increases over time as more data are added. To evaluate the algorithms, we therefore used the Generalized Dunn Index (GDI) and the Davies-Bouldin index (DB) as cluster validation metrics, as well as the time taken for clustering. Conclusion: In this study, we have implemented and evaluated the proposed density-based dynamic clustering algorithm. Its performance was compared with the Chameleon and DBSCAN clustering algorithms, and the proposed algorithm performed significantly well in terms of both clustering accuracy and speed.
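The evaluation setup (re-cluster the growing dataset and track a validation index and runtime) can be sketched as below. The full DBSCAN re-run is the baseline the study compares against, not the proposed incremental algorithm, and the synthetic blobs, eps/min_samples values and increment sizes are assumptions.

```python
# Sketch of the evaluation setup: as the dataset grows, cluster it and track
# the Davies-Bouldin index and runtime. The full DBSCAN re-run below is the
# baseline the study compares against, not the proposed incremental algorithm.
import time
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

X_all, _ = make_blobs(n_samples=3000, centers=4, cluster_std=0.6, random_state=0)

for size in (1000, 2000, 3000):           # dataset arriving in increments
    X = X_all[:size]
    start = time.perf_counter()
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)
    elapsed = time.perf_counter() - start
    clustered = labels != -1              # the DB index is computed over clustered points
    score = davies_bouldin_score(X[clustered], labels[clustered])
    print(f"n={size}: DB index={score:.3f}, time={elapsed:.3f}s")
```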

Journal ArticleDOI
TL;DR: The proposed TP Scheduling (Transportation Problem based) approach responds to tasks assigned by clients in a Poisson arrival pattern and achieves improved reliability in a dynamic, decentralized cloud environment.
Abstract: Problem statement: The cloud is a highly dynamic environment, yet existing task scheduling algorithms are mostly static, even though they consider various parameters such as time, cost, makespan, speed, scalability, throughput, resource utilization and scheduling success rate. Available scheduling algorithms are mostly heuristic in nature, complex and time-consuming, and do not consider the reliability and availability of the cloud computing environment. Therefore, there is a need for a scheduling algorithm that improves availability and reliability in the cloud. Approach: We propose a new algorithm using a modified linear programming (transportation problem) formulation for task scheduling and resource allocation in decentralized, dynamic cloud computing. The main objective is to improve the reliability of the cloud environment by periodically considering the resources available in each cluster and their working status, and to maximize profit for cloud providers by minimizing the total scheduling, allocation and execution cost as well as the total turnaround time, total waiting time and total execution time. The proposed algorithm also utilizes historical task values, such as the past success rate, the failure rate of each task in each cluster and previous execution times and total costs across clusters, from a Task Info Container (TFC), for task scheduling and resource allocation in the near future. Results: Our TP Scheduling (Transportation Problem based) approach responded to various tasks assigned by clients in a Poisson arrival pattern and achieved improved reliability in a dynamic, decentralized cloud environment. Conclusion: With our proposed TP Scheduling algorithm, we improve the reliability of decentralized, dynamic cloud computing.
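A transportation-problem formulation of task placement can be sketched with a generic linear program: clusters supply VM slots, tasks demand one slot each, and the objective minimizes total assignment cost. The cost matrix, capacities and the use of scipy.optimize.linprog are placeholder assumptions, not the paper's model or data.

```python
# Sketch of a transportation-problem formulation for task placement: clusters
# supply VM slots, tasks demand one slot each, and the LP minimizes the total
# assignment cost. The cost matrix and capacities are placeholder values.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],     # cost[i][j]: cost of running task j on cluster i
                 [5.0, 3.0, 8.0],
                 [7.0, 5.0, 2.0]])
supply = [2, 1, 2]                     # free VM slots per cluster
demand = [1, 1, 1]                     # each task needs one slot

n_clusters, n_tasks = cost.shape
# Equality constraints: each task is fully assigned (columns sum to its demand).
A_eq = np.zeros((n_tasks, n_clusters * n_tasks))
for j in range(n_tasks):
    A_eq[j, j::n_tasks] = 1.0
# Inequality constraints: cluster capacity (rows sum to at most the supply).
A_ub = np.zeros((n_clusters, n_clusters * n_tasks))
for i in range(n_clusters):
    A_ub[i, i * n_tasks:(i + 1) * n_tasks] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print(res.x.reshape(n_clusters, n_tasks).round(2))
```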

Journal ArticleDOI
TL;DR: This study compares the performance of two popular secret-key encryption algorithms, Blowfish and Skipjack, and concludes that Blowfish is the better-performing algorithm for implementation.
Abstract: Problem statement: The main goal guiding the design of any encryption algorithm is that it be secure against unauthorized attacks. For practical applications, performance and implementation cost are also important concerns. A data encryption algorithm is not of much use if it is secure but slow, because it is common practice to embed encryption algorithms in other applications such as e-commerce, banking and online transaction processing applications. Embedding encryption algorithms in other applications also precludes a hardware implementation and is thus a major cause of degraded overall system performance. Approach: In this study, the performance of two popular secret-key encryption algorithms, Blowfish and Skipjack, was compared. Results: Blowfish and Skipjack were implemented and their performance compared by encrypting input files of varying content and sizes. The algorithms were implemented in a single language, C#, using their standard specifications, to allow a fair comparison of execution speeds. Conclusion: The performance results are summarized and a conclusion is presented. Based on the experiments, we can conclude that Blowfish is the better-performing algorithm for implementation.
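The style of comparison (encrypt buffers of increasing size and record throughput) can be sketched as below. The original benchmark was written in C#; this Python sketch uses PyCryptodome's Blowfish only, since Skipjack is not available in common Python libraries, and the sizes and key are arbitrary placeholders.

```python
# Sketch of the benchmarking style: encrypt buffers of increasing size and
# record throughput. PyCryptodome's Blowfish is used here; Skipjack is not
# in common Python libraries, and the original comparison was done in C#.
import os
import time
from Crypto.Cipher import Blowfish          # pip install pycryptodome

key = os.urandom(16)
for size in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
    data = os.urandom(size)                 # Blowfish block size is 8 bytes,
    data += b"\0" * (-len(data) % 8)        # so pad to a multiple of 8
    cipher = Blowfish.new(key, Blowfish.MODE_ECB)
    start = time.perf_counter()
    cipher.encrypt(data)
    elapsed = time.perf_counter() - start
    print(f"{size / 1024:.0f} KiB: {size / elapsed / 1e6:.1f} MB/s")
```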