
Showing papers in "International Journal of Computer Trends and Technology in 2016"


Journal ArticleDOI
TL;DR: The paper concludes that organizations no longer neglect unstructured data; rather, they are devising means of analyzing it to extract information.
Abstract: With the emergence of new channels and technologies such as social networking, mobile computing, and online advertising, the data generated no longer has a standard format or structure like conventional data and cannot be processed using relational models. It comes in the form of text, XML, emails, images, weblogs, videos, and so on, resulting in a surge of new data types. This formless data is either semi-structured or unstructured and makes searching and analysis complex. This paper gives an overview of the unstructured data that forms the backbone of predictive analysis. It outlines the sources and elements of unstructured data and how organizations benefit from gathering, analyzing, and using it. The paper concludes that organizations no longer neglect unstructured data; rather, they are devising means of analyzing it to extract information.

41 citations



Journal ArticleDOI
TL;DR: Experts agree that the majority of government information leaks occur on networks, making information-leakage control critical in government network design and requiring more advanced and secure e-Government networks to protect data from growing security threats and risks.
Abstract: The growth and rapid adoption of the Internet has greatly changed how all organizations deal with their respective stakeholders. As the move from administrative operations to service operations accelerates, an e-government network platform is a solution to transform the way governments do business and deliver services. As is well known, e-government is delivered through websites that provide reliable content based on a strong infrastructure of digital networks, application servers, the Internet, extensive databases, and other supporting services. It requires more advanced and secure e-Government networks to protect data from growing security threats and risks. Threats include unauthorized access to resources, malicious damage, and data interception; security risks include viruses, cyber-attacks, and key information leakages. Experts agree that the majority of government information leaks occur on networks, making information-leakage control critical in government network design.

26 citations


Journal ArticleDOI
TL;DR: The paper presents the most relevant work in the area of EDM in higher education, covering course management systems, student behavior, decision support systems, and student retention and attrition.
Abstract: Educational data mining (EDM) is a broad term that focuses on analyzing, exploring, predicting, clustering, and classifying data in educational institutions. EDM is growing rapidly and spans many disciplines, such as education, e-learning, data mining, data analysis, and intelligent systems. The paper presents the most relevant work in the area of EDM in higher education, covering course management systems, student behavior, decision support systems, and student retention and attrition. The paper also provides a comparative study of some of the research work in these areas. Because of the growing interdisciplinary nature of EDM, the paper also tries to provide a boundary, scope, and definitions for EDM. Keywords— Data Mining, DM, Educational Data Mining, EDM, Knowledge Discovery, KDD, Decision Support System, DSS, Course Management Systems, CMS. I. EDUCATIONAL DATA MINING DEFINITION

16 citations


Journal ArticleDOI
TL;DR: The paper identifies two features of multi-agent learning that establish its study as a field separate from ordinary machine learning.
Abstract: In some applications, the output of the system is a sequence of actions. There is no such measure as the best action in any intermediate state; an action is good only if it is part of a good policy. A single action is not important; what matters is the policy, that is, the sequence of correct actions to reach the goal. In such a case, a machine learning program should be able to assess the goodness of policies and learn from past good action sequences to generate a policy. A multi-agent environment is one in which there is more than one agent, the agents interact with one another, and there are restrictions on the environment such that agents may not at any given time know everything about the world that other agents know. Two features of multi-agent learning establish its study as a field separate from ordinary machine learning. Parallelism, scalability, simpler construction, and cost effectiveness are the main characteristics of multi-agent systems. A multi-agent learning model is given in this paper, and two multi-agent learning algorithms, i.e., the Strategy Sharing and Joint Rewards algorithms, are implemented. In the Strategy Sharing algorithm, a simple average of Q-tables is taken: each Q-learning agent learns from all of its teammates by averaging their Q-tables. The Joint Rewards learning algorithm combines Q-learning with the idea of joint rewards. The paper presents results and a performance comparison of the two multi-agent learning algorithms.
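The Strategy Sharing step described above (each agent adopting the element-wise average of all teammates' Q-tables) can be sketched as follows; this is an illustrative sketch, not the paper's code, and the function name and dict-based Q-table layout are our own assumptions.

```python
# Hypothetical sketch of Strategy Sharing: every Q-learning agent replaces
# its Q-table with the element-wise average of all teammates' tables.
# Q-tables are dicts mapping (state, action) pairs to values; entries an
# agent has never visited are treated as 0.0 in the average.

def share_strategies(q_tables):
    """Return a new, shared (averaged) Q-table for every agent."""
    keys = set()
    for q in q_tables:
        keys.update(q.keys())
    avg = {k: sum(q.get(k, 0.0) for q in q_tables) / len(q_tables)
           for k in keys}
    # Every agent adopts its own copy of the shared table.
    return [dict(avg) for _ in q_tables]
```

In practice this sharing step would be interleaved with ordinary Q-learning updates; here only the averaging itself is shown.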

13 citations


Journal ArticleDOI
TL;DR: The proposed Genetic Algorithm based Firefly Algorithm is used to optimize the weights between layers and the biases of the neural network in order to minimize the fitness function, which is defined as the mean squared error.
Abstract: Feed-forward neural networks are popular classification tools that are broadly used for early detection and diagnosis of breast cancer. In recent years, great attention has been paid to bio-inspired optimization techniques due to their robustness, simplicity, and efficiency in solving complex optimization problems. In this paper, a Genetic Algorithm based Firefly Algorithm for training neural networks is introduced. The proposed algorithm optimizes the weights between layers and the biases of the neural network in order to minimize the fitness function, which is defined as the mean squared error. The simulation results indicate that the Firefly Algorithm optimizes weights and biases better when hybridized with the Genetic Algorithm. The proposed algorithm has been tested on the Wisconsin Breast Cancer Dataset to evaluate its performance, efficiency, and effectiveness against existing methods: the Firefly Algorithm, Biogeography Based Optimization, Particle Swarm Optimization, and Ant Colony Optimization. The proposed Genetic Algorithm based Firefly Algorithm achieved the lowest mean squared error, 0.0014, compared with 0.002 for the Firefly Algorithm, 0.003 for Biogeography Based Optimization, 0.0135 for Ant Colony Optimization, and 0.035 for Particle Swarm Optimization.
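The fitness function being minimized can be made concrete with a small sketch: the candidate an individual (firefly or GA chromosome) encodes is a flat vector of all weights and biases, and its fitness is the mean squared error of the resulting network. This is not the authors' code; the layer sizes, encoding order, and tanh activation are illustrative assumptions.

```python
import math

# Hypothetical sketch of the MSE fitness for a one-hidden-layer
# feed-forward network whose parameters are flattened into one vector,
# as a GA/Firefly individual would encode them.
# vec layout: hidden weights, hidden biases, output weights, output bias.

def mse_fitness(vec, samples, n_in=2, n_hid=2):
    """Mean squared error of the network encoded by vec over samples."""
    w1 = [vec[i * n_in:(i + 1) * n_in] for i in range(n_hid)]
    b1 = vec[n_hid * n_in:n_hid * n_in + n_hid]
    w2 = vec[n_hid * n_in + n_hid:n_hid * n_in + 2 * n_hid]
    b2 = vec[-1]
    err = 0.0
    for x, target in samples:
        # Hidden layer with tanh activation, then a linear output unit.
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        y = sum(wi * hi for wi, hi in zip(w2, h)) + b2
        err += (y - target) ** 2
    return err / len(samples)
```

An optimizer (GA, Firefly, PSO, ...) would then search over such vectors for the one with the smallest fitness value.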

12 citations


Journal ArticleDOI
TL;DR: This paper provides an overview of security metrics, including their definition, standards, advantages, types, problems, taxonomies, and risk assessment methods; it also classifies security metrics and explains their risks.
Abstract: Measuring information security is difficult; it is hard to have one metric that covers all types of devices. A security metric is a standard used for measuring an organization's security. Good metrics are needed for analysts to answer many security-related questions. Effective measurement and reporting are required to improve the effectiveness and efficiency of controls and to ensure strategic alignment in an objective, reliable, and efficient manner. This paper provides an overview of security metrics, including their definition, standards, advantages, types, problems, taxonomies, and risk assessment methods; it also classifies security metrics and explains their risks. Keywords— Security, Metrics, Advantages, Problems, Risk Management.

11 citations


Journal ArticleDOI
TL;DR: If the mobile phone can listen to the user's requests to handle daily affairs and then give the right response, it will be convenient for users to communicate with their phone, and the mobile phone will be much smarter as a human assistant.
Abstract: This paper concentrates on Android development based on voice control (recognizing speech, generating and analyzing the corresponding commands, and responding intelligently and automatically). Speech is the typical way people communicate in day-to-day life. If the mobile phone can listen to the user's requests to handle daily affairs and then give the right response, it will be convenient for users to communicate with their phone, and the mobile phone will be much smarter as a human assistant. The application includes a prediction feature that makes recommendations based on user behaviour. Keywords— Intelligent System, Data Mining, Voice Assistant, Android.

10 citations


Journal ArticleDOI
TL;DR: The proposed system extracts source-code features, URL features, and image features from the phishing website and feeds them to an ant colony optimization algorithm to obtain a reduced feature set.
Abstract: A phishing attack is a deceptive trick to steal a user's private information by duping them into visiting a spurious website designed to mimic and resemble an authentic one. The user's confidential information, such as username, password, and PIN number, is grabbed by the attacker and used to create fraudulent transactions; the information holder's credentials as well as money are seized. The phishing website bears a highly convincing resemblance to the legitimate one, by which the attacker seizes the user's credentials. In order to detect phishing attacks, various techniques exist, such as blacklisting, whitelisting, heuristics, and machine learning; nowadays machine learning is used and found to be more effective. The proposed system extracts source-code features, URL features, and image features from the phishing website. The extracted features are given to an ant colony optimization algorithm to obtain a reduced feature set. The reduced features are then given to a Naïve Bayes classifier in order to classify the webpage as genuine or phished.
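The final classification step can be sketched as a Bernoulli Naive Bayes over binary website features (the kind of reduced feature set the ACO stage would produce). This is a hedged illustration, not the paper's implementation; the feature encoding and toy data are invented.

```python
import math

# Illustrative Bernoulli Naive Bayes over binary features: each row is a
# 0/1 feature vector for one website, each label is 'phished' or 'genuine'.
# Laplace smoothing (alpha) avoids zero probabilities.

def train_nb(rows, labels, alpha=1.0):
    classes = set(labels)
    n_feat = len(rows[0])
    model = {}
    for c in classes:
        idx = [i for i, y in enumerate(labels) if y == c]
        prior = math.log(len(idx) / len(labels))
        # Smoothed probability that each feature equals 1 within class c.
        probs = [(sum(rows[i][j] for i in idx) + alpha) / (len(idx) + 2 * alpha)
                 for j in range(n_feat)]
        model[c] = (prior, probs)
    return model

def classify_nb(model, x):
    def score(c):
        prior, probs = model[c]
        return prior + sum(math.log(p if xi else 1 - p)
                           for p, xi in zip(probs, x))
    return max(model, key=score)
```

For example, training on two phished and two genuine feature vectors and classifying a new vector picks the class with the larger log-posterior.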

9 citations


Journal ArticleDOI
TL;DR: This work scales the capacitance down from 512 pF to 32 pF at various fixed frequencies, reporting I/O power and leakage power for a design implemented on a 28 nm Artix-7 FPGA.
Abstract: Reducing power consumption is the main concern in green computing, so here we use a capacitance-scaling technique on a comparator to optimize power. We work with I/O power and leakage power because clock power and signal power are independent of capacitance scaling. In our work we scaled the capacitance down from 512 pF to 32 pF at various fixed frequencies. This reduced total I/O power dissipation by 91.26% at 1 GHz, 91.36% at 10 GHz, 91.364% at 20 GHz, 91.3624% at 30 GHz, and 91.36277% at 40 GHz. The design is implemented on a 28 nm Artix-7 FPGA.
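These figures are consistent with the standard dynamic power model P = C·V²·f: at fixed voltage and frequency, cutting C from 512 pF to 32 pF ideally cuts dynamic power by (512 − 32)/512 = 93.75%, and the reported ~91.3% values sit just below that bound, presumably because total I/O power also contains capacitance-independent terms. A back-of-the-envelope check under that (assumed) model:

```python
# Sanity check of the capacitance-scaling figures under the standard
# dynamic power model P = C * V^2 * f. Units cancel in the ratio, so
# capacitance can be given directly in picofarads.

def dynamic_power(c, v, f):
    """Dynamic power dissipated charging load capacitance c at voltage v, frequency f."""
    return c * v ** 2 * f

def ideal_reduction(c_old, c_new):
    """Fractional reduction in dynamic power at fixed V and f."""
    return (c_old - c_new) / c_old

print(ideal_reduction(512, 32))  # 0.9375, i.e. 93.75%
```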

8 citations


Journal ArticleDOI
TL;DR: The method recognizes the characters on a license number plate using template matching, with matching performed via correlation between the segmented characters and the templates in the database.
Abstract: In this paper, recognition of the characters written on a vehicle license number plate is proposed. The method used for recognizing the characters from the license number plate is based on template matching. First, the image of a car license number plate is taken as input; then pre-processing steps such as conversion to a gray-scale image, dilation, erosion, and convolution are applied to remove noise from the input image. Each character in the number plate is then segmented, with segmentation done on the basis of connected components. After segmentation, characters are recognized by matching templates to the segmented characters; matching is done on the basis of correlation between the segmented characters and the templates in the database. In the last step, a text file shows the recognized number and characters from the input image. Simulation of the project is done in MATLAB.
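The correlation-based matching step can be sketched as follows: each segmented character is compared with every stored template using a normalized correlation coefficient, and the best-scoring template's label wins. This is an illustrative sketch (the paper works in MATLAB on 2-D images; here images are flattened pixel lists).

```python
import math

# Illustrative correlation-based template matching: score a segmented
# character against each template with the normalized correlation
# coefficient and return the label of the best match.

def corr(a, b):
    """Normalized correlation coefficient of two equal-length pixel lists."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(char_pixels, templates):
    """templates: dict mapping a label ('A', '7', ...) to a pixel list."""
    return max(templates, key=lambda t: corr(char_pixels, templates[t]))
```

A perfect match scores 1.0; an inverted pattern scores -1.0, so the maximum reliably selects the closest template.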

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the application of opinion mining as an approach to extract software quality properties and find that the major issues of mining software reviews using sentiment analysis stem from the software lifecycle and the diversity of users and teams.
Abstract: Software review text fragments contain considerably valuable information about user experience, including a large set of properties such as software quality. Opinion mining, or sentiment analysis, is concerned with analyzing textual user judgments. Applying sentiment analysis to software reviews can yield a quantitative value that represents software quality. Although many software quality methods have been proposed, they are considered difficult to customize, and many of them are limited. This article investigates the application of opinion mining as an approach to extract software quality properties. We found that the major issues of mining software reviews using sentiment analysis stem from the software lifecycle and the diversity of users and teams.

Journal ArticleDOI
TL;DR: Otsu's method and a new iterative triclass thresholding technique for image segmentation are compared in the context of computer-aided diagnosis, which doctors use when evaluating medical images or recognizing abnormalities in them.
Abstract: Medical image segmentation is concerned with segmenting known anatomic structures from medical images. Such structures include organs or their parts, such as cardiac ventricles or kidneys; abnormalities such as tumors and cysts; and other structures such as vessels and brain structures. The overall objective of this segmentation, known as computer-aided diagnosis, is to assist doctors in evaluating medical images or recognizing abnormalities in them. Segmentation is the process of partitioning a digital image into multiple regions (sets of pixels). Segmentation methods are used to simplify and change the representation of an image into something more meaningful and easier to understand. The result of image segmentation is a set of regions that together cover the whole image, or a set of contours extracted from the image. Each pixel in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ in those same characteristics. A robust segmentation procedure goes a long way toward the successful solution of an imaging problem. The outcome of the segmentation stage is raw pixel data, consisting of both the boundary of a region and all the points within it. In this paper, we compare two methods of image segmentation: Otsu's method and a new iterative triclass thresholding technique.
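Otsu's method, one of the two techniques compared, chooses the threshold that maximizes the between-class variance of the grayscale histogram. A minimal sketch (operating on a flat list of pixel values for illustration; real use would take a full image histogram):

```python
# Sketch of Otsu's method: exhaustively try every gray level t and keep
# the one maximizing the between-class variance
#   sigma_b^2(t) = w_b * w_f * (m_b - m_f)^2
# where w_b/w_f are the background/foreground pixel counts and m_b/m_f
# their mean gray levels.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(levels):
        w_b += hist[t]                      # background pixel count
        if w_b in (0, total):
            continue                        # one class empty: skip
        sum_b += t * hist[t]
        w_f = total - w_b                   # foreground pixel count
        m_b = sum_b / w_b                   # background mean
        m_f = (total_sum - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal input (e.g. half the pixels dark, half bright) the returned threshold separates the two modes.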

Journal ArticleDOI
TL;DR: A Fractional Fourier transform (FrFT) watermarking algorithm is used to embed a sequence or an image as a digital watermark in one of the stereo-pair images, which is then superimposed on the other image to build a 3D anaglyph watermarked image.
Abstract: In recent times, 3D movies and 3D projection have attracted enormous attention. Anaglyph images are essentially obtained by overlapping the left- and right-eye images in different color planes of a single image for subsequent viewing through colored glasses. Digital watermarking is a desirable tool for copyright protection, secret communication, detection of illegal duplication and alteration, and content authentication. Here, a Fractional Fourier transform (FrFT) watermarking algorithm is used to embed a sequence or an image as a digital watermark in one of the stereo-pair images, which is then superimposed on the other image to build a 3D anaglyph watermarked image. In the reverse process, de-anaglyphing separates the two stereoscopic images, from which the watermark is extracted. Inserting the watermark in either the right or the left image provides more protection than embedding the watermark directly into an anaglyph image. Keywords— Watermarking; Anaglyph 3D; FrFT; Secret Communication; Stereoscopy

Journal ArticleDOI
TL;DR: The main objective of image segmentation algorithms is to preserve the features of an image with improved efficiency and reduced computational time.
Abstract: In image-processing applications such as image recognition or compression, processing cannot be done directly on the raw image because of inefficiency and practical problems. Hence, image segmentation algorithms were introduced to segment an image first. Image segmentation is the process of splitting or partitioning an image into multiple segments, that is, sets of pixels also known as superpixels. An image is split into meaningful objects according to similar characteristics such as color, intensity, and texture. To date, a variety of image segmentation algorithms have been proposed and applied in day-to-day life. In general, image segmentation algorithms can be categorized into region-based, edge-based, feature-based clustering, threshold-based, graph-based, and model-based segmentation. The main objective of image segmentation algorithms is to preserve the features of an image with improved efficiency and reduced computational time. We analyze some of the segmentation methodologies that aim at giving better efficiency.

Journal ArticleDOI
TL;DR: This paper retrieves information with the help of the Jaccard similarity coefficient and analyzes that information; a Genetic Algorithm yields an optimal result for the search.
Abstract: A similarity measure defines the similarity between two or more documents. The retrieved documents are ranked based on the similarity of their content to the user query, and the Jaccard similarity coefficient measures the degree of similarity between the retrieved documents. In this paper we retrieve information with the help of the Jaccard similarity coefficient and analyze that information. All this is performed with the help of a Genetic Algorithm; due to its exploring and exploiting nature, the Genetic Algorithm gives an optimal result for our search. The Genetic Algorithm uses the Jaccard similarity coefficient to calculate similarity between documents. The value of the Jaccard similarity function lies between 0 and 1 and shows the probability of similarity between the documents.
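The similarity measure itself is simple to state: treating each document as a set of terms, the Jaccard coefficient is the size of the intersection divided by the size of the union, which always lies in [0, 1]. A minimal sketch (the whitespace tokenization is an assumption for illustration):

```python
# Jaccard similarity of two documents treated as sets of terms:
#   J(A, B) = |A ∩ B| / |A ∪ B|,  with 0 <= J <= 1.

def jaccard(doc_a, doc_b):
    a = set(doc_a.lower().split())
    b = set(doc_b.lower().split())
    if not a and not b:
        return 1.0          # two empty documents: identical by convention
    return len(a & b) / len(a | b)
```

For example, "data mining tools" vs. "data mining methods" share 2 of 4 distinct terms, giving a similarity of 0.5.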

Journal ArticleDOI
TL;DR: A comparative review of different classifiers used for predicting attack risks in networked environments, drawing on evaluations of the three best-performing classifiers reported by three different authors.
Abstract: This paper presents a comparative review of different classifiers used for predicting attack risks in networked environments. In total, 19 classifiers are explained, and the three best or most efficient classifiers have been evaluated by three different authors, as mentioned in this paper. The data of those three authors have been used here to compare the different classification algorithms. Comparisons are made on TP rate, FP rate, precision, recall, F-measure, etc. The analysis was done by those authors using the WEKA tool. Keywords— Classification Algorithms; Intrusion Detection System; Meta Classifier; Decision Trees; Machine Learning; Data Mining; WEKA

Journal ArticleDOI
TL;DR: A comprehensive overview of MRI brain tumor segmentation methods and the various segmentation techniques they employ is presented.
Abstract: Segmentation of brain tumors is a very important and crucial step in the initial detection of a tumor in medical image analysis. Though various methods exist for brain tumor segmentation, tumor detection is still a challenging task for researchers, as tumors possess complex characteristics in appearance and boundaries. Brain tumor segmentation must be done with precision in clinical practice. The objective of this review paper is to present a comprehensive overview of MRI brain tumor segmentation methods. In this paper, various segmentation techniques are discussed, along with a comparative analysis of these segmentation conventions.


Journal ArticleDOI
TL;DR: This paper proposes a recommender system that uses a hybrid approach, combining improved K-means clustering with Spearman's rank correlation similarity to reduce RMSE and time complexity; results are compared with basic K-means clustering.
Abstract: In this age of information overload, it becomes a herculean task for a user to get relevant information. A recommender system plays an important role in suggesting relevant information that is likely to be preferred by the user. Different types of clustering are used for recommender systems, such as K-means, fuzzy C-means, and chameleon hierarchical clustering. This paper proposes a recommender system that uses a hybrid approach, combining improved K-means clustering with Spearman's rank correlation similarity to reduce RMSE and time complexity; the results are compared with basic K-means clustering. Keywords— Recommender System, Hybrid Recommender, Clustering, K-means, Similarity, RMSE, Spearman's Rank Correlation
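The similarity measure the hybrid approach relies on, Spearman's rank correlation, is the Pearson correlation computed on the ranks of two users' ratings. A minimal sketch with average-rank tie handling (everything else about the recommender is omitted; names are our own):

```python
# Spearman rank correlation: convert each rating vector to ranks
# (ties get the average rank), then compute Pearson correlation on ranks.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average 1-based rank for the tie run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0
```

Identically ordered rating vectors score 1.0 and reversed ones score -1.0, making the measure robust to monotone differences in how users calibrate their ratings.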



Journal ArticleDOI
TL;DR: The main benefit of using the PLL technique in a frequency synthesizer is that it can generate frequencies of 100–200 MHz with accuracy comparable to that of a crystal oscillator.
Abstract: A new architecture and simulation of an integer-N frequency synthesizer using a PLL for RF applications is illustrated in this paper. The design consists of a low-power phase frequency detector, a low-jitter charge pump, a ring-oscillator-based VCO, a passive loop filter, and an 8-bit frequency divider, using 250 nm technology. This presents the simplest way to design and simulate an integer-N frequency synthesizer and lock the PLL. The design and analysis of the PLL are done with the Tanner EDA simulation tool 13.0. The main benefit of using the PLL technique in a frequency synthesizer is that it can generate frequencies of 100–200 MHz with accuracy comparable to that of a crystal oscillator. This paper also gives a brief introduction to the basics of phase-locked loops.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the impact of using an LMS on teaching and learning based on interviews with ten (10) students and ten (10) lecturers of Mbarara University of Science and Technology (MUST).
Abstract: The wider adoption of Information Communication Technologies (ICTs) has provided opportunities to utilize electronic teaching and learning environments (e-learning), such as Learning Management Systems (LMS), in higher institutions of learning. However, evaluation of LMSs, especially in developing countries, has largely focused on acceptance issues while giving less attention to impact evaluation. This study qualitatively assessed the impact of using an LMS on teaching and learning based on interviews with ten (10) students and ten (10) lecturers of Mbarara University of Science and Technology (MUST). In May 2016, participants took part in individual interviews that mainly focused on their experiences and the benefits of using the LMS at MUST. The digitally recorded interviews were transcribed and analyzed using thematic analysis. Reported impacts of the LMS include: (1) improved engagement and interactions using discussion boards and chat forums as LMS tools; (2) unlimited accessibility of teaching and learning materials; (3) cost-effectiveness; and (4) improved management of teaching and learning resources. Overall, the LMS is useful for offering enriched teaching and learning experiences.

Journal ArticleDOI
TL;DR: An efficient server-level approach is proposed to identify the victim IP accurately and responsively using unusual request counts; once the victim IP is confirmed, the hop count (the number of routers a packet passes through to reach its destination) is used to filter out all illegitimate requests.
Abstract: Distributed Denial of Service (DDoS) attacks pose one of the most serious security threats to the Internet. In this work, we aim to develop a collaborative defence framework against DNS-based DDoS reflection and amplification attacks in networks. We focus on two main phases, victim detection and filtering of malicious traffic, to achieve a successful defence against DNS reflection attacks and prevention of amplification attacks. We propose an efficient server-level approach to identify the victim IP accurately and responsively using unusual request counts. Once the victim IP is confirmed, our approach uses the hop count, i.e., the number of routers a packet passes through to reach its destination, to filter out all illegitimate requests.
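Hop-count-based filtering of this kind is commonly implemented by inferring the hop count from a packet's observed TTL: assume the sender used the nearest common initial TTL (typically 30, 64, 128, or 255) and subtract. A hedged sketch of that idea (the learned-hops table and function names are stand-ins for the paper's mechanism, not its actual code):

```python
# Illustrative hop-count inference and filtering: each router decrements
# a packet's TTL by 1, so hop count = assumed initial TTL - observed TTL.
# Requests whose inferred hop count disagrees with the value previously
# learned for that source IP are treated as illegitimate (e.g. spoofed).

INITIAL_TTLS = (30, 64, 128, 255)   # common OS/router initial TTL values

def hop_count(observed_ttl):
    """Infer hops traveled, assuming the smallest common initial TTL >= observed."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def is_legitimate(src_ip, observed_ttl, learned_hops):
    """learned_hops: dict mapping source IP -> previously learned hop count."""
    return learned_hops.get(src_ip) == hop_count(observed_ttl)
```

For example, a packet arriving with TTL 59 implies an initial TTL of 64 and 5 hops; a spoofed packet from the same claimed source arriving with an inconsistent TTL would be dropped.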

Journal ArticleDOI
TL;DR: In this paper, a mobile-device-based application called "mlcc" is proposed to simultaneously capture and process a 2D color image of a rice leaf, eliminating expensive external components, reducing reliance on human color perception, and achieving high color accuracy.
Abstract: The color of a leaf corresponds to the nitrogen-deficiency status of a crop, so farmers compare leaf color with a Leaf Color Chart (LCC) in order to estimate their crop's need for nitrogen fertilizer. However, the ability to compare leaf color with the LCC varies from person to person, which affects the accuracy of the final result. This paper proposes a mobile-device-based application called "mlcc". The main idea is to simultaneously capture and process a 2D color image of the rice leaf, thus eliminating expensive external components, reducing reliance on human color perception, and achieving high color accuracy. This Android-based application can correctly identify all six important green color levels of a rice leaf. Keywords— Image processing, Leaf color chart, Android Studio, digital camera, rice field.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that, compared with other energy-efficient routing schemes, PLEER is an efficient method for reducing the average energy per packet and the end-to-end delay in a MANET.
Abstract: A Mobile Ad hoc NETwork (MANET) plays a major role in providing effective communication services for infrastructure-less applications through efficient routing. For this reason, many energy-efficient routing algorithms are being developed as a promising solution. However, while developing efficient routing, redundant rebroadcasting poses significant problems; therefore, effective routing without redundant rebroadcasting is inherently necessary whenever a user transmits a packet on the channel. In this work, to attain an effective routing mechanism in a MANET without redundant rebroadcasting, a method called Probabilistic and Link based Energy Efficient Routing (PLEER) is presented. The probabilistic rebroadcasting mechanism in the PLEER method reduces the high channel contention caused by redundant rebroadcasts, as well as the average energy per packet, by combining neighbor-coverage and probabilistic methods. The PLEER method therefore selects the best path in the network while transmitting packets, reducing the average energy per packet and the end-to-end delay. Link based Energy Efficient Routing produces fewer redundant rebroadcasts by avoiding network collision and contention, which in turn increases the packet delivery ratio and network lifetime. Simulation results demonstrate that, compared with other energy-efficient routing schemes, PLEER is an efficient method for reducing the average energy per packet and the end-to-end delay in a MANET. Extensive simulations show that PLEER outperforms other existing schemes in terms of successful data delivery and improved network lifetime in various scenarios.

Journal ArticleDOI
TL;DR: It has been observed that use of VTLN can effectively improve the robustness of the English vowel phoneme recognizer in both noise free and noisy conditions.
Abstract: Differences in human vocal tract lengths cause inter-speaker acoustic variability in speech signals spoken by different speakers for the same text, and these variations affect the robustness of a speaker-independent (SI) speech recognition system. Speaker normalization using vocal tract length normalization (VTLN) is an effective approach to reducing the effect of this type of variability in speech signals. In this paper, the impact of the VTLN approach on the speech recognition performance of an English vowel phoneme recognizer is investigated with both noise-free and noisy speech signals spoken by children. A pattern recognition approach based on the Hidden Markov Model (HMM) is used to develop the English vowel phoneme recognizer. The training phase of the automatic speech recognition (ASR) system is performed with speech signals spoken by adult male and female speakers, and the testing phase is performed with children's speech signals. In this investigation, it was observed that the use of VTLN can effectively improve the robustness of the English vowel phoneme recognizer in both noise-free and noisy conditions.

Journal ArticleDOI
TL;DR: The various schemes of segmentation-based visual cryptography are described, and it is shown that no single method, scheme, or technique is suited to every type of image.
Abstract: Region growing is a simple region-based image segmentation method. It is also classified as a pixel-based image segmentation method, since it involves the selection of initial seed points in the image, and it groups the pixels of the whole image into sub-regions (i.e., subsets). This paper describes the various schemes of segmentation-based visual cryptography and shows that no single method, scheme, or technique is suited to every type of image. We analyze region-growing-based image segmentation and the seeded growing region, where the quality of the result depends entirely on how the seed is selected, whether automatically or manually. Seeded region growing techniques are gaining more popularity in practice day by day, especially for medical images. Keywords— Image segmentation, region growing, security, seeded growing region, thresholding, fuzzy clustering.

Journal ArticleDOI
TL;DR: This paper reviews the basic commit protocols and the protocols that build on them for enhancing transaction performance in a DRTDBS, and proposes a new commit protocol for reducing the number of transactions that miss their deadlines.
Abstract: Commit processing in a Distributed Real Time Database System (DRTDBS) can significantly increase the execution time of a transaction. Designing a good commit protocol is therefore important for a DRTDBS; the main challenge is adapting standard commit protocols to the real-time database system and thereby decreasing the number of missed transactions in the system. In this paper we review the basic commit protocols and the protocols that build on them for enhancing transaction performance in a DRTDBS, and we propose a new commit protocol for reducing the number of transactions that miss their deadlines.