
Showing papers in "International Journal of Advanced Research in Computer Science in 2017"


Journal ArticleDOI
TL;DR: A brief study of the WannaCry ransomware, its effect on the computing world, and preventive measures to control ransomware on computer systems.
Abstract: Recently, ransomware has spread like cyclone winds. A cyclone wind creates atmospheric instability; likewise, ransomware creates instability in computer data. Every user is moving towards digitization and keeps data secure on his or her computer. But what if that data is hijacked? Ransomware is a class of malicious software that hijacks user data. It may lock the system in a way that is not difficult for a knowledgeable person to reverse. It not only targets home computers; businesses are also affected. It encrypts data in such a way that a normal person can no longer decrypt it, and the victim has to pay a ransom to get it decrypted. Even then, there is no guarantee that the files will be released. This paper gives a brief study of the WannaCry ransomware, its effect on the computing world, and preventive measures to control ransomware on computer systems.
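
As a hedged illustration of the detection side of the preventive measures discussed: WannaCry renames the files it encrypts with a .WNCRY extension, so a minimal after-the-fact indicator scan can look for that marker. The scan root below is a placeholder; real prevention relies on the MS17-010 patch, disabling SMBv1, and offline backups.

```python
# Minimal sketch: scan a directory tree for the ".WNCRY" extension that
# WannaCry appends to encrypted files. Only an after-the-fact indicator,
# not a defence; the scan root is a placeholder to change for your system.
from pathlib import Path

def find_wannacry_artifacts(root: str) -> list[Path]:
    """Return paths of files carrying the WannaCry extension."""
    return [p for p in Path(root).rglob("*.WNCRY") if p.is_file()]

if __name__ == "__main__":
    hits = find_wannacry_artifacts(".")  # placeholder scan root
    for path in hits:
        print(f"possible WannaCry-encrypted file: {path}")
    print(f"{len(hits)} suspicious file(s) found")
```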

190 citations


Journal ArticleDOI
TL;DR: An Internet of Things oriented comparison of various development boards, with guidance on selecting hardware platforms capable of improving understanding of the technology and methodology and of meeting developers' requirements.
Abstract: The Internet of Things is a rapidly advancing platform on which everyday devices are transformed into an automated, informative system with intelligent communication protocols. The development boards available for deploying and programming elementary Internet of Things systems shape the related fields. A lack of overall functional knowledge of the capabilities of the available development boards currently prevents engineers from fully exploring the scope of Internet of Things centric approaches. This paper provides an Internet of Things oriented comparison of various boards, with guidance on the suitable selection of hardware development platforms, so as to improve understanding of the technology and methodology and to meet developers' requirements. The paper also summarizes the capabilities of available hardware development platforms for IoT and outlines how to solve real-life problems by building and deploying powerful Internet of Things notions.

41 citations


Journal ArticleDOI
TL;DR: The various image segmentation techniques are reviewed and discussed in this paper, along with how they serve image compression and object recognition applications.
Abstract: Due to the revolution in computer technology, image-processing techniques have become very important in a wide variety of applications. Image segmentation is one of the most important processes in image processing. It is the technique of dividing an image into small parts, called segments. It is very useful for image compression and object recognition applications, because for these types of applications it is inefficient to process the whole image. There are various image segmentation techniques that segment an image based on certain image features, such as pixel intensity values, color, and texture. The various image segmentation techniques are reviewed and discussed in this paper.
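
As a minimal sketch of the simplest family of techniques such surveys cover, the snippet below segments a synthetic grayscale image by global intensity thresholding; the image and threshold choice are illustrative only.

```python
# Intensity-threshold segmentation: assign each pixel to the "object"
# or "background" segment by comparing it with a global threshold.
import numpy as np

# Synthetic 8-bit grayscale image: dark background, bright square "object".
image = np.zeros((64, 64), dtype=np.uint8)
image[16:48, 16:48] = 200
noise = np.random.default_rng(0).integers(0, 30, image.shape).astype(np.uint8)
image = image + noise

threshold = image.mean()          # crude global threshold
segments = image > threshold      # boolean mask: True = object segment

print(f"threshold = {threshold:.1f}")
print(f"object pixels: {segments.sum()}, background pixels: {(~segments).sum()}")
```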

40 citations



Journal ArticleDOI
TL;DR: This system automates greenhouse maintenance operations, closely monitors the growth conditions inside the greenhouse, and is focused on solving particular problems of climate change and the shortage of good-quality data.
Abstract: The recent scenario of climate change and its effect on the environment has motivated farmers to install greenhouses in their fields. But maintaining a greenhouse and its plantation is very labour intensive, and the majority of farmers perform vital operations intuitively. Agricultural researchers also face a shortage of the good-quality data that is crucial for crop development. We have therefore developed a cost-effective system using Internet of Things (IoT) technology focused on solving these particular problems: our system automates greenhouse maintenance operations and closely monitors the growth conditions inside the greenhouse.
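
A hedged sketch of the kind of rule-based automation loop such a system might run; the sensor and actuator functions are hypothetical stand-ins (e.g., for a DHT22 driver and GPIO relays), and the setpoints are assumed, not taken from the paper.

```python
# Toy greenhouse control loop: poll sensors, toggle actuators on thresholds.
import random
import time

TEMP_MAX_C = 32.0          # ventilation threshold (assumed setpoint)
SOIL_MOISTURE_MIN = 0.30   # irrigation threshold, fraction of saturation

def read_temperature() -> float:          # placeholder for a real sensor
    return random.uniform(20.0, 40.0)

def read_soil_moisture() -> float:        # placeholder for a real sensor
    return random.uniform(0.1, 0.6)

def set_fan(on: bool) -> None:            # placeholder for a relay driver
    print(f"fan -> {'ON' if on else 'OFF'}")

def set_pump(on: bool) -> None:           # placeholder for a relay driver
    print(f"pump -> {'ON' if on else 'OFF'}")

for _ in range(3):                        # a few iterations for the demo
    temp, moisture = read_temperature(), read_soil_moisture()
    print(f"T={temp:.1f}C moisture={moisture:.2f}")
    set_fan(temp > TEMP_MAX_C)              # cool when too hot
    set_pump(moisture < SOIL_MOISTURE_MIN)  # irrigate when too dry
    time.sleep(0.1)                         # real systems poll far less often
```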

26 citations


Journal ArticleDOI
TL;DR: The main objective of this study is to examine the existing literature on various approaches to Intrusion Detection, in particular Anomaly Detection, to examine their conceptual foundations, to taxonomize Intrusion Detection Systems (IDS), and to develop a morphological framework for IDS for easy understanding.
Abstract: Given the exponential growth of the Internet and the increased availability of bandwidth, Intrusion Detection has become a critical component of Information Security, and the importance of secure networks has tremendously increased. Though the concept of Intrusion Detection was introduced by James P. Anderson in 1980, it has gained a lot of importance in recent years because of recent attacks on IT infrastructure. The main objective of this study is to examine the existing literature on various approaches to Intrusion Detection, in particular Anomaly Detection, to examine their conceptual foundations, to taxonomize Intrusion Detection Systems (IDS), and to develop a morphological framework for IDS for easy understanding. A detailed survey of IDS from the initial days is presented, covering the development of IDS, architectures, and components.
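
A minimal sketch of the statistical flavour of anomaly detection such surveys taxonomize: profile "normal" behaviour and flag large deviations. The traffic figures and the 3-sigma threshold are illustrative, not from the paper.

```python
# Z-score anomaly detection over a synthetic requests-per-minute profile.
import statistics

normal_traffic = [95, 102, 98, 110, 105, 99, 101, 97, 104, 100]  # training window
mean = statistics.mean(normal_traffic)
stdev = statistics.stdev(normal_traffic)
K = 3.0  # deviation threshold in standard deviations

def is_anomalous(requests_per_minute: float) -> bool:
    return abs(requests_per_minute - mean) > K * stdev

for observed in (103, 180, 96):
    label = "ANOMALY" if is_anomalous(observed) else "normal"
    print(f"{observed:>4} req/min -> {label}")
```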

25 citations


Journal ArticleDOI
TL;DR: The principal aim and contribution of this review paper is to provide an overview of machine learning, to present machine learning techniques, and to review the merits and demerits of various machine learning algorithms from different perspectives.
Abstract: Machine learning is the essence of artificial intelligence. Machine learning learns from past experience to improve the performance of intelligent applications. In machine learning, the system builds a learning model that effectively "learns" how to make estimates from the training data of given examples. It refers to a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. In this new era, machine learning is mainly used to demonstrate the promise of producing consistently accurate estimates. The principal aim and contribution of this review paper is to provide an overview of machine learning and present machine learning techniques. The paper also reviews the merits and demerits of various machine learning algorithms from different perspectives. Keywords: machine learning; supervised learning; unsupervised learning; semi-supervised learning; reinforcement learning

25 citations


Journal ArticleDOI
TL;DR: In this article, a survey of application layer protocols in the Internet of Things (IoT) is presented, covering CoAP, MQTT, AMQP, XMPP and RESTful HTTP.
Abstract: The Internet of Things (IoT), or Web of Things (WoT), is an emerging technology that wirelessly networks two or more smart objects or smart things connected via the Internet. IoT can be viewed from two sides: the inside and the outside. The outside of IoT comprises the physically realized elements such as sensors and actuators; the inside of IoT comprises the protocols, and IoT has its own protocol stack. The protocol stack has different layers: the Application layer, Transport layer, Internet layer, and Physical/Link layer. The principal goal of IoT is to ensure effective communication between two objects and build a sustained bond among them using different applications. The application layer is responsible for providing services and defining a set of protocols for message passing at that layer. This survey examines application layer protocols such as CoAP, MQTT, AMQP, XMPP and RESTful HTTP, and also describes some of the newer application layer protocols, including which architectures (request/response, client/server, publish/subscribe) and which security mechanisms (DTLS, TLS/SSL, HTTPS) those protocols support. Keywords: Internet of Things (IoT), application layer protocols, CoAP, MQTT, AMQP, RESTful HTTP, WebSocket.
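
As a hedged illustration of the publish/subscribe architecture several of these protocols use, the sketch below exercises MQTT with the Eclipse paho-mqtt client (the 1.x callback API; pip install "paho-mqtt<2"). It assumes a broker such as Mosquitto is reachable on localhost:1883, and the topic and payload are made up.

```python
# MQTT publish/subscribe demo: one client plays both roles for brevity.
import paho.mqtt.client as mqtt

BROKER, PORT, TOPIC = "localhost", 1883, "iot/demo"

def on_connect(client, userdata, flags, rc):
    print(f"connected with result code {rc}")
    client.subscribe(TOPIC)          # subscriber role
    client.publish(TOPIC, "23.5")    # publisher role, fires once connected

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, PORT, keepalive=60)
client.loop_forever()                # blocks, dispatching callbacks
```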

23 citations


Journal ArticleDOI
TL;DR: The performance of the deep learning model is evaluated on the UCSD Data Mining Contest 2009 data, and it is found that deep learning models achieve high accuracy in detecting fraudulent transactions.
Abstract: With the growth of e-commerce websites, people and financial companies rely on online services to carry out their transactions, which has led to an exponential increase in credit card fraud. Fraudulent credit card transactions lead to the loss of huge amounts of money. The design of an efficient fraud detection system is necessary in order to reduce the losses incurred by customers and financial companies. Research has been done on many models and methods to prevent and detect credit card fraud. Fraudsters masquerade as normally behaving customers, and fraud patterns change rapidly, so a fraud detection system needs to constantly learn and update. Deep learning has been used in many fields such as speech recognition, image recognition, and natural language processing. This paper aims to understand how deep learning can be helpful in detecting credit card fraud. The deep learning package H2O, an efficient framework for handling large datasets, is used in this paper to train a deep learning model. The performance of the model is evaluated on the UCSD Data Mining Contest 2009 data, and it is found that deep learning models achieve high accuracy in detecting fraudulent transactions.
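
A minimal sketch, not the authors' exact pipeline, of training a fraud classifier with H2O's deep learning estimator. The CSV path and the column names (including the "is_fraud" label) are placeholders, since the UCSD contest data is not distributed here; substitute your own labelled transaction file.

```python
# Train and evaluate an H2O deep learning classifier on labelled transactions.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()                                           # starts a local H2O cluster
frame = h2o.import_file("transactions.csv")          # placeholder dataset
frame["is_fraud"] = frame["is_fraud"].asfactor()     # mark target as categorical
train, test = frame.split_frame(ratios=[0.8], seed=42)

features = [c for c in frame.columns if c != "is_fraud"]
model = H2ODeepLearningEstimator(hidden=[64, 64],    # two hidden layers (assumed sizes)
                                 epochs=10,
                                 seed=42)
model.train(x=features, y="is_fraud", training_frame=train)

perf = model.model_performance(test)
print("AUC on held-out transactions:", perf.auc())
```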

23 citations


Journal ArticleDOI
TL;DR: A comparative evaluation of Naive Bayes, Logistic Regression, Decision Tree, and Random Forest on the Pima Indian Diabetes dataset, used to predict diabetic patients.
Abstract: In data mining, classification is one of the most important techniques. Today we have data in abundance from numerous sources, but extracting meaningful information from it is a very tedious task. Using machine learning algorithms to train classifiers that decode meaningful information from data is an analysis approach that has gained much popularity in recent years. This paper evaluates the performance of Naive Bayes, Logistic Regression, Decision Tree, and Random Forest on the Pima Indian Diabetes dataset from the UCI Repository. The Naive Bayes algorithm depends on likelihood and probability; it is fast and stable under data changes. Logistic Regression calculates the relationship of each feature to the outcome and weights features by their impact on the result. Random Forest is an ensemble algorithm that fits multiple trees on subsets of the data and averages the tree results to improve performance and control over-fitting. A Decision Tree can be nicely visualized and uses a binary tree structure in which each node makes a decision depending on the value of a feature. The paper concludes with a comparative evaluation of Naive Bayes, Logistic Regression, Decision Tree, and Random Forest on the Pima Indian Diabetes dataset in order to predict diabetic patients. Keywords: Naive Bayes, Logistic Regression, Random Forest, Classification, Decision Tree
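
A hedged sketch of the four-way comparison using scikit-learn equivalents and cross-validated accuracy; it assumes the Pima dataset is available on OpenML under the name "diabetes" (adjust the fetch if your environment differs).

```python
# Compare the four classifiers with 5-fold cross-validated accuracy.
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)

models = {
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```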

22 citations


Journal ArticleDOI
TL;DR: The primary purpose of this paper is to examine and compare well-performing algorithms such as Naive Bayes, Decision Tree (J48), Random Forest, Multinomial Naive Bayes, K-star, and IBk.
Abstract: Collecting data has become easier than ever in almost all aspects of life, but the collected data is of no use if it cannot be efficiently utilised for the betterment of society. Every year thousands of students graduate from our education system, which many believe is not as optimal as it could be, and there has been considerable research on how to improve it. In light of this, the primary purpose of this paper is to examine and compare well-performing algorithms such as Naive Bayes, Decision Tree (J48), Random Forest, Multinomial Naive Bayes, K-star, and IBk. We use the available data to gauge students' potential based on various indicators, such as previous performance and, in other cases, their background, and give a comparative account of which method is best at achieving that end. The benefits are not limited to the students: they also help us evolve the system and learn which method is the most efficient. All educational institutions, whether public or private, can then design curricula and teaching methods based on what is most effective. Keywords: Prediction, classification, student, marks, GPA, data mining, educational data mining, performance

Journal ArticleDOI
TL;DR: Different phases of the SDLC, software quality, the qualities of well-engineered software, and factors affecting software quality are covered.
Abstract: The Software Development Life Cycle (SDLC) is an important concept used in software engineering to describe a procedure for planning, creating, coding, testing, and implementing a user requirement specification. The software development life cycle applies to a range of hardware and software configurations. The SDLC is a step-by-step process for creating quality software for users. It involves different phases that are followed one after another and that are essential for software engineers: planning, analysis, design, coding, testing, and implementation. In the early years, hardware was costly and software relatively cheap; in the digital era, hardware is cheap and software expensive. The costs of hardware and software have thus been reversed, owing to the increased demand for well-engineered software products. This paper covers the different phases of the SDLC, software quality, the qualities of well-engineered software, and factors affecting software quality. Keywords: SDLC, Phases, Software Quality, Factors

Journal ArticleDOI
TL;DR: The objective of this research is to understand how machine learning can be used in digital crime and its forensic importance, by setting up an environment to train artificial neural networks and investigating and analyzing them to find artefacts that can be helpful in a forensic investigation.
Abstract: The objective of this research is to understand how machine learning can be used in digital crime and its forensic importance, by setting up an environment to train artificial neural networks and investigating and analyzing them to find artefacts that can be helpful in a forensic investigation.

Journal ArticleDOI
TL;DR: It is useful to investigate how different factors, such as workload, data size, and number of simultaneous sessions, influence scaling capabilities.
Abstract: Nowadays technology is growing rapidly and generating whopping amounts of data. Every day, people and companies generate huge amounts of data, and this data may be unstructured, semi-structured, or structured. That is why we need databases that can store these types of data in huge volumes: NoSQL databases. NoSQL databases solve this type of problem; they are widely used and commonly known as engines that scale well. It is therefore useful to investigate how different factors, such as workload, data size, and number of simultaneous sessions, influence scaling capabilities. In this paper we give a brief introduction to NoSQL and its categories, along with the benefits of NoSQL and why it is used today. Keywords: NoSQL, Graph DB, Key-value DB, Column DB, Document DB
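
As a hedged illustration of the document-database category, the sketch below uses pymongo (pip install pymongo) against an assumed local MongoDB server on localhost:27017. The database, collection, and records are illustrative; the point is the schemaless model described above: the two inserted documents need not share fields.

```python
# Insert and query semi-structured documents in a MongoDB collection.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
events = client["demo_db"]["events"]   # database and collection created lazily

# Records with different shapes can live in the same collection.
events.insert_one({"user": "alice", "action": "login", "ip": "10.0.0.5"})
events.insert_one({"user": "bob", "action": "purchase",
                   "items": ["book", "pen"], "total": 12.50})

for doc in events.find({"user": "bob"}):  # simple key-based query
    print(doc)
```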

Journal ArticleDOI
TL;DR: This research analyzed default artifact locations, history, cookies, login data, top sites, shortcuts, user profiles, the prefetch file, and a RAM dump to collect artifacts related to internet activity in Google Chrome installed on Windows.
Abstract: Internet users use the web browser to perform various activities on the internet, such as browsing, email, internet banking, social media applications, and downloading files and videos. Since the web browser is the primary way to access the internet, cybercriminals use or target the web browser to commit internet-related crimes. It is therefore very important for the digital forensic examiner to collect and analyze artifacts related to the suspect's web browser usage. There are various browsers available in the market, such as Google Chrome, Internet Explorer, Mozilla Firefox, Safari, and Opera, among which Google Chrome is very popular with the internet user community. Our literature survey shows that most research has used the prefetch file and live memory analysis as sources of information for extracting artifacts. In this research paper, we analyzed the default artifact locations, history, cookies, login data, top sites, shortcuts, user profiles, the prefetch file, and a RAM dump to collect artifacts related to internet activity in Google Chrome installed on Windows. The outcome of this research will serve as a significant resource for law enforcement, computer forensic investigators, and the digital forensics research community.
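
A minimal sketch of querying one artefact source listed above, Chrome's History database, which is a plain SQLite file. The path shown is the default Windows profile location (adjust per system), the file should be copied before opening since Chrome locks the live copy, and timestamps use Chrome's WebKit epoch (microseconds since 1601-01-01).

```python
# Read the ten most recent entries from Chrome's History SQLite database.
import sqlite3
from datetime import datetime, timedelta
from pathlib import Path

# Default Windows location; copy the file first, Chrome locks the live one.
history = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/History"

def webkit_to_datetime(us: int) -> datetime:
    """Convert Chrome's WebKit-epoch microseconds to a datetime."""
    return datetime(1601, 1, 1) + timedelta(microseconds=us)

con = sqlite3.connect(history)
rows = con.execute(
    "SELECT url, title, visit_count, last_visit_time "
    "FROM urls ORDER BY last_visit_time DESC LIMIT 10")
for url, title, visits, last in rows:
    print(f"{webkit_to_datetime(last)}  ({visits} visits)  {title or url}")
con.close()
```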

Journal ArticleDOI
TL;DR: An improved and enhanced homomorphic filtering technique using a Wiener filter to avoid blurring and to reduce the computational time of execution is presented; it shows better results in terms of numerical and visual comparison.
Abstract: Speckle noise is the universal distortion problem in synthetic aperture radar (SAR) images, and despeckling methods address it. This article presents an improved and enhanced homomorphic filtering technique using a Wiener filter to avoid blurring and to reduce the computational time of execution. The proposed method is enhanced by applying the log transform after the discrete wavelet transform (DWT), only on the detail part; this saves overall computation time while the results remain satisfactory and easily acceptable. The scheme is proposed for the db2 wavelet transform, and the Wiener filter is applied to the approximation part. The performance of the scheme is evaluated by calculating PSNR, SSIM, and computational time. The resulting improved homomorphic filtering technique is compared with some standard methods and filters, and it shows better results in terms of numerical and visual comparison. Keywords: homomorphic filtering; DWT; Wiener filter; speckle noise; SAR image
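
A hedged sketch of the pipeline as described, using PyWavelets for the db2 DWT and SciPy's Wiener filter: Wiener filtering on the approximation sub-band and a log transform on the detail sub-bands. The authors' exact detail-band processing is not reproduced, and the speckled input is synthetic rather than a real SAR scene.

```python
# Single-level db2 DWT despeckling sketch on a synthetic speckled image.
import numpy as np
import pywt
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.ones((128, 128)) * 100.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative speckle

cA, (cH, cV, cD) = pywt.dwt2(speckled, "db2")      # single-level db2 DWT
cA = wiener(cA, mysize=5)                          # Wiener filter on approximation
# Sign-preserving log transform shrinks the (speckle-dominated) details.
details = tuple(np.sign(c) * np.log1p(np.abs(c)) for c in (cH, cV, cD))
despeckled = pywt.idwt2((cA, details), "db2")

print(f"speckled std : {speckled.std():.2f}")
print(f"filtered std : {despeckled.std():.2f}")
```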

Journal ArticleDOI
TL;DR: An Artificial Intelligence based Digital Forensics Framework is proposed in this paper to overcome these issues; it requires minimal user interaction and performs the majority of routine operations using intelligence acquired from training.
Abstract: With the increase in the number of Internet and smartphone users, cyber crimes have increased correspondingly. Current resources, including manpower, are not sufficient to investigate and solve cyber crimes at the pace they are committed, and present tools and technology require human interaction at a large scale, which slows down the process. There is an acute need to optimize the speed and performance of digital forensic tools to keep pace with reported cyber crimes. An Artificial Intelligence based Digital Forensics Framework is proposed in this paper to overcome these issues. The proposed framework requires minimal user interaction and performs the majority of routine operations using intelligence acquired from training. The outcome of the work is the proposed framework for optimizing the digital forensics process.

Journal ArticleDOI
TL;DR: In this paper the authors experiment with two different datasets in the WEKA tool, using six parameters that illustrate how values vary with the type of dataset, namely balanced and unbalanced.
Abstract: Data mining is a process of exploring unexplored patterns in huge databases. It acts as a key to knowledge discovery, which provides great support to the business world and academia. To make this knowledge discovery happen, various data mining tools have been developed. These tools provide an interface for loading data and retrieving interesting patterns from it, which are further useful for gaining new knowledge. A variety of parameters defined in the literature provide the basis on which a tool performs analysis, and different tools are available to perform these analyses. It is therefore quite interesting to perform a comparative analysis of these tools and to observe their behaviour on selected parameters, which is further helpful in finding the most appropriate tool for a given dataset and set of parameters. In this paper the authors experiment with two different datasets in the WEKA tool, using six parameters that illustrate how values vary with the type of dataset, namely balanced and unbalanced.

Journal ArticleDOI
TL;DR: This paper aims at detecting diabetes in the PIMA Indian Diabetes dataset with the help of machine learning techniques, Support Vector Machines and Decision Trees, and discusses the results.
Abstract: This paper aims at detecting diabetes with the PIMA Indian Diabetes dataset. PIMA India is concerned with women's health, and the risk of developing diabetes in women is quite high due to various factors. Hence, the idea is to detect and predict this disorder with the help of machine learning techniques, Support Vector Machines and Decision Trees respectively. The advantage of using these techniques is that they help automate the process and make tasks like classification and clustering simpler. The paper begins with an introduction that emphasizes the worst effects of diabetes by explaining the various disorders associated with it. A brief literature survey is done to study the work done in this area. Section 3 then describes the proposed approach, with pseudo-code in the R framework; R Studio is used for better analysis and visualization. Finally, the results are discussed, along with the conclusion and future scope.

Journal ArticleDOI
TL;DR: This paper focuses on the prerequisites of a crawler, the process of crawling, and the different types of crawlers, which can adapt to a wide range of configurations without requiring additional hardware.
Abstract: Today's search engines are equipped with dedicated agents known as "web crawlers" devoted to crawling large web contents online, which are analyzed and indexed to make the content available to users. Crawlers interact with thousands of web servers over periods extending from weeks to several years. These crawlers visit several thousand pages every second, include a high-performance fault manager, may be platform independent or dependent, and can adapt to a wide range of configurations without requiring additional hardware. This paper focuses on the prerequisites of a crawler, the process of crawling, and the different types of crawlers. It also reviews some potential issues related to crawlers, as well as applications and research areas of web crawling. Keywords: search engine, web crawler, www, indexing, website analysis
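
A minimal standard-library sketch of the crawl loop described above: fetch a page, extract its links, and enqueue unvisited URLs breadth-first. The seed URL is a placeholder, and a real crawler would also honour robots.txt, throttle requests, and persist an index.

```python
# Breadth-first crawl loop using only the Python standard library.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(seed: str, max_pages: int = 5) -> None:
    queue, seen, crawled = deque([seed]), {seed}, 0
    while queue and crawled < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception as exc:          # unreachable or non-HTML pages
            print(f"skip {url}: {exc}")
            continue
        crawled += 1
        print(f"crawled {url}")
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)        # frontier de-duplication
                queue.append(absolute)

crawl("https://example.com/")             # placeholder seed URL
```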

Journal ArticleDOI
TL;DR: The growth of cloud computing, its benefits and problems, the features of common cloud computing services, and the various issues to consider in selecting the most appropriate cloud computing service for academic institutions are reviewed.
Abstract: Cloud computing comprises the cooperation and coordination of different computing services to provide very high computing power for the acquisition and analysis of widely spread-out sources of data. It greatly helps in lowering costs while maximizing the ability to process the required information. With cloud computing, universities and other institutions can better manage their labs, research facilities, classrooms, and libraries, and students and teachers can benefit far more in individual and collaborative work because of the easy access the cloud provides. This paper reviews the growth of cloud computing, its benefits and problems, the features of common cloud computing services, and the various issues to consider in selecting the most appropriate cloud computing service for academic institutions. The paper also discusses the importance of virtualization in the growth of cloud computing.

Journal ArticleDOI
TL;DR: A study of various research papers that explore the area of text mining, including different document representation methods and their impact on clustering and classification results, is presented.
Abstract: Text data is the most common form of stored information. When a search engine processes a query, the user obtains a large collection of text data, not all of which is relevant to the required information, so the massive amount of text data needs to be organised. Analysing and processing text data is the main concern of text mining. Text mining uses the standard data mining methods of classification and clustering. These two methods are used to arrange documents, which are usually represented by hundreds or thousands of text (word) features. Text data in a document can be represented using various representation methods. In this paper, we present a study of various research papers that explore the area of text mining, including different document representation methods and their impact on clustering and classification results.
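
A toy sketch of one document representation method such studies compare, TF-IDF vectors, feeding both of the standard methods named above: k-means clustering and a Naive Bayes classifier. The six "documents" and their labels are invented.

```python
# Represent documents as TF-IDF vectors, then cluster and classify them.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = [
    "the striker scored a late goal",         # sport (train)
    "the keeper saved the penalty kick",      # sport (train)
    "parliament passed the budget bill",      # politics (train)
    "the senate debated the new law",         # politics (train)
    "the striker missed the penalty",         # sport (held out)
    "the senate debated the tax bill",        # politics (held out)
]
labels = ["sport", "sport", "politics", "politics"]

X = TfidfVectorizer().fit_transform(docs)     # documents -> TF-IDF matrix

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)

clf = MultinomialNB().fit(X[:4], labels)      # train on the first four docs
print("predictions:", clf.predict(X[4:]))     # classify the held-out two
```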

Journal ArticleDOI
TL;DR: The main purpose of this paper is to explain some of the important SDLC models: the Waterfall Model, Iterative Model, Spiral Model, V-Model, Big Bang Model, Agile Model, Rapid Application Development Model, and Software Prototype.
Abstract: The software development life cycle (SDLC) is used to design, develop, and produce high-quality, reliable, cost-effective, on-time software products in the software industry. It is also called the software development process model, and different SDLC process models are available. In this paper I describe different SDLC models according to their best use, drawing on the findings of the many papers written in this regard. The main purpose of this paper is to explain some of the important SDLC models, namely the Waterfall Model, Iterative Model, Spiral Model, V-Model, Big Bang Model, Agile Model, Rapid Application Development Model, and Software Prototype, along with their advantages and disadvantages. I also describe which SDLC model best fits which type of software application. Keywords: Waterfall Model, Agile Model, RAD, Software Prototype

Journal ArticleDOI
TL;DR: A smart, intelligent device which automatically senses information and helps women in "every single step of life": an integration of multiple devices, comprising a wearable smart band and a hidden webcam connected via Bluetooth, which continuously tracks information and communicates with internet-connected smartphones.
Abstract: Women play vital roles in our society from birth to the end of life. In the past few years, crime against women has increased to a great extent. According to a survey, 84 per cent of the women who experienced harassment were in the age group of 25 to 35 years, mostly full-time workers and students. Most women also do not attend to their health due to their busy schedules. Women's safety and security is a serious social issue that affects half the population of the country in all respects and needs to be solved urgently. Since no one can respond aptly in critical situations, we propose a smart, intelligent device which automatically senses information and helps women in "every single step of life". The device is an integration of multiple devices, comprising a wearable smart band and a hidden webcam connected via Bluetooth, which continuously tracks information and communicates with smartphones that have internet access. The application is programmed and embedded in such a way that it tracks information about the woman, such as call logs, messages, movement, pulse measurement, blood oxygen levels, and heart rate, and records it continuously on the internet. When the SOS button on the smart band is pressed continuously, it automatically sends signals to predefined smartphones and the nearest police station along with location coordinates, and the hidden webcam in the locket captures a photo of the culprit, which is directly uploaded to the server.

Journal ArticleDOI
TL;DR: This work attempts to detect stego images created by the WOW algorithm through steganalysis based on the classification of selected hybrid image feature sets, using the Gini index as the feature selection algorithm on the combined Chen, SPAM, and CCPEV features.
Abstract: Image steganography techniques can be classified into two major categories: spatial domain techniques and frequency domain techniques. In spatial domain techniques, the secret message is hidden inside the image by manipulating the different pixels of the image. This work attempts to detect stego images created by the WOW algorithm through steganalysis based on the classification of selected hybrid image feature sets. It uses the Gini index as the feature selection algorithm on the combined Chen, SPAM, and CCPEV features. The main goal of this work is to compete with the previously implemented SVM-spam and SVM-HT methods. Standard classification performance metrics are used to evaluate the performance of the three steganalysis models: SVM-spam, SVM-HT, and SVM-HG (SVM with hybrid features selected by Gini index).
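
A hedged sketch of the selection-then-classification pipeline: rank features by a random forest's Gini-based importances, keep the top ones, and train an SVM on the reduced set. The synthetic feature matrix stands in for the Chen/SPAM/CCPEV hybrid features, which are not reproduced here.

```python
# Gini-importance feature selection followed by SVM classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))              # 400 "images", 50 placeholder features
y = rng.integers(0, 2, size=400)            # 0 = cover, 1 = stego (synthetic labels)
X[y == 1, :5] += 0.8                        # make 5 features genuinely informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(criterion="gini", n_estimators=200,
                                random_state=0).fit(X_tr, y_tr)
top = np.argsort(forest.feature_importances_)[::-1][:10]  # 10 best by Gini importance
print("selected feature indices:", sorted(top))

svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("SVM accuracy on reduced features:", svm.score(X_te[:, top], y_te))
```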

Journal ArticleDOI
TL;DR: This work takes some districts of Tamil Nadu in India to analyze soil nutrients, choosing nitrogen, phosphorus, potassium, calcium, magnesium, sulfur, iron, zinc, and so forth for investigating the soil nutrients using Naive Bayes, Decision Tree, and a hybrid approach of Naive Bayes and Decision Tree.
Abstract: Data mining methods are greatly admired in agricultural research. The agricultural factors weather, rain, soil, pesticides, and fertilizers are the main aspects responsible for raising the production of yields, and soil is the fundamental key aspect of agriculture for crop growing. Soil examination is a noteworthy part of soil resource management in agriculture, and soil investigation is exceptionally useful for cultivators in discovering which sorts of crops can be grown under particular soil conditions. The main target of this work is to investigate soil nutrients using data mining classification techniques. A large dataset of soil nutrient status was collected from the Department of Agriculture, Cooperation and Farmers Welfare; the database contains measurements of soil nutrients for all the different states. This work takes some districts of Tamil Nadu in India to analyze the soil nutrients, since different sorts of soil have a diverse variety of nutrients. The paper chooses nitrogen, phosphorus, potassium, calcium, magnesium, sulfur, iron, zinc, and so forth for investigating the soil nutrients using Naive Bayes, Decision Tree, and a hybrid approach of Naive Bayes and Decision Tree. The performance of the classification algorithms is compared on two factors: accuracy and execution time.

Journal ArticleDOI
TL;DR: The study aims to develop a simulation method to determine fire spread and direction according to climate data such as wind direction, wind speed, and rainfall, together with forest fuel type and density, canopy cover, and the other maps required for it.
Abstract: Forest fire is one of the major hazards causing destruction to biodiversity, the environment, and humans in the Taradevi forest of Himachal Pradesh (India). This study was carried out for forest fire spread analysis and loss assessment using simulation modeling techniques. The factors accelerating forest fire, including forest type, canopy cover, meteorological status, and topographic features, were taken into consideration. Parameters derived from remote sensing data and a Geographical Information System (GIS) were used to generate input files for forest fire simulation modeling using FARSITE. Fire spread maps and fire areas were predicted in this simulation, with relative importance given to each theme built from the GIS and climate parameters. The study aims to develop a simulation method to determine fire spread and direction according to climate data such as wind direction, wind speed, and rainfall, together with forest fuel type and density, canopy cover, and the other maps required for it. The findings of the research are helpful in the development of forest fire management: fast and appropriate direction can be used by management to stop the spread of fire effectively, providing effective means for protecting forests from fires as well as for formulating appropriate methods to control and manage forest fire damage and spread.
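
A toy cellular-automaton sketch, far simpler than the FARSITE modeling the study uses, illustrating wind-biased fire spread: each burning cell ignites its neighbours with a probability raised in the wind direction. Grid size, probabilities, and wind are all assumed for the demo.

```python
# Wind-biased cellular-automaton fire spread on a small fuel grid.
import random

random.seed(1)
SIZE, STEPS = 15, 8
BASE_P, WIND_BONUS = 0.25, 0.45            # ignition probabilities (assumed)
WIND = (0, 1)                              # wind blowing east: eastward spread favoured

grid = [["fuel"] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = "fire"        # ignition point at the centre

for _ in range(STEPS):
    ignite = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] != "fire":
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] == "fuel":
                    p = BASE_P + (WIND_BONUS if (dr, dc) == WIND else 0.0)
                    if random.random() < p:
                        ignite.append((nr, nc))
    for r, c in ignite:
        grid[r][c] = "fire"

for row in grid:                            # '#' = burning, '.' = unburnt fuel
    print("".join("#" if cell == "fire" else "." for cell in row))
```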

Journal ArticleDOI
TL;DR: The heuristic algorithms used to solve the Travelling Salesperson Problem are reviewed; they yield optimal solutions for smaller problem sizes and sub-optimal solutions for bigger problem sizes.
Abstract: The Travelling Salesperson Problem (TSP) is one of the leading problems considered NP-hard. No known algorithm solves it exactly in polynomial time, although certain algorithms give good results. This paper reviews the heuristic algorithms used to solve the problem, which account for optimal solutions on smaller problem sizes and give sub-optimal solutions on bigger problem sizes. A survey of each strategy used for solving the TSP is presented, i.e., how the strategies have been modified over time and the corresponding results obtained from each modification. We take into account the well-recognized heuristic algorithms: genetic algorithms, ant colony optimization, and particle swarm optimization.
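
A minimal sketch of a constructive TSP heuristic, nearest neighbour, which is simpler than the genetic, ant-colony, and particle-swarm methods the review covers but illustrates the same trade-off: a fast polynomial-time tour that is sub-optimal in general. The city coordinates are random.

```python
# Nearest-neighbour tour construction over random city coordinates.
import math
import random

random.seed(0)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(start=0):
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour()
length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour  :", tour)
print("length:", round(length, 1))
```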

Journal ArticleDOI
TL;DR: A survey of recent work on the Hindi language, along with a newly proposed approach for sentiment analysis of Hinglish (Hindi + English) text.
Abstract: Sentiment analysis is a popular field of research in text mining. It involves extracting opinions from text such as reviews, news data, and blog data, and then classifying them into positive, negative, or neutral sentiments. Sentiment analysis of English has been explored, but not much work has been done for Indian languages; some research has been carried out in Hindi, Bengali, Marathi, and Punjabi. Nowadays, a lot of communication on social media happens in Hinglish text, a combination of the two languages Hindi and English. Hinglish is a colloquial language that is very popular in India, as people feel more comfortable speaking in their own language. This paper provides a survey of recent work in the Hindi language, along with a newly proposed approach for sentiment analysis of Hinglish (Hindi + English) text.
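
A toy lexicon-based sketch of the Hinglish classification task: score each message against small hand-made positive and negative word lists drawn from romanised Hindi and English. The lexicon entries are illustrative and not from the paper.

```python
# Lexicon-based positive/negative/neutral scoring for mixed Hinglish text.
import re

POSITIVE = {"accha", "badhiya", "mast", "great", "good", "awesome"}
NEGATIVE = {"bura", "bekaar", "ganda", "bad", "terrible", "worst"}

def sentiment(text: str) -> str:
    words = re.findall(r"[a-z]+", text.lower())   # strip punctuation, lowercase
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for msg in ("movie bahut accha tha, awesome!",
            "service bekaar hai yaar",
            "kal office jana hai"):
    print(f"{msg!r} -> {sentiment(msg)}")
```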

Journal ArticleDOI
TL;DR: A Raven Roosting Optimization Algorithm (RRO) is followed to light on the load balancing for task scheduling problems solution in cloud environment and there is the possibility that simulation results shows better makespan, average response time, average waiting time, number of tasks migrated through Raven Roasting Optimized Algorithm.
Abstract: In this paper, a Raven Roosting Optimization Algorithm (RRO) is followed to light on the load balancing for task scheduling problems solution in cloud environment. Heterogeneity of birds, insects enroll in roosting. In raven Roosting, Roosts are information centers or can say servers and scrounge feature of common ravens inspired to solve problems. This technique is good enough to handle number of overloaded tasks transfer on Virtual Machines (VMs) by determining the availability of VMs capacity. Raven Roosting Optimization (RRO) random allocation of VMs to Cloudlets results huge change in makespan with respect to VM to which allocated. There is the possibility that simulation results shows better makespan, average response time, average waiting time, number of tasks migrated through Raven Roosting Optimization Algorithm.
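
A hedged baseline sketch of the scheduling problem targeted here: greedily place each cloudlet on the VM that would finish it earliest and report the makespan. This is a plain greedy heuristic shown for context, not the Raven Roosting Optimization itself; the task lengths and VM capacities are invented.

```python
# Greedy earliest-finish task-to-VM assignment with makespan reporting.
task_lengths = [400, 250, 900, 120, 600, 330, 780, 150]   # million instructions
vm_mips = [100, 250, 500]                                 # VM processing capacities

finish_time = [0.0] * len(vm_mips)        # current completion time per VM
assignment = []
for length in sorted(task_lengths, reverse=True):         # longest tasks first
    # Pick the VM whose finish time after taking this task is smallest.
    vm = min(range(len(vm_mips)),
             key=lambda i: finish_time[i] + length / vm_mips[i])
    finish_time[vm] += length / vm_mips[vm]
    assignment.append((length, vm))

for length, vm in assignment:
    print(f"task {length:>4} -> VM{vm}")
print("makespan:", round(max(finish_time), 2), "s")
```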