
Showing papers in "International Journal of Information Technology in 2019"


Journal ArticleDOI
TL;DR: The evaluation of the results shows that LSTM is able to outperform traditional machine learning methods for spam detection by a considerable margin.
Abstract: Classifying spam is a topic of ongoing research in the area of natural language processing, especially with the increase in the usage of the Internet for social networking. This has given rise to increased spam activity by spammers who try to take commercial or non-commercial advantage by sending spam messages. In this paper, we apply a technique from the evolving area of deep learning. A special architecture known as Long Short Term Memory (LSTM), a variant of the Recurrent Neural Network (RNN), is used for spam classification. Unlike traditional classifiers, where features are hand-crafted, it has the ability to learn abstract features. Before using the LSTM for the classification task, the text is converted into semantic word vectors with the help of word2vec, WordNet and ConceptNet. The classification results are compared with benchmark classifiers like SVM, Naive Bayes, ANN, k-NN and Random Forest. Two corpora are used for comparison of results: the SMS Spam Collection dataset and a Twitter dataset. The results are evaluated using metrics like accuracy and F-measure. The evaluation shows that LSTM is able to outperform traditional machine learning methods for spam detection by a considerable margin.

101 citations
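The gating mechanism that lets an LSTM learn abstract sequence features can be illustrated with a single forward step. The following is a minimal sketch in plain Python, not the paper's model: the scalar weights are random stand-ins for parameters a real network would learn from the word2vec/WordNet/ConceptNet vectors.

```python
import math
import random

random.seed(0)

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM forward step for scalar input/state (illustrative only)."""
    sigm = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Gates: input (i), forget (f), output (o), candidate (g)
    i = sigm(W["wi"] * x + W["ui"] * h_prev + W["bi"])
    f = sigm(W["wf"] * x + W["uf"] * h_prev + W["bf"])
    o = sigm(W["wo"] * x + W["uo"] * h_prev + W["bo"])
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])
    c = f * c_prev + i * g          # cell state carries long-term memory
    h = o * math.tanh(c)            # hidden state is the step's output
    return h, c

# Random illustrative weights; a real model learns these during training.
W = {k: random.uniform(-1, 1) for k in
     ["wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg"]}

h, c = 0.0, 0.0
for x in [0.5, -1.2, 0.3]:          # a toy sequence of word-vector features
    h, c = lstm_step(x, h, c, W)
```

The forget gate `f` decides how much of the previous cell state survives each step, which is what allows the network to keep or discard context over long message sequences.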


Journal ArticleDOI
TL;DR: A deep convolutional neural network framework for predominant instrument recognition in real-world polyphonic music is presented, achieving an excellent result of 92.8% accuracy.
Abstract: Musical instrument identification in polyphonic music is a challenge in music information retrieval. In the proposed work, a deep convolutional neural network framework for predominant instrument recognition in real-world polyphonic music is developed. The network is trained on fixed-length music with a labeled predominant instrument and estimates an arbitrary number of instruments from an audio signal of variable length. The Mel spectrogram representation is used to map audio data into matrix format. This work uses an eight-layer convolutional neural network for instrument recognition. The ReLU activation function is used for scaling the training data and introduces non-linearity into the network. At each layer, a max pooling function is used for dimension reduction. For regularization, dropout is used, which prevents overfitting. The softmax function gives the probability of particular instruments. The approach achieves an excellent result of 92.8% accuracy.

64 citations
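The ReLU and max-pooling stages described above are easy to sketch. Below is a toy example on a 4×4 feature map (illustrative values, not the paper's spectrogram data):

```python
def relu(m):
    """Element-wise ReLU: zeroes negatives, introducing non-linearity."""
    return [[max(0.0, v) for v in row] for row in m]

def max_pool2x2(m):
    """2x2 max pooling with stride 2: halves each spatial dimension."""
    return [[max(m[i][j], m[i][j + 1], m[i + 1][j], m[i + 1][j + 1])
             for j in range(0, len(m[0]), 2)]
            for i in range(0, len(m), 2)]

fmap = [[ 1.0, -2.0,  3.0, 0.0],
        [-1.0,  4.0, -3.0, 2.0],
        [ 0.5, -0.5,  1.5, 1.0],
        [-2.0,  0.0,  0.0, 2.5]]
pooled = max_pool2x2(relu(fmap))   # 4x4 feature map -> 2x2
```

Stacking several such convolution/ReLU/pooling stages is what progressively reduces the Mel spectrogram to compact features before the final softmax layer.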


Journal ArticleDOI
TL;DR: The Hajj, which has witnessed several stampedes, is chosen as the case study and the findings would be applicable in other events like the Kumbh Mela.
Abstract: During the first 15 years of this century, seven thousand people have been crushed to death in stampedes. Many would argue that these fatalities could have been prevented by better control and management. Crowd management today needs to minimise the chances of occurrence of stampedes, fires and other disasters, and also to deal with the ongoing threat of terrorism and outbreaks of communicable diseases like Ebola, HIV/AIDS, swine influenza (H1N1, H1N2), various strains of flu, Severe Acute Respiratory Syndrome (SARS) and Middle Eastern Respiratory Syndrome (MERS). These challenges have created a need for using all available resources, especially modern tools and technology, when dealing with crowds. Radio Frequency Identification (RFID), which is already benefiting many industrial and government organisations around the world, may be useful for scanning crowded locations and hence in helping to prevent overcrowding. Other wireless technologies should also be considered for possible use in crowded events. Ideally, some of the regular crowded event locations should be transformed into smart cities. In this article we shall discuss different kinds of crowds and technologies for their management. In particular, we shall analyse cases where wireless and mobile technologies can be utilised effectively. The Hajj, which has witnessed several stampedes, is chosen as the case study, but most of our findings would be applicable to other events like the Kumbh Mela.

47 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed system performs better than other similar approaches when compared with specific network parameters, and an improved security feature has been added to the mobile stations in a registered group to eliminate the unnecessary utilization of resource by unauthorized station.
Abstract: Resource provisioning and security requirements for large-scale cloud applications are a challenging issue in the design of any web-based, client-oriented business application. Extensive research on various issues in real environments has reported that on-demand provision of resources in the cloud, where connectivity issues persist for heterogeneous communication channels, requires developers to consider the network infrastructure and the environment, which is beyond their control. In a wireless mobile network, the network condition is always changeable and can neither be predicted nor controlled. In this paper, resource provisioning and checking of the continuous availability of resources to clients has been carried out using a web-based application that uses cloud servers and data centers. Secondly, an improved security feature has been added for the mobile stations in a registered group to eliminate the unnecessary utilization of resources by unauthorized stations, which maliciously consume bandwidth and other facilities provided by the cloud provider. Simulation results show that the proposed system performs better than other similar approaches when compared on specific network parameters.

40 citations


Journal ArticleDOI
TL;DR: The widely used ad hoc on-demand distance vector (AODV) routing protocol is very susceptible to black hole attacks; this paper analyzes related work and proposes a solution based on that analysis.
Abstract: Mobile ad hoc networks (MANETs) are decentralized, multi-hop networks in which intermediate nodes play the role of routers to pass data packets to the destination. Routing protocols play a vital role in the effectiveness of MANETs due to mobility and the dynamically changing topology. Owing to the broadcast nature of the wireless medium and the absence of central control, many routing protocols are vulnerable to attacks. These attacks include the black hole attack, greyhole attack, sinkhole attack, byzantine attack, sleep deprivation attack and wormhole attack. The paper discusses all of these attacks. Among these routing protocols, ad hoc on-demand distance vector routing (AODV) is a very popular protocol and is very susceptible to black hole attacks. In a black hole attack, a malicious node falsely advertises a route and sinks data packets to an incorrect destination instead of forwarding them to the correct one. The paper analyzes the related work and proposes a solution based on that analysis.

38 citations


Journal ArticleDOI
TL;DR: This paper discusses a system model for recognising humans' normal and abnormal activities along with the various feature selectors and detectors used in previous literature, and conducts a review of benchmark studies.
Abstract: There is a strong demand for smart vision-based surveillance systems owing to the increase in crime at a frightening rate at various public places like banks, airports and shopping malls, and their application in human activity recognition ranges from patient fall detection to irregular pattern recognition and human-computer interaction. As crime increases at a disturbing rate, public security violations and the high cost of security personnel have motivated the authors to conduct a strategic survey of existing vision and image processing based techniques in the past literature. The paper begins by discussing the common approach towards suspicious activity detection and recognition, followed by a summary of the supervised and unsupervised machine learning methodologies mainly based on SVM, HMM and ANN classifiers, which researchers have previously adopted for scenarios varying from single human behavior modeling to crowded scenes. Next, the paper discusses a system model for recognising humans' normal and abnormal activities along with the various feature selectors and detectors used in previous literature. This is followed by a review of benchmark studies covering comprehensive state-of-the-art methodologies in the related fields, their key points, feature learning and applications. Finally, the experimental aspects of various papers are discussed with essential performance metrics like accuracy, along with the major issues, common problems, challenges and future scope in the related field.

37 citations


Journal ArticleDOI
TL;DR: This study is intended towards early diagnosis of cancer using more efficient analytical techniques; accuracy plays an important role in prediction to improve the quality of care, thereby increasing the survival rate.
Abstract: The major challenge related to data management lies in the healthcare sector, due to the increase in patients proportional to population growth and changes in lifestyle. Data analytics and big data are becoming the trend for providing solutions to analytical problems that can be tackled using machine learning techniques. Today, cancer is evolving as one of the major attention-seeking phenomena in developed as well as developing countries, and may lead to death if not diagnosed at an early stage. Late diagnosis, and hence delayed treatment, increases the risk to survival. Thus, early detection to improve the cancer outcome is very critical. This study is intended towards early diagnosis of cancer using more efficient analytical techniques. Moreover, accuracy plays an important role in prediction to improve the quality of care, thereby increasing the survival rate. For this study, the datasets are extracted from the UCI Machine Learning Repository, prepared by the University of Wisconsin Hospitals. For the diagnosis and classification process, the K Nearest Neighbor (KNN) classifier is applied with different values of the variable K, introducing a process the authors call KNN clustering. Later, the performance of KNN is compared with K-Means clustering on the same datasets.

35 citations
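The KNN classification step the study describes can be sketched in a few lines. The two-feature toy samples below are illustrative stand-ins for the Wisconsin dataset's attributes, not the actual data:

```python
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest neighbours."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-feature samples: (features, label)
train = [((1.0, 1.1), "benign"), ((1.2, 0.9), "benign"),
         ((3.0, 3.2), "malignant"), ((3.1, 2.9), "malignant"),
         ((2.9, 3.0), "malignant")]
label = knn_predict(train, (3.0, 3.0), k=3)
```

Varying `k`, as the study does, trades off sensitivity to local noise (small k) against blurring of class boundaries (large k).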


Journal ArticleDOI
TL;DR: A proactive predictive approach to mitigate sequence number attacks which discovers misbehaving nodes during route discovery phase is proposed and suggests modifications in Ad hoc on-demand distance vector (AODV) routing protocol.
Abstract: In recent years, wireless technologies have gained enormous popularity and are used vastly in a variety of applications. Mobile Ad hoc Networks (MANETs) are temporary networks which are built for specific purposes; they do not require any pre-established infrastructure. The dynamic nature of these networks makes them more usable in ubiquitous computing. These autonomous systems of wireless mobile nodes can be set up anywhere and anytime. However, due to high mobility, the absence of a centralized authority and the open nature of the medium, MANETs are more vulnerable to various security threats. As a result, they are prone to more security issues compared to traditional networks. Ad hoc networks are highly susceptible to various types of attacks. Sequence number attacks are hazardous attacks which greatly diminish the performance of the network in different scenarios: they absorb some or all data packets and discard them. In the past few years, various researchers have proposed different solutions for detecting sequence number attacks. In this paper, we first review notable works done by various researchers to detect sequence number attacks. The review thoroughly presents the distinct aspects of each proposed approach. In addition, we propose a proactive predictive approach to mitigate sequence number attacks which discovers misbehaving nodes during the route discovery phase. The proposed approach suggests modifications in the Ad hoc on-demand distance vector (AODV) routing protocol.

34 citations
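One way to realize a proactive check during route discovery, consistent with the idea that sequence-number attackers advertise implausibly fresh routes, is a simple jump threshold on the destination sequence number. This is a hedged sketch; the threshold value and reply format are assumptions for illustration, not the paper's exact scheme:

```python
def is_suspicious(reply_seq, last_known_seq, max_jump=50):
    """Flag a route reply whose destination sequence number jumps far
    beyond the freshest value this node has ever seen (heuristic)."""
    return reply_seq - last_known_seq > max_jump

# Replies gathered during route discovery: (node_id, advertised_seq_no)
replies = [("A", 105), ("B", 9870), ("C", 112)]
last_known = 100
suspects = [n for n, s in replies if is_suspicious(s, last_known)]
```

A node flagged this way would be excluded from route selection before any data packets are sent through it, which is what makes the check proactive rather than reactive.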


Journal ArticleDOI
TL;DR: A design of a ‘spider robot’ which may be used for efficient cleaning of deadly viruses is provided and some of the emerging technologies which are causing remarkable breakthroughs and improvements which were inconceivable earlier are examined.
Abstract: The twenty-first century has witnessed the emergence of some ground-breaking information technologies that have revolutionised our way of life. The revolution began late in the 20th century with the arrival of the internet in 1995, which has given rise to methods, tools and gadgets having astonishing applications in all academic disciplines and business sectors. In this article we shall provide a design of a 'spider robot' which may be used for efficient cleaning of deadly viruses. In addition, we shall examine some of the emerging technologies which are causing remarkable breakthroughs and improvements that were inconceivable earlier. In particular we shall look at the technologies and tools associated with the Internet of Things (IoT), Blockchain, Artificial Intelligence, Sensor Networks and Social Media. We shall analyse the capabilities and business value of these technologies and tools. As we recognise, most technologies, after completing their commercial journey, are utilised by the business world in physical as well as virtual marketing environments. We shall also look at the social impact of some of these technologies and tools.

27 citations


Journal ArticleDOI
TL;DR: This work focuses on two types of ANN, the feedforward back-propagation neural network and the Elman neural network, applied to a dataset containing information on 21 ASD-based projects from 6 different software houses to solve the effort estimation problem (EEP) in ASD.
Abstract: Frequent requirement changes are a major point of concern in today's scenario. As a solution to such issues, agile software development (ASD) has efficiently replaced traditional methods of software development in industry. Because of the dynamics of different aspects of ASD, it is very difficult to track, maintain and estimate the overall product. So, in order to solve the effort estimation problem (EEP) in ASD, different types of artificial neural networks (ANNs) have been applied. This work focuses on two types of ANN: the feedforward back-propagation neural network and the Elman neural network. These two networks have been applied to a dataset which contains project information of 21 projects based on ASD from 6 different software houses to analyze and solve the EEP. The proposed work also uses three different performance metrics, i.e. mean magnitude of relative error (MMRE), mean square error (MSE) and prediction (PRED(x)), to examine the performance of the model. The results of the proposed models are compared to existing models in the literature.

27 citations
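The three performance metrics named above are straightforward to compute. A quick sketch of MMRE and PRED(25) on toy effort values (illustrative numbers, not the study's projects):

```python
def mmre(actual, estimated):
    """Mean Magnitude of Relative Error over all projects."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

def pred(actual, estimated, x=0.25):
    """PRED(x): fraction of estimates within x*100% of the actual effort."""
    hits = sum(abs(a - e) / a <= x for a, e in zip(actual, estimated))
    return hits / len(actual)

actual    = [100.0, 200.0, 50.0, 80.0]   # toy actual efforts
estimated = [110.0, 150.0, 52.0, 90.0]   # toy model outputs
m = mmre(actual, estimated)   # relative errors: 0.10, 0.25, 0.04, 0.125
p = pred(actual, estimated)   # all 4 estimates fall within 25%
```

Lower MMRE and higher PRED(x) both indicate a better estimator, which is how the two networks would be ranked against the literature's models.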


Journal ArticleDOI
TL;DR: An exhaustive comparison of various techniques proposed by researchers to resolve virtualization-specific vulnerabilities is provided, and some light is shed on the cloud shared responsibility model to decide which roles cloud service providers and cloud service customers play in cloud security.
Abstract: Virtualization is a technological revolution that separates functions from the underlying hardware and allows us to create useful environments from abstract resources. Virtualization technology has been targeted by attackers for malicious activity: attackers could compromise VM infrastructures, allowing them to access other VMs on the same system and even the host. Our article emphasizes the assessment of virtualization-specific vulnerabilities, security issues and possible solutions. In this article, a recent comprehensive survey on virtualization threats and vulnerabilities is presented. We also describe a taxonomy of cloud-based attacks on virtualized systems and existing defense mechanisms, intended to help academia, industry and researchers gain deeper and valuable insights into the attacks, so that the associated vulnerabilities can be identified and the required actions subsequently taken. We provide an exhaustive comparison of various techniques proposed by researchers to resolve virtualization-specific vulnerabilities. To guide future research, we discuss generalized security measures and requirements for achieving secure virtualized implementations. At the end, we shed some light on the cloud shared responsibility model to decide which roles cloud service providers and cloud service customers play in cloud security. The aim of this article is to give researchers, academicians and industry a superior understanding of existing attacks and defense mechanisms in cloud security.

Journal ArticleDOI
TL;DR: This paper addresses the above issues and many more to identify the bottlenecks of big data; appropriate research in big data will lead to a new wave of advances that will revolutionize the market and future analysis platforms, services and products, and will tackle all these challenges.
Abstract: Big data is an emerging torrent. We are immersed in a lake of data whose intensity is continuously increasing. With the fast growth of promising applications like social media, the web, mobile services, and other applications across various organizations, there is a rapid growth of data. Thus arises the notion of "Big Data". Data analysis, querying, storage, retrieval, organization and modeling are the fundamental challenges associated with it. These challenges are posed by the fact that big data is complex in nature. In this paper, we address the above issues and many more to identify the bottlenecks. We believe that appropriate research in big data will lead to a new wave of advances that will revolutionize the market and future analysis platforms, services and products, and will tackle all these challenges.

Journal ArticleDOI
TL;DR: The proposed system can successfully detect and examine the disease with an accuracy of 89.60%.
Abstract: The present study is based on image processing techniques to identify and classify the fungal rust disease of pea. Rust disease is caused by Uromyces fabae (Pers.) de Bary in the form of rust-colored pustules on the leaves. Plant disease detection is limited by human visual capabilities due to the microscopic symptoms of the disease, and for that task image processing techniques seem well suited. The goal of this paper is to detect and identify the early symptoms of rust disease at the microscopic level. The performance of various preprocessing, feature extraction and classification techniques was evaluated on microscopic images. Finally, a support vector machine classifier was used to detect the leaf disease of the pea plant. The proposed system can successfully detect and examine the disease with an accuracy of 89.60%. The focus has been on early detection of rust disease at the microscopic level, which prevents the disease from spreading not only across the whole plant but also to other plants.
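As a flavour of the colour-based preprocessing such a pipeline might start from, here is a hedged sketch that flags rust-like pixels by a simple channel rule. The thresholds and the toy RGB patch are illustrative assumptions, not the paper's actual feature extraction:

```python
def rust_pixel_ratio(image, r_min=120, rg_gap=40):
    """Fraction of pixels whose colour is 'rust-like': a strong red
    channel well above green (thresholds are illustrative only)."""
    flat = [px for row in image for px in row]
    rusty = sum(1 for r, g, b in flat if r >= r_min and r - g >= rg_gap)
    return rusty / len(flat)

# A toy 2x3 RGB 'leaf' patch: two rust-coloured pixels among green ones.
patch = [[(60, 160, 60), (183, 65, 14), (70, 150, 65)],
         [(180, 70, 20), (55, 140, 50), (65, 155, 60)]]
ratio = rust_pixel_ratio(patch)
```

Such a ratio would be one candidate scalar feature among those fed to the SVM classifier.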

Journal ArticleDOI
TL;DR: A solution to the crime prediction problem using Naive Bayes classifier, which includes finding the most likely criminal of a particular crime incident when the history of similar crime incidents has been provided with the incident-level crime data is introduced.
Abstract: Nowadays crimes are increasing at a high rate, which is a great challenge for the police department of a city. A huge amount of data on different types of crimes taking place in different geographic locations is collected and stored annually. It is highly essential to analyze this data so that potential solutions for solving and mitigating crime incidents, and predicting similar incident patterns for the future, become possible. This can be carried out using big data and various machine learning techniques in conjunction. The paper introduces a solution to the crime prediction problem using the Naive Bayes classifier, which includes finding the most likely criminal of a particular crime incident when the history of similar crime incidents has been provided as incident-level crime data. The incident-level crime data is provided as a crime dataset whose attributes, or crime parameters, are the incident date and location, crime type, criminal ID and the acquaintances. The acquaintances are the suspects who are either directly involved in the incident or are indirectly acquaintances of the criminal. Acquiring a real-time crime dataset is a difficult process in practice due to the confidentiality principle, so prepared crime datasets are used as inputs, following state-of-the-art methods. The proposed system is tested on the crime prediction problem using the learned data, and the experimental results show that it provides better results in finding potential solutions and crime patterns.
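The Naive Bayes step described above can be sketched with a count-based model. The toy incident history, attribute encoding and add-one smoothing below are illustrative assumptions, not the paper's dataset or exact formulation:

```python
from collections import Counter, defaultdict

def train_nb(incidents):
    """Count priors P(criminal) and per-criminal attribute frequencies."""
    priors = Counter(c for c, _ in incidents)
    likelihood = defaultdict(Counter)
    for criminal, attrs in incidents:
        for a in attrs:
            likelihood[criminal][a] += 1
    return priors, likelihood

def predict(priors, likelihood, attrs):
    """Pick the criminal maximising P(c) * prod P(attr|c), smoothed."""
    total = sum(priors.values())
    def score(c):
        p = priors[c] / total
        for a in attrs:   # add-one smoothing for unseen attributes
            p *= (likelihood[c][a] + 1) / (priors[c] + 2)
        return p
    return max(priors, key=score)

# Toy incident-level data: (criminal_id, {crime_type, location, acquaintance})
history = [("C1", {"theft", "market", "acq_X"}),
           ("C1", {"theft", "market", "acq_Y"}),
           ("C2", {"assault", "station", "acq_Z"})]
priors, lik = train_nb(history)
suspect = predict(priors, lik, {"theft", "market"})
```

The conditional-independence assumption is what lets the per-attribute probabilities simply multiply, which keeps the model tractable on large incident histories.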

Journal ArticleDOI
TL;DR: A hybrid method for intrusion detection based on coupling Discrete Wavelet Transforms and an Artificial Neural Network (ANN) is proposed; the experimental results show that the proposed model has higher accuracy and detection rate while reducing false alarms, making it suitable for real-time networks.
Abstract: Network intrusion detection is the process of analyzing network traffic so as to unearth any unsafe and possibly disastrous exchanges happening over the network. To guarantee the confidentiality, availability and integrity of any networking system, accurate and speedy classification of the transactions becomes indispensable. The potential problems of all current Intrusion Detection System models are a lower detection rate for less frequent attack groups and a higher false alarm rate. Signal processing has lately become a popular technique in network and simulation studies. In this study, a hybrid method for intrusion detection based on coupling Discrete Wavelet Transforms and an Artificial Neural Network (ANN) is proposed. The imbalance of the instances across the dataset was eliminated by SMOTE-based oversampling of the less frequent class and random under-sampling of the dominant class. A three-layer ANN was used for classification. The experimental results on the KDD99 dataset show that the proposed model has higher accuracy and detection rate and at the same time reduced false alarms, making it suitable for real-time networks.
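The wavelet side of the hybrid can be illustrated with the simplest member of the DWT family, the Haar transform; the abstract does not state which wavelet is coupled with the ANN, so treating it as Haar here is an assumption for illustration only:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: pairwise
    sums (approximation) and differences (detail), scaled by 1/sqrt(2)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

# A toy 8-sample 'traffic feature' window (length must be even).
a, d = haar_dwt([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
```

The approximation coefficients capture the coarse trend and the detail coefficients the local fluctuations; together they form a compact representation that can be fed to the ANN classifier.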

Journal ArticleDOI
TL;DR: A qualitative analysis of vulnerabilities and related threats corresponding to each service model of cloud computing is presented, and countermeasures are proposed to enhance security in cloud computing.
Abstract: Nowadays, cloud computing is considered the most cost-effective platform for providing business and consumer IT services over the Internet. However, security is recognized as the main stumbling block for wider adoption, due to the outsourcing of services to third parties. Keeping this in view, security issues in the three service models of cloud computing, namely SaaS, PaaS and IaaS, have been discussed. The present paper provides a qualitative analysis of all vulnerabilities and related threats corresponding to each service model. In the last section, countermeasures have been proposed to enhance security in cloud computing.

Journal ArticleDOI
TL;DR: This manuscript proposes an approach for personalizing the web search using techniques, applications and opportunities of Data mining and web mining and provides the qualitative results based on web user behavior.
Abstract: Web search engines assist users to discover useful information on the World Wide Web. However, when the same query is submitted by different users, typical search engines provide identical results regardless of who submitted the query; they usually provide search results without considering user context or interests. E-commerce web search personalization is an emerging research area which has already attracted considerable interest. Anticipating the web user's needs is one of the key factors of this framework, providing qualitative results based on web user behavior. In this manuscript, we propose an approach for personalizing web search using the techniques, applications and opportunities of data mining and web mining. Personalization of e-commerce has become a trend in the current business scenario to overcome the problem of product overload and to satisfy customers with special offerings. Over the last 10 years it has been adopted as a skillful strategy to give enhanced service to web clients and remain sustainable in the market.

Journal ArticleDOI
TL;DR: A zonal based clustering technique is proposed wherein the field is divided into zones and the selection of cluster head is dynamic so as to balance the load with even dissipation of power by the deployed Sensor Node.
Abstract: Wireless sensor networks (WSNs) have drawn huge attention from researchers due to their applicability in a variety of applications. A WSN consists of tiny, low-powered devices called sensors whose objective is to monitor the area of interest. In many applications of WSNs, it is impossible to modify the topology or to replace the battery-based power supply of the sensor nodes. Hence, elongation of the network lifetime is required to meet the objective of setting up the network. In this paper, a zonal clustering technique is proposed wherein the field is divided into zones. The selection of the cluster head is dynamic, so as to balance the load with even dissipation of power by the deployed sensor nodes. The proposed work is compared with the DEEC, SEP, Z-SEP and LEACH protocols, and simulation validates the protocol, showing an elongated stability region and extended lifetime with more successful packet delivery to the base station.
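A minimal sketch of the zonal idea, assuming square zones and a residual-energy-greedy cluster-head rule; the paper's actual selection criterion may combine further factors, so this is illustrative only:

```python
def elect_cluster_heads(nodes, zone_size=50.0):
    """Partition the field into square zones and pick, per zone, the node
    with the highest residual energy as cluster head (illustrative rule)."""
    zones = {}
    for node_id, x, y, energy in nodes:
        key = (int(x // zone_size), int(y // zone_size))
        best = zones.get(key)
        if best is None or energy > best[3]:
            zones[key] = (node_id, x, y, energy)
    return {zone: node[0] for zone, node in zones.items()}

# (id, x, y, residual_energy) for a toy 100m x 100m field.
nodes = [("n1", 10, 10, 0.9), ("n2", 20, 30, 0.4),
         ("n3", 60, 15, 0.7), ("n4", 75, 20, 0.8),
         ("n5", 30, 80, 0.5)]
heads = elect_cluster_heads(nodes)
```

Re-running the election each round as energies deplete is what makes the cluster-head role rotate and the power dissipation even out across the zone.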

Journal ArticleDOI
TL;DR: In this paper, particle swarm optimization (PSO) technique has been applied with recurrent neural network long short-term memory (LSTM) algorithm to find an optimal architecture for feed forward neural network.
Abstract: Designing an optimal neural network architecture plays an important role in the performance of a neural network model. In the past few years, various bio-inspired optimization techniques have been applied to find the optimal architecture of a neural network model. In this paper, the particle swarm optimization (PSO) technique has been applied together with the recurrent neural network long short-term memory (LSTM) algorithm to find an optimal architecture for a feedforward neural network. The parameters considered for optimizing the architecture are the number of hidden neurons, the learning rate and the activation function. The fitness function applied for selecting the optimal combination of parameters is the root mean square error (RMSE). Due to the privatization of education, the number of private institutes and universities is increasing rapidly every year. This increase has resulted in a huge amount of data (NAAC reports) regarding the assessment and accreditation of higher education institutions. A dataset of 500 educational institutes has been collected from the official site of the National Assessment and Accreditation Council (NAAC). The hybrid model of PSO with the LSTM algorithm has been applied to this educational dataset. Selection of the optimal architecture is done on the basis of RMSE, accuracy and other performance parameters.
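A bare-bones PSO loop of the kind used to search such parameter combinations, shown here minimising a stand-in quadratic "RMSE surface" rather than the paper's actual LSTM fitness evaluation (swarm size, inertia and acceleration constants are common textbook values, not the paper's):

```python
import random

def pso_minimize(f, dim=2, swarm=20, iters=100, seed=1):
    """Bare-bones particle swarm optimisation with a global-best topology."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=f)[:]                # swarm's best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

# Stand-in fitness: a quadratic 'RMSE surface' with optimum at (1, 3).
fitness = lambda p: (p[0] - 1) ** 2 + (p[1] - 3) ** 2
best, best_val = pso_minimize(fitness)
```

In the paper's setting, each fitness evaluation would instead train an LSTM configured by the particle's position (hidden neurons, learning rate, activation function) and return its validation RMSE.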

Journal ArticleDOI
TL;DR: It is shown with the help of confirmatory experiments that MRR and SR are improved by 103.25% and 32.11% respectively by employing a Taguchi-fuzzy approach for parametric optimisation of electric discharge machining with multiple performance measures.
Abstract: This paper employs a Taguchi-fuzzy approach for parametric optimisation of electric discharge machining with multiple performance measures. In this work, seven input parameters (one at two levels and six at three levels) and two performance measures have been considered, and the experiments are designed using Taguchi's L36 orthogonal array. A fuzzy model is formed using the Mamdani inference system, and the optimal combination of process parameters has been obtained on the basis of the multi-performance fuzzy index (MPFI) value, calculated using different shapes of membership functions (MF), viz. triangular, trapezoidal and Gaussian. The Gaussian MF is found to provide better results than the triangular and trapezoidal MFs. ANOVA analysis has also been carried out on the MPFI to find the percentage contribution of each parameter. It is shown with the help of confirmatory experiments that MRR and SR are improved by 103.25% and 32.11% respectively by employing the proposed approach.
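The three membership-function shapes compared in the paper can be written down directly. The parameterisations below are the standard textbook forms; the paper's actual support points for the MPFI fuzzy sets are an assumption here:

```python
import math

def tri_mf(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trap_mf(x, a, b, c, d):
    """Trapezoidal membership: flat top of 1 between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gauss_mf(x, mean, sigma):
    """Gaussian membership: smooth bell centred on `mean`."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

# Degree of membership of an MPFI-like score in a 'medium' fuzzy set.
m_tri = tri_mf(0.55, 0.2, 0.5, 0.8)
m_trap = trap_mf(0.55, 0.2, 0.4, 0.6, 0.8)
m_gauss = gauss_mf(0.55, 0.5, 0.15)
```

The Gaussian form has no sharp corners, which is one plausible reason it yields smoother MPFI surfaces and, per the paper, better results than the piecewise-linear shapes.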

Journal ArticleDOI
TL;DR: The conclusive results show that the soft computing approach has the propensity to identify faults in the process of software development.
Abstract: In the process of software development, software fault prediction is a useful practice to ensure reliable and high-quality software products. It plays a vital role in the process of software quality assurance. A high-quality software product contains a minimum number of faults and failures. Software fault prediction examines the vulnerability of a software product towards faults. In this paper, a comparative analysis of various soft computing approaches to software fault prediction is presented. In addition, an analysis of the various pros and cons of soft computing techniques for software fault prediction is also provided. The conclusive results show that the soft computing approach has the propensity to identify faults in the process of software development.

Journal ArticleDOI
TL;DR: This paper focuses on three widely popular cryptographic algorithms: AES, DES and Blowfish, well known symmetric key cryptographic algorithms useful in providing security to IT systems.
Abstract: The value of data stored on digital platforms is growing very rapidly. Also, most information systems are network-based, with many resources like data, software applications and business logic which are always susceptible to attacks. To provide security to such information systems, many cryptographic algorithms are available. This paper focuses on three such widely popular cryptographic algorithms: AES, DES and Blowfish. These are well-known symmetric key cryptographic algorithms useful in providing security to IT systems. The main objective of this research paper is to analyze the performance of these algorithms on small and large data files. The performance comparison is based on the execution time and memory used by these algorithms during implementation. Experimental results and graphical reports make clear which algorithm is more suitable for small and large data files. Analytical results also describe which algorithm is more suitable for time- and memory-constrained systems.
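A timing harness of the sort used for such comparisons can be sketched as follows. The XOR "cipher" is only a stand-in so the sketch stays dependency-free, and it is NOT secure; in real measurements, AES, DES and Blowfish implementations from a cryptography library would be plugged into `time_cipher` instead:

```python
import time

def time_cipher(encrypt, data, runs=5):
    """Average wall-clock time of `encrypt` over several runs."""
    start = time.perf_counter()
    for _ in range(runs):
        encrypt(data)
    return (time.perf_counter() - start) / runs

def xor_cipher(data, key=0x5A):
    """Stand-in 'cipher' (NOT secure), used only to exercise the harness."""
    return bytes(b ^ key for b in data)

small = b"x" * 1_000        # a small 'file'
large = b"x" * 200_000      # a large 'file'
t_small = time_cipher(xor_cipher, small)
t_large = time_cipher(xor_cipher, large)
```

Averaging over several runs and using `time.perf_counter` (a monotonic high-resolution clock) reduces the noise that single-shot wall-clock measurements suffer from.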

Journal ArticleDOI
TL;DR: A solution based on convolutional neural networks (CNN) is presented for the grading of flue-cured tobacco leaves, using the max pooling technique (MPT) to reduce the size of the model.
Abstract: In this paper, a solution based on convolutional neural networks (CNN) is presented for the grading of flue-cured tobacco leaves. For performance analysis, 120 samples of cured tobacco leaves are reduced from 1450 × 1680 Red–Green–Blue (RGB) images to 256 × 256, with 16, 32 and 64 feature kernels for the hidden layers respectively. The neural network comprises four hidden layers: convolution and pooling are performed on the first three hidden layers, while the fourth is fully connected as in regular neural networks. The max pooling technique (MPT) is used in the proposed model to reduce the size. Classification is done on three major classes, namely Class-1, Class-2 and Class-3, obtaining a global efficiency of 85.10% on a test set consisting of about fifteen images of each cluster. A comparative study is performed between the results from the proposed model and existing models of tobacco leaf classification.
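The final stage that turns the fully connected layer's class scores into grading probabilities is a softmax, which can be sketched directly (the logit values below are toy numbers, not the model's outputs):

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(logits)                     # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy raw scores for Class-1, Class-2 and Class-3 from the final layer.
probs = softmax([2.0, 1.0, 0.1])
grade = probs.index(max(probs)) + 1     # predicted grade class
```

Taking the argmax of the probabilities gives the predicted leaf grade, while the probabilities themselves indicate the model's confidence in that grading.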

Journal ArticleDOI
TL;DR: This enhanced energy detection technique works well at low signal-to-noise ratio (SNR), which makes the communication system more power efficient and usable for low power applications.
Abstract: In order to increase the spectral efficiency of communication systems, spectrum sensing techniques may be used for proficient utilization of inadequate spectrum resources. Spectrum sensing identifies the unused spectrum holes originally assigned to the primary users (PU); these spectrum holes are then assigned to the secondary or cognitive users while avoiding interference to the primary users. In this paper, a spectrum assignment technique based on energy detection is proposed. This enhanced energy detection technique works well at low signal-to-noise ratio (SNR), which makes the communication system more power efficient and usable for low power applications. Further, the performance of the proposed spectrum sensing method is examined for a cognitive radio (CR) network by calculating the probability of detection, probability of false alarm and error probability in the presence of additive Gaussian noise, and the effect of different sensing parameters on the probability of error in detecting primary users is also evaluated.
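A minimal sketch of the underlying energy detection idea (not the paper's enhanced detector): average the squared received samples and compare against a threshold, estimating the false alarm probability on noise alone and the detection probability on a weak signal plus noise. The signal model and parameter values here are illustrative assumptions:

```python
import random

def energy_detect(samples, threshold):
    """Declare the primary user present if the average sample energy
    exceeds the decision threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

def awgn(n, sigma=1.0):
    """n samples of zero-mean additive white Gaussian noise."""
    return [random.gauss(0.0, sigma) for _ in range(n)]

random.seed(0)
N, trials, threshold = 500, 200, 1.2

# Probability of false alarm: detector fires on noise-only observations
pfa = sum(energy_detect(awgn(N), threshold) for _ in range(trials)) / trials

# Probability of detection: weak alternating (BPSK-like) signal in noise
amp = 0.7   # low amplitude -> low SNR regime
pd = sum(
    energy_detect([amp * (1 if i % 2 else -1) + n
                   for i, n in enumerate(awgn(N))], threshold)
    for _ in range(trials)
) / trials
print(f"Pfa={pfa:.2f}, Pd={pd:.2f}")
```

Raising the threshold trades false alarms for missed detections; the paper studies exactly this trade-off as a function of the sensing parameters.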

Journal ArticleDOI
TL;DR: In this analysis, a large and complex thyroid dataset is analyzed using data mining meta classifier algorithms: Boosting, Bagging, Stacking and Voting, with a new ensemble model, comparing classification accuracy, sensitivity and specificity.
Abstract: Data mining algorithms provide an easy way to solve problems in medical data analysis, supporting complex analysis to identify issues in a dataset. Nowadays everyone strives for good health, but fast-paced lifestyles make it difficult to maintain, and the body's hormone system is not easy to keep in balance. Hormonal disturbance, the major cause behind thyroid disorders, is a particularly common issue in women. Thyroid symptoms are often ignored at first; with some knowledge of them, a major problem can be caught early and a life protected. Because the symptoms of thyroid disorders are very similar to one another, they cannot easily be distinguished for identification. Data mining provides major help on thyroid datasets, with different algorithms for classification, clustering, association, etc. In this work, different machine learning meta classifier algorithms are used for thyroid dataset classification: a large and complex thyroid dataset is analyzed using Boosting, Bagging, Stacking and Voting together with a new ensemble model, comparing classification accuracy, sensitivity and specificity, and the dataset is readily classified into its different class levels. Thyroid disease is very common and related to diet and daily living; by studying its various forms with classification algorithms it can be avoided, and with the help of classification an expert system that follows the reasoning of an experienced doctor can be developed.
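Of the meta classifier schemes listed, hard voting is the simplest to illustrate: each base classifier predicts a class and the majority wins. The classifier outputs and thyroid class labels below are hypothetical, not taken from the paper's dataset:

```python
from collections import Counter

def vote(predictions):
    """Hard majority vote across the base classifiers' outputs."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of three base classifiers on four patients,
# with illustrative class labels 'normal', 'hyper', 'hypo'
clf_outputs = [
    ['normal', 'hyper', 'hypo',   'normal'],   # classifier 1
    ['normal', 'hyper', 'normal', 'hypo'],     # classifier 2
    ['hyper',  'hyper', 'hypo',   'normal'],   # classifier 3
]

# Vote column-wise: one ensemble prediction per patient
ensemble = [vote(col) for col in zip(*clf_outputs)]
print(ensemble)   # ['normal', 'hyper', 'hypo', 'normal']
```

Bagging and Boosting differ in how the base classifiers are trained (resampled data vs. re-weighted data), but combine predictions in a similarly simple way.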

Journal ArticleDOI
TL;DR: A detailed performance evaluation and structural analysis are performed in different aspects to confirm that the proposed circuits (one-bit and 4-bit) have superior performance in comparison to previously reported works.
Abstract: Quantum-dot cellular automata (QCA) is among the most promising nanotechnologies as a substitute for current metal oxide semiconductor field effect transistor based devices. Therefore, much attention has been paid to improving the efficiency of QCA circuits in different aspects. Adder circuits in particular are widely investigated, since their performance directly affects the performance of the whole digital system. In this paper, a new ultra-high speed QCA full adder cell based on multi-layer structures is proposed. The proposed full adder cell has a simple design, using a 3-input Exclusive-OR (TIEO) gate to compute the Sum bit and a Majority gate to compute the Carry bit. To verify the efficacy of the presented full adder cell, it is used as the main building block in a 4-bit ripple carry adder circuit, where significant improvements in terms of area and cell count have been achieved: simulation results show reductions of 20% and 1.8% respectively in area and cell count overhead. A detailed performance evaluation and structural analysis are performed in different aspects to confirm that the proposed circuits (one-bit and 4-bit) have superior performance in comparison to previously reported works. The QCADesigner CAD tool has been used to verify the correct functionality of the proposed architectures.
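The logic-level behaviour of such a full adder cell (Sum from a 3-input XOR, Carry from a majority gate) and its 4-bit ripple carry composition can be sketched as follows. This models only the Boolean functions involved, not the QCA cell layout or timing:

```python
def majority(a, b, c):
    """3-input majority gate, the basic QCA logic primitive."""
    return (a & b) | (b & c) | (a & c)

def full_adder(a, b, cin):
    """Full adder cell: Sum from a 3-input XOR, Carry from a majority gate."""
    return a ^ b ^ cin, majority(a, b, cin)

def ripple_carry_add(x, y, bits=4):
    """4-bit ripple carry adder chaining the full adder cell; the carry
    out of each bit position feeds the carry in of the next."""
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out, carry

print(ripple_carry_add(0b1011, 0b0110))  # (1, 1): 11 + 6 = 17 = carry + 0b0001
```

In QCA, the majority gate is the natural native primitive, which is why Carry comes "for free" while Sum needs the dedicated TIEO structure.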

Journal ArticleDOI
TL;DR: This paper proposes a non-algorithmic technique for SDEE, i.e. a hybrid model of a wavelet neural network (WNN) and a metaheuristic algorithm, and it was observed that integrating metaheuristic algorithms with the WNN improved software effort prediction results in comparison to a traditional WNN that is not optimized using any metaheuristic technique.
Abstract: Software development effort estimation (SDEE) means estimating software cost and effort at an early stage of software development. It is a difficult task, as the characteristics of the software to be developed are not known at the time of estimation, but it is very important for software development organizations as it ultimately leads to a project's success. This paper proposes a non-algorithmic technique for SDEE: a hybrid model of a wavelet neural network (WNN) and a metaheuristic algorithm. Two metaheuristic algorithms, the firefly algorithm and the bat algorithm, are used, and the efficiency of the WNN integrated with each of them is investigated. Two variants of wavelet functions, Morlet and Gaussian, are used as activation functions in the WNN. The proposed techniques are experimentally evaluated on PROMISE SDEE repositories. It was observed that integrating metaheuristic algorithms with the WNN improved software effort prediction results in comparison to a traditional WNN that is not optimized using any metaheuristic technique. The results are also statistically validated using a non-parametric statistical test in the IBM SPSS tool.
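The two wavelet activations mentioned, Morlet and Gaussian, can be sketched directly; the single-neuron wiring shown below is a generic WNN illustration under assumed translation/dilation parameters, not the paper's exact architecture:

```python
import math

def morlet(t):
    """Real-valued Morlet wavelet, a common WNN activation:
    cos(5t) * exp(-t^2 / 2)."""
    return math.cos(5.0 * t) * math.exp(-t * t / 2.0)

def gaussian_wavelet(t):
    """Gaussian (first-derivative) wavelet: -t * exp(-t^2 / 2)."""
    return -t * math.exp(-t * t / 2.0)

def wnn_neuron(x, weights, translation, dilation, activation=morlet):
    """One wavelet neuron: the activation is applied to the weighted
    input sum after shifting by `translation` and scaling by `dilation`.
    A metaheuristic (firefly/bat) would tune weights, translations and
    dilations to minimize prediction error."""
    z = sum(w * xi for w, xi in zip(weights, x))
    return activation((z - translation) / dilation)

print(wnn_neuron([0.4, 0.6], [0.5, 0.5], translation=0.0, dilation=1.0))
```

The metaheuristic's role is purely as an optimizer of these free parameters; the network structure itself is unchanged between the firefly and bat variants.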

Journal ArticleDOI
TL;DR: A study based on the combination of various feature extraction techniques for character recognition has been presented, by extracting statistical features in hierarchical order from the pre-segmented degraded offline handwritten Gurmukhi characters.
Abstract: Recognition of degraded offline handwritten characters of the Gurmukhi script is a very challenging task due to the complex structural properties of the script, which are not found in the majority of other scripts. A study based on the combination of various feature extraction techniques for character recognition is presented in this paper. By extracting statistical features in hierarchical order from pre-segmented degraded offline handwritten Gurmukhi characters, the potential results are analyzed for recognition. Four types of feature extraction techniques, namely, zoning, diagonal, peak extent based features (horizontal and vertical) and shadow features, are considered in the present study. For classification, three classifiers, specifically k-NN, decision tree and random forest, are employed to demonstrate their effect on the problem of degraded offline handwritten Gurmukhi character recognition. The authors have collected 8960 samples, which are partitioned using a partitioning strategy and a fivefold cross validation technique. In the partitioning strategy, 80% of the data is taken as the training dataset and the remaining 20% as the testing dataset. Various performance measures, such as recognition accuracy, false rejection rate (FRR), area under curve (AUC) and root mean square error (RMSE), are also used to analyze the performance of the features and classifiers.
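Of the four feature types, zoning is the simplest to illustrate: the character image is split into a grid of zones and the foreground-pixel density of each zone becomes one feature. A minimal sketch on a toy binary image (the 8 × 8 "character" is fabricated for illustration):

```python
def zoning_features(image, zones=4):
    """Split a binary character image into a zones x zones grid and
    return the foreground-pixel density of each cell as a feature."""
    h, w = len(image), len(image[0])
    zh, zw = h // zones, w // zones
    features = []
    for zr in range(zones):
        for zc in range(zones):
            cell = [image[r][c]
                    for r in range(zr * zh, (zr + 1) * zh)
                    for c in range(zc * zw, (zc + 1) * zw)]
            features.append(sum(cell) / len(cell))
    return features

# Toy 8x8 "character": a single vertical stroke at column 2
img = [[1 if c == 2 else 0 for c in range(8)] for r in range(8)]
print(zoning_features(img))   # 16 densities, one per 2x2 zone
```

The diagonal, peak extent and shadow features work similarly, each summarizing a different spatial property of the same grid of pixels before the k-NN, decision tree or random forest classifier is applied.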

Journal ArticleDOI
TL;DR: The results show that AdaBoost with Hybrid KNN–RBFSVM as base estimator achieves improved accuracy and AUC value compared to the other ensembles.
Abstract: The authors present an effective breast mammogram classification technique that applies computer assisted tools to augment human experts, namely radiologists. They present AdaBoost with RBFSVM and a Hybrid KNN–RBFSVM as base estimator, adaptively adjusting the kernel parameters of RBFSVM (γ and C). The KNN–RBFSVM classifier is developed by adaptively adjusting the kernel parameters of the SVM using optimized parameters of KNN. In the proposed Hybrid KNN–RBFSVM algorithm, weighted KNN is first applied on the training mammograms: equal weights are initially assigned to each mammogram and updated weights are found. The updated weights from KNN are then used as the initial weights for the RBFSVM. This Hybrid KNN–RBFSVM algorithm is used as the base estimator in AdaBoost, with the number of estimators set to 200, for prediction of a test mammogram as benign or malignant. GLCM features are extracted from the mammograms; since inconsistent and irrelevant features affect the classification accuracy, the authors use the proposed PreARM algorithm for feature optimization. The classification results of AdaBoost with DT, KNN, RBFSVM and Hybrid KNN–RBFSVM as base estimators are compared in terms of accuracy and area under the ROC curve, using the DDSM mammogram image dataset. The results show that AdaBoost with Hybrid KNN–RBFSVM as base estimator achieves improved accuracy and AUC value compared to the other ensembles.
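The core AdaBoost mechanism the ensemble relies on, re-weighting samples after each base estimator so later estimators focus on misclassified mammograms, can be sketched as a single boosting round. The labels below are illustrative, not DDSM data:

```python
import math

def adaboost_round(weights, y_true, y_pred):
    """One AdaBoost round: compute the estimator weight alpha from the
    weighted error, then re-weight samples so the next base estimator
    concentrates on the mistakes. Returns (alpha, normalized weights)."""
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    err = max(min(err, 1.0 - 1e-10), 1e-10)        # numerical guard
    alpha = 0.5 * math.log((1.0 - err) / err)
    new_w = [w * math.exp(alpha if t != p else -alpha)
             for w, t, p in zip(weights, y_true, y_pred)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

# Four mammograms (+1 malignant / -1 benign), equal initial weights;
# the base estimator misclassifies sample index 2
w0 = [0.25] * 4
alpha, w1 = adaboost_round(w0, [1, -1, 1, -1], [1, -1, -1, -1])
print(alpha, w1)   # the misclassified sample's weight increases
```

In the paper's ensemble this loop runs 200 times, with the Hybrid KNN–RBFSVM acting as the base estimator inside each round.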

Journal ArticleDOI
TL;DR: The proposed algorithm considers the dynamic heterogeneity of both workload and resources to provide better results than the existing algorithm, and has been simulated on CloudSim.
Abstract: Data and computational centres consume a large amount of energy and are limited by power density and computational capacity. Compared with traditional distributed systems and homogeneous systems, a heterogeneous system can provide improved performance and dynamic provisioning, which can reduce energy consumption and map dynamic requests to heterogeneous resources. The problem of resource utilization in heterogeneous computing systems has been studied with variations. This paper discusses scheduling of independent, non-communicating, variable length tasks, with regard to CPU utilization, low energy consumption and makespan, using the dynamic heterogeneous shortest job first (DHSJF) model. Tasks are scheduled so as to minimize the actual CPU time and the overall system execution time, or makespan, and during execution the load is balanced dynamically. Handling dynamic heterogeneity achieves a reduced makespan, which increases resource utilization. Some existing methods are not designed for fully heterogeneous systems; the proposed method considers the dynamic heterogeneity of both workload and resources, and provides better results than the existing algorithm. The proposed algorithm has been simulated on CloudSim.
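A much-simplified sketch of shortest-job-first scheduling on heterogeneous resources (not the paper's full DHSJF model, which also handles dynamic load balancing and energy): the shortest tasks are dispatched first, each to the resource that becomes free earliest, and the makespan is the finish time of the last resource:

```python
import heapq

def sjf_schedule(task_lengths, resource_speeds):
    """Shortest-job-first on heterogeneous resources: repeatedly assign
    the shortest pending task to the resource that frees up earliest.
    Returns the overall makespan. Task lengths are in abstract work
    units (e.g. MI), speeds in units per second (e.g. MIPS)."""
    # Heap of (time this resource is next free, its speed)
    pool = [(0.0, speed) for speed in resource_speeds]
    heapq.heapify(pool)
    for length in sorted(task_lengths):          # shortest job first
        free_at, speed = heapq.heappop(pool)
        heapq.heappush(pool, (free_at + length / speed, speed))
    return max(t for t, _ in pool)               # makespan

tasks = [40, 10, 30, 20, 50]       # illustrative task lengths
speeds = [1.0, 2.0]                # two heterogeneous VM speeds
print(sjf_schedule(tasks, speeds))  # 55.0
```

Note this greedy earliest-free rule is only a baseline; a heterogeneity-aware scheduler like DHSJF would also weigh each resource's speed when choosing where to place a task.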