Showing papers in "Journal of Intelligent Systems" in 2019


Journal ArticleDOI
TL;DR: An extended version of the TOPSIS method using cubic information is constructed, and a numerical application is provided to verify and demonstrate the practicality of the method.
Abstract: In this paper, we construct an extended version of the TOPSIS method by using cubic information, and provide a numerical application to verify and demonstrate the practicality of the method. A new extension of the gray relation analysis (GRA) method is introduced by using cubic information. We also propose the cubic fuzzy multi-attribute group decision-making model, and introduce the relation between the cubic TOPSIS method and the cubic gray relation analysis (CGRA) method. Finally, the proposed method is applied to a selection problem in the sol–gel synthesis of titanium carbide nanopowders, and this numerical application is used to analyze the method.
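
The cubic extension builds on the classic TOPSIS procedure. As a point of reference, here is a minimal sketch of ordinary (crisp) TOPSIS in Python; the cubic variant described in the paper replaces the crisp decision matrix with cubic fuzzy values, which this sketch does not model.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classic crisp TOPSIS: rank alternatives by relative closeness
    to the ideal solution. matrix is (alternatives x criteria);
    benefit[j] is True when larger values of criterion j are better."""
    M = np.asarray(matrix, dtype=float)
    R = M / np.linalg.norm(M, axis=0)            # vector normalization
    V = R * np.asarray(weights, dtype=float)     # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher = better alternative

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
print(scores.argsort()[::-1])                    # alternatives, best first
```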

58 citations


Journal ArticleDOI
TL;DR: Experimental results show that the contextual-based SWN feature vector obtained through the shift polarity approach alone led to an improved Twitter sentiment analysis system that outperforms the traditional reverse polarity approach by 2–6%.
Abstract: This paper addresses the problem of Twitter sentiment analysis through a hybrid approach in which a SentiWordNet (SWN)-based feature vector acts as input to the classification model, a Support Vector Machine. Our main focus is to handle the lexical modifier negation during SWN score calculation to improve classification performance. Thus, we present a naive and novel shift approach in which negation acts as both a sentiment-bearing word and a modifier, and we shift the scores of words from SWN based on their contextual semantics, inferred from neighbouring words. Additionally, we augment the negation accounting procedure with a few heuristics for handling cases in which the presence of negation does not necessarily mean negation. Experimental results show that the contextual-based SWN feature vector obtained through the shift polarity approach alone led to an improved Twitter sentiment analysis system that outperforms the traditional reverse polarity approach by 2–6%. We validate the effectiveness of our hybrid approach considering negation on the benchmark Twitter corpus from the SemEval-2013 Task 2 competition.
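
To make the difference between the two negation strategies concrete, here is a toy Python scorer; the lexicon, shift amount, and negation scope are illustrative stand-ins, not the paper's actual SWN pipeline.

```python
NEGATORS = {"not", "no", "never", "cannot"}
SHIFT = 0.8   # illustrative shift amount; the paper derives scores from SWN

def score(tokens, lexicon, mode="shift", scope=2):
    """Sum word polarities; words inside a negator's scope are either
    sign-flipped ('reverse') or shifted toward the opposite pole ('shift')."""
    total, left = 0.0, 0
    for tok in tokens:
        if tok in NEGATORS:
            left = scope          # negation affects the next `scope` words
            continue
        s = lexicon.get(tok, 0.0)
        if left > 0:
            if mode == "reverse":
                s = -s
            elif s != 0.0:
                s = s - SHIFT if s > 0 else s + SHIFT
            left -= 1
        total += s
    return total

lex = {"good": 0.75}
print(score("not good".split(), lex, mode="reverse"))  # -0.75
print(score("not good".split(), lex, mode="shift"))    # -0.05
```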

48 citations


Journal ArticleDOI
TL;DR: A hybrid CNN-BLSTM architecture is proposed to appropriately use the spatial and temporal properties of the speech signal, improve the continuous speech recognition task, and overcome another shortcoming of CNNs, namely that speaker-adapted features cannot be directly modeled in them.
Abstract: Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are an advanced version of DNNs that achieve a 4–12% relative gain in word error rate (WER) over DNNs. The existence of spectral variations and local correlations in the speech signal makes CNNs more capable of speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) produces a higher recognition rate in acoustic modeling because it is able to reinforce higher-level representations of acoustic data. Since both spatial and temporal properties of the speech signal are essential for a high recognition rate, the idea of combining the two networks arose. In this paper, a hybrid CNN-BLSTM architecture is proposed to appropriately use these properties and to improve the continuous speech recognition task. Further, we explore different methods, such as weight sharing, the appropriate number of hidden units, and the ideal pooling strategy for the CNN, to achieve a high recognition rate. Specifically, the focus is also on how many BLSTM layers are effective. This paper also attempts to overcome another shortcoming of CNNs, namely that speaker-adapted features cannot be directly modeled in them. Next, various non-linearities with or without dropout are analyzed for speech tasks. Experiments indicate that the proposed hybrid architecture with speaker-adapted features and maxout non-linearity with dropout shows 5.8% and 10% relative decreases in WER over the CNN and DNN systems, respectively.
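
A minimal Keras sketch of such a CNN-BLSTM acoustic model is shown below, assuming 40-dimensional filterbank inputs, frequency-only pooling, and framewise senone targets; layer sizes and the output dimension are placeholders rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, MELS, SENONES = 200, 40, 1000   # placeholder shapes

inp = layers.Input(shape=(FRAMES, MELS, 1))          # (time, frequency, channel)
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(inp)
x = layers.MaxPooling2D(pool_size=(1, 2))(x)         # pool along frequency only
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(1, 2))(x)
x = layers.Reshape((FRAMES, -1))(x)                  # back to (time, features)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
x = layers.Dropout(0.3)(x)                           # dropout, as explored in the paper
out = layers.TimeDistributed(layers.Dense(SENONES, activation="softmax"))(x)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```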

48 citations


Journal ArticleDOI
TL;DR: This work has constructed the traditional MT model using Moses toolkit and has additionally enriched the language model using external data sets and ranked the phrase tables using an RNN encoder-decoder module created originally as a part of the GroundHog project of LISA lab.
Abstract: Machine translation (MT) is the automatic translation of a source language to its target language by a computer system. In the current paper, we propose an approach of using recurrent neural networks (RNNs) over traditional statistical MT (SMT). We compare the performance of the phrase table of SMT to the performance of the proposed RNN and in turn improve the quality of the MT output. This work has been done as a part of the shared task problem provided by MTIL2017. We have constructed the traditional MT model using the Moses toolkit and have additionally enriched the language model using external data sets. Thereafter, we have ranked the phrase tables using an RNN encoder-decoder module created originally as a part of the GroundHog project of LISA lab.
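
The reranking step can be pictured as interpolating the SMT phrase score with the RNN encoder-decoder's score, along these lines; `rnn_score` here is a stand-in for the GroundHog module, and the interpolation weight is illustrative.

```python
import math

def rerank(phrase_options, rnn_score, alpha=0.5):
    """Rerank candidate target phrases for one source phrase by mixing
    the SMT log-probability with an RNN encoder-decoder log-score."""
    rescored = [
        (tgt, alpha * smt_logp + (1 - alpha) * rnn_score(tgt))
        for tgt, smt_logp in phrase_options
    ]
    return sorted(rescored, key=lambda p: p[1], reverse=True)

# Toy usage with a dummy scorer that prefers shorter phrases.
options = [("veedu", math.log(0.4)), ("veedu illam", math.log(0.35))]
print(rerank(options, rnn_score=lambda t: -len(t.split())))
```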

44 citations


Journal ArticleDOI
TL;DR: The proposed method, named BGSO, combines GA and PSO results using an algorithm called the average weighted combination method to produce an intermediate solution, demonstrating the applicability and usefulness of the method in the domain of FS.
Abstract: Feature selection (FS) is a technique that helps to find the most optimal feature subset for developing an efficient pattern recognition model. The use of genetic algorithm (GA) and particle swarm optimization (PSO) in the field of FS is profound. In this paper, we propose an insightful way to perform FS by amassing information from the candidate solutions produced by GA and PSO. Our aim is to combine the exploitation ability of GA with the exploration capacity of PSO. We name this new model binary genetic swarm optimization (BGSO). The proposed method initially lets GA and PSO run independently. To extract sufficient information from the feature subsets they produce, BGSO combines their results using an algorithm called the average weighted combination method to produce an intermediate solution. Thereafter, a local search called sequential one-point flipping is applied to refine the intermediate solution further and generate the final solution. BGSO is applied on 20 popular UCI datasets. The results were obtained using two classifiers, namely k-nearest neighbors (KNN) and multi-layer perceptron (MLP). The overall results and comparisons show that the proposed method outperforms the constituent algorithms on 16 and 14 datasets using KNN and MLP, respectively, whereas among the constituent algorithms, GA achieves the best classification accuracy on 2 and 7 datasets and PSO on 2 and 4 datasets, respectively, for the same classifiers. This proves the applicability and usefulness of the method in the domain of FS.
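
A sketch of the two combination steps named in the abstract is given below; the weighting-and-threshold scheme is an illustrative reading of the average weighted combination method, not the paper's exact formulation.

```python
import numpy as np

def average_weighted_combination(ga_pop, pso_pop, ga_fit, pso_fit, thr=0.5):
    """Fuse GA and PSO feature subsets (rows of 0/1 masks) into one
    intermediate subset: each feature's vote is weighted by the fitness
    of the solution that cast it, then thresholded."""
    pop = np.vstack([ga_pop, pso_pop]).astype(float)
    w = np.concatenate([ga_fit, pso_fit]).astype(float)
    votes = (pop * w[:, None]).sum(axis=0) / w.sum()
    return (votes >= thr).astype(int)

def sequential_one_point_flip(mask, fitness):
    """Local search: flip each bit once, keep the flip only if it helps."""
    mask = mask.copy()
    best = fitness(mask)
    for i in range(mask.size):
        mask[i] ^= 1
        f = fitness(mask)
        if f > best:
            best = f
        else:
            mask[i] ^= 1    # revert a non-improving flip
    return mask, best
```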

42 citations


Journal ArticleDOI
TL;DR: Three variations of the proposed hybrid algorithm provide a major performance enhancement in terms of best solutions and running time when compared to CS and SA as stand-alone algorithms, whereas the fourth variation provides a minor enhancement.
Abstract: Simulated annealing (SA) has proved its success as a single-state optimization search algorithm for both discrete and continuous problems. Cuckoo search (CS), in contrast, is one of the well-known population-based search algorithms used for optimizing problems with continuous domains. This paper provides a hybrid algorithm using the CS and SA algorithms. The main goal behind our hybridization is to improve the solutions generated by CS using SA so as to explore the search space in an efficient manner. More precisely, we introduce four variations of the proposed hybrid algorithm. The proposed variations, together with the original CS and SA algorithms, were evaluated and compared using 10 well-known benchmark functions. The experimental results show that three variations of the proposed algorithm provide a major performance enhancement in terms of best solutions and running time when compared to CS and SA as stand-alone algorithms, whereas the fourth variation provides a minor enhancement. Moreover, the experimental results show that the proposed hybrid algorithms also outperform some well-known optimization algorithms.
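
The core of such a hybridization is an SA pass that polishes a solution proposed by CS. A minimal sketch follows, assuming a minimization objective and Gaussian neighbourhood moves; the cooling schedule and step size are illustrative.

```python
import math
import random

def sa_refine(x, f, T0=1.0, cooling=0.95, steps=200, sigma=0.1):
    """Polish one cuckoo's solution with simulated annealing: accept
    worse neighbours with probability exp(-delta/T) while T decays."""
    cur, fc = list(x), f(x)
    best, fb = list(x), fc
    T = T0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, sigma) for xi in cur]
        fcand = f(cand)
        if fcand < fc or random.random() < math.exp(-(fcand - fc) / T):
            cur, fc = cand, fcand
            if fc < fb:
                best, fb = list(cur), fc
        T *= cooling
    return best, fb

# Toy usage on the sphere function.
sol, val = sa_refine([1.5, -2.0], lambda v: sum(t * t for t in v))
print(sol, val)
```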

32 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed method has a superior performance compared to previous steganography methods in terms of quality, with a high PSNR of 67.3638 dB and the lowest MSE of 0.2578.
Abstract: This paper proposes a method for data hiding in video that utilizes the least significant bit (LSB) method, improved by the knight tour algorithm, for concealing data inside an AVI video file, together with a key-function encryption method for encrypting the secret message. First, the secret message is encrypted using a mathematical equation whose key is a set of random numbers. These numbers differ in each implementation to warrant the safety of the hidden message and to increase the security of the secret message. Then, the cover video is converted from a set of frames into separate images to take advantage of the large size of the video file. Afterward, the knight tour algorithm is utilized for random selection of the pixels inside each frame used for embedding the secret message, to overcome the shortcoming of the conventional LSB method, which uses serial selection of pixels, and to increase the robustness and security of the proposed method. The encrypted secret message is then embedded inside the selected pixels utilizing the LSB method in bits 7 and 8. The experimental results show that the proposed method has a superior performance compared to previous steganography methods in terms of quality, with a high PSNR of 67.3638 dB and the lowest MSE of 0.2578. Furthermore, the method preserves security, as the secret message cannot be extracted without knowing the decoding rules.
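
The pixel-ordering idea can be sketched on a single grayscale frame as follows; the greedy Warnsdorff-style tour and the two-bit embedding are illustrative, and the paper additionally encrypts the message and works on AVI frames.

```python
import numpy as np

MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_order(h, w, start=(0, 0)):
    """Greedy (Warnsdorff-style) knight tour over the pixel grid, giving
    a non-sequential embedding order; it may not cover every pixel."""
    seen, pos, path = {start}, start, [start]
    while True:
        nxt = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES]
        nxt = [p for p in nxt if 0 <= p[0] < h and 0 <= p[1] < w and p not in seen]
        if not nxt:
            return path
        def onward(p):
            return sum(1 for dr, dc in MOVES
                       if 0 <= p[0] + dr < h and 0 <= p[1] + dc < w
                       and (p[0] + dr, p[1] + dc) not in seen)
        pos = min(nxt, key=onward)     # fewest onward moves first
        seen.add(pos)
        path.append(pos)

def embed(frame, bits):
    """Write two message bits into the two lowest bits of each visited pixel
    (the abstract's bits 7 and 8, counting from the most significant bit)."""
    out = frame.copy()
    order = knight_order(*frame.shape)
    for k in range(0, min(len(bits) - 1, 2 * len(order) - 1), 2):
        r, c = order[k // 2]
        out[r, c] = (int(out[r, c]) & 0xFC) | (bits[k] << 1) | bits[k + 1]
    return out

frame = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
stego = embed(frame, [1, 0, 1, 1, 0, 0, 1, 0])
```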

31 citations


Journal ArticleDOI
TL;DR: A neural machine translation system for four language pairs is designed with long short-term memory (LSTM) networks and bi-directional recurrent neural networks (Bi-RNN) and is able to perceive long-term contexts in the sentences.
Abstract: The introduction of deep neural networks to machine translation research has ameliorated conventional machine translation systems in multiple ways, specifically in terms of translation quality. The ability of deep neural networks to learn a sensible representation of words is one of the major reasons for this improvement. Although machine translation using deep neural architectures shows state-of-the-art results in translating European languages, we cannot directly apply these algorithms to Indian languages, for two main reasons: the unavailability of good corpora and the morphological richness of Indian languages. In this paper, we propose a neural machine translation (NMT) system for four language pairs: English–Malayalam, English–Hindi, English–Tamil, and English–Punjabi. We collected sentences from different sources and cleaned them to make four parallel corpora, one for each of the language pairs, and then used them to model the translation system. The encoder network in the NMT architecture was designed with long short-term memory (LSTM) networks and bi-directional recurrent neural networks (Bi-RNN). Evaluation of the obtained models was performed both automatically and manually. For automatic evaluation, the bilingual evaluation understudy (BLEU) score was used; for manual evaluation, three metrics were used: adequacy, fluency, and overall ranking. Analysis of the results showed that the presence of lengthy sentences in the English–Malayalam and English–Hindi corpora affected the translation. An attention mechanism was employed to address the problem of translating lengthy sentences (sentences containing more than 50 words), and the system was able to perceive long-term contexts in the sentences.
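
For reference, the attention mechanism mentioned here is usually the Bahdanau-style formulation (the abstract does not spell out the exact variant): at decoder step $t$, an alignment score is computed against each Bi-RNN encoder state $h_i$, normalized by a softmax, and used to form a context vector:

$$e_{ti} = a(s_{t-1}, h_i), \qquad \alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{k} \exp(e_{tk})}, \qquad c_t = \sum_{i} \alpha_{ti} h_i$$

where $s_{t-1}$ is the previous decoder state and $c_t$ is fed to the decoder at step $t$, so long sentences need not be squeezed into a single fixed-length vector.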

29 citations


Journal ArticleDOI
TL;DR: The model is evaluated by common measurement indices such as maximum absolute error, mean absolute error, and root mean square error; the results indicate that the predicted 4D trajectory is close to the real flight data, with a time error at the crossing point of no more than 1 min and an altitude error of no more than 50 m, which is of high accuracy.
Abstract: To solve the problem that traditional trajectory prediction methods cannot meet the requirements of high-precision, multi-dimensional, and real-time prediction, a 4D trajectory prediction model based on the backpropagation (BP) neural network was studied. First, the hierarchical clustering algorithm and the k-means clustering algorithm were adopted to analyze the total flight time. Then, cubic spline interpolation was used to interpolate the flight position to extract the main trajectory features. The 4D trajectory prediction model was based on the BP neural network. It was trained on Automatic Dependent Surveillance – Broadcast trajectories from Qingdao to Beijing and used to predict the flight trajectory at future moments. In this paper, the model is evaluated by common measurement indices such as maximum absolute error, mean absolute error, and root mean square error. An analysis and comparison of the predicted and actual over-point times and altitudes is also given. The results indicate that the predicted 4D trajectory is close to the real flight data: the time error at the crossing point is no more than 1 min and the altitude error at the crossing point is no more than 50 m, which is of high accuracy.
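
The interpolate-then-predict pipeline can be sketched for a single trajectory component as follows; the toy track, window length, and network size are placeholders, and scikit-learn's MLP stands in for the paper's BP network.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.neural_network import MLPRegressor

# Toy track: seconds vs. altitude (one of the 4D components).
t = np.array([0.0, 12.0, 27.0, 45.0, 60.0, 80.0])
alt = np.array([8000, 8150, 8420, 8700, 8860, 9000], dtype=float)

grid = np.arange(0.0, 80.0, 4.0)          # uniform time grid
alt_u = CubicSpline(t, alt)(grid)         # spline-interpolated positions

# Sliding window: previous 3 samples -> next sample.
X = np.array([alt_u[i:i + 3] for i in range(len(alt_u) - 3)])
y = alt_u[3:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X[-1:]))                # one-step-ahead altitude forecast
```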

29 citations


Journal ArticleDOI
TL;DR: This is an initial work on POS tagging of Malayalam Twitter data using deep learning sequential models; it was observed that increasing the number of hidden states improved the tagger model.
Abstract: The paper addresses the problem of part-of-speech (POS) tagging for Malayalam tweets. The conversational style of posts/tweets/text in social media data poses a challenge to using a general POS tagset for tagging the text. For the current work, a tagset containing 17 coarse tags was designed, and 9915 tweets were tagged manually for experiment and evaluation. The tagged data were evaluated using sequential deep learning methods: recurrent neural network (RNN), gated recurrent units (GRU), long short-term memory (LSTM), and bidirectional LSTM (BLSTM). The training of the model was performed on the tagged tweets at the word level and the character level. The experiments were evaluated using measures like precision, recall, f1-measure, and accuracy. It was found that the GRU-based deep learning sequential model gave the highest f1-measure at the word level, 0.9254, and the BLSTM-based model gave the highest f1-measure at the character level, 0.8739. To choose a suitable number of hidden states, we varied it over 4, 16, 32, and 64, and performed training for each. It was observed that increasing the number of hidden states improved the tagger model. This is an initial work on POS tagging of Malayalam Twitter data using deep learning sequential models.
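
A minimal Keras sketch of the word-level GRU tagger is shown below; the vocabulary size, embedding dimension, and sequence length are assumptions, while the 17-tag output and the 4-64 hidden-unit range come from the abstract.

```python
from tensorflow.keras import layers, models

VOCAB, TAGS, MAXLEN = 20000, 17, 40   # 17 coarse tags, as in the paper

tagger = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 100, mask_zero=True),
    layers.GRU(64, return_sequences=True),   # hidden units were varied over 4-64
    layers.TimeDistributed(layers.Dense(TAGS, activation="softmax")),
])
tagger.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
tagger.summary()
```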

28 citations


Journal ArticleDOI
TL;DR: The performance and security parameters, namely histogram, correlation distribution, correlation coefficient, entropy, number of pixel change rate, and unified averaged changed intensity, are computed to show the potential of the proposed encryption technique.
Abstract: The paper presents an approach to encrypting color images using bit-level permutation and alternate logistic maps. The proposed method initially segregates the color image into red, green, and blue channels, transposes the segregated channels from the pixel plane to the bit plane, and scrambles the bit-plane matrix using the Arnold cat map (ACM). Finally, the red, green, and blue channels of the scrambled image are confused and diffused by applying an alternate logistic map that uses a four-dimensional Lorenz system to generate a pseudorandom number sequence for the three channels. The parameters of the ACM are generated with the help of the Logistic-Sine map and the Logistic-Tent map, and the intensity values of scrambled pixels are altered by the Tent-Sine map. One-dimensional and two-dimensional logistic maps are used for the alternate logistic map implementation. The performance and security parameters, namely histogram, correlation distribution, correlation coefficient, entropy, number of pixel change rate, and unified averaged changed intensity, are computed to show the potential of the proposed encryption technique.
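
The diffusion stage of such schemes can be sketched with a single 1-D logistic map driving an XOR keystream; this is a simplified stand-in for the paper's chained maps, ACM scrambling, and Lorenz-based sequence, and the seeds are arbitrary.

```python
import numpy as np

def logistic_keystream(n, x0=0.4170, r=3.99):
    """Iterate the 1-D logistic map x <- r*x*(1-x) in the chaotic regime
    and quantize the orbit to bytes."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def diffuse_channel(channel, x0):
    """XOR one colour channel with a keystream seeded by x0."""
    ks = logistic_keystream(channel.size, x0).reshape(channel.shape)
    return channel ^ ks

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
cipher = np.dstack([diffuse_channel(rgb[..., i], x0)   # one seed per channel
                    for i, x0 in enumerate((0.41, 0.53, 0.67))])
```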

Journal ArticleDOI
TL;DR: The multi-reservoir systems optimization problem is tackled using β-hill climbing, and a comparative evaluation against other methods from the literature shows the competitiveness of the proposed algorithm.
Abstract: The multi-reservoir systems optimization problem requires defining a set of rules to recognize the amount of water stored and released in accordance with the system constraints. Traditional methods are not suitable for complex multi-reservoir systems with high dimensionality. Recently, metaheuristic algorithms such as evolutionary algorithms and local search-based algorithms have been successfully used to solve multi-reservoir systems. β-hill climbing is a recent metaheuristic local search-based algorithm. In this paper, the multi-reservoir systems optimization problem is tackled using β-hill climbing. To validate the proposed method, four-reservoir systems used in the literature to evaluate such algorithms are utilized. A comparative evaluation is conducted to assess the proposed method against other methods found in the literature. The obtained results show the competitiveness of the proposed algorithm.
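
A compact sketch of β-hill climbing on a continuous objective is given below; the bandwidth N and mutation rate beta are illustrative, and a toy sphere function stands in for the reservoir-release objective.

```python
import random

def beta_hill_climbing(f, lo, hi, dim, iters=5000, N=0.05, beta=0.05, seed=1):
    """β-hill climbing: a hill-climbing step (N-operator) plus a mutation
    that resets each variable with probability beta (β-operator); the
    candidate replaces the incumbent only if it improves the objective."""
    rnd = random.Random(seed)
    x = [rnd.uniform(lo, hi) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rnd.uniform(-N, N) * (hi - lo) for xi in x]      # N-operator
        cand = [rnd.uniform(lo, hi) if rnd.random() < beta else ci    # β-operator
                for ci in cand]
        cand = [min(max(ci, lo), hi) for ci in cand]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

# Toy usage on the sphere function (a reservoir objective would replace this).
print(beta_hill_climbing(lambda v: sum(t * t for t in v), -5.0, 5.0, dim=4))
```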

Journal ArticleDOI
TL;DR: A simple and effective new algorithm for image encryption using a chaotic system based on magic squares to generate a random key to encrypt any color image is introduced.
Abstract: This article introduces a simple and effective new algorithm for image encryption using a chaotic system that is based on magic squares. This novel 3D chaotic system is invoked to generate a random key to encrypt any color image. A number of chaotic keys equal to the size of the image are generated by the chaotic system, arranged into a matrix, and then divided into non-overlapping submatrices. The image to be encrypted is also divided into sub-images, and each sub-image is multiplied by a magic matrix to produce another set of matrices. The XOR operation is then applied to the two resulting sets of matrices to produce the encrypted image. The strength of the encryption method is tested in two ways. The first is a security analysis that includes key space analysis and sensitivity analysis. The second is a statistical analysis that includes correlation coefficients, information entropy, histograms, and analysis of differential attacks. Finally, the encryption and decryption times were computed and show very good results.
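
The block operation described here (sub-image times magic matrix, then XOR with key bytes) can be sketched as follows; the magic-square construction and the random key are illustrative stand-ins for the paper's 3D chaotic key generator.

```python
import numpy as np

def magic_square(n):
    """Odd-order magic square via the Siamese method (illustrative key matrix)."""
    M = np.zeros((n, n), dtype=np.int64)
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        M[r, c] = k
        r2, c2 = (r - 1) % n, (c + 1) % n     # move up-right...
        if M[r2, c2]:
            r2, c2 = (r + 1) % n, c           # ...or down if occupied
        r, c = r2, c2
    return M

# One sub-image block is multiplied by the magic matrix (mod 256), then
# XORed with a block of chaotic key bytes, as the abstract describes.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, (3, 3))          # 3x3 sub-image
keys = rng.integers(0, 256, (3, 3))           # stand-in for the 3D chaotic keys
cipher = ((block @ magic_square(3)) % 256) ^ keys
```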

Journal ArticleDOI
TL;DR: The various research issues and solutions that can be useful for the researchers to accomplish further research on glaucoma detection are presented.
Abstract: Glaucoma is a severe visual disease that damages the eyes irreversibly by affecting the optic nerve fibers and astrocytes. Consequently, the early detection of glaucoma plays a vital role in the medical field. The literature presents various techniques for the early detection of glaucoma. Among these, retinal image-based detection plays a major role, as it is a noninvasive method of detection. While detecting glaucoma using retinal images, various medical features of the eyes, such as the retinal nerve fiber layer, cup-to-disc ratio, apex point, optic disc, and optic nerve head, and image features, such as Haralick texture, higher-order spectra, and wavelet energy, are used. In this paper, a review and study of the different techniques of glaucoma detection using retinal fundus images were conducted. Accordingly, 45 research papers were reviewed, and an analysis was provided based on the extracted features, classification accuracy, and the data sets used, such as the DIARETDB1, MESSIDOR, IPN, ZEISS, local, and real data sets. Finally, we present the various research issues and solutions that can help researchers accomplish further research on glaucoma detection.

Journal ArticleDOI
TL;DR: A simple, low-cost human-machine interface system to help wheelchair-bound people control their wheelchair using several control sources is presented; the results showed that a person's thoughts can be used to seamlessly control his/her wheelchair.
Abstract: Recent research studies have shown that brain-controlled systems/devices are a breakthrough technology. Such devices can provide disabled people with the power to control the movement of a wheelchair using different signals (e.g. EEG signals, head movements, and facial expressions). With this technology, disabled people can remotely steer a wheelchair, a computer, or a tablet. This paper introduces a simple, low-cost human-machine interface system to help wheelchair-bound people control their wheelchair using several control sources. To achieve this aim, a laptop was installed on a wheelchair in front of the sitting person, and the 14-electrode Emotiv EPOC headset was used to collect the person's signals from the scalp surface. The superficially picked-up signals, containing brain thoughts, head gestures, and facial emotions, were electrically encoded and then wirelessly sent to a personal computer to be interpreted and translated into useful control instructions. Using these signals, two wheelchair control modes were proposed: automatic (using single-modal and multimodal approaches) and manual. The automatic mode controller was accomplished using a software controller (Arduino), whereas a simple hardware controller was used for the manual mode. The proposed solution was designed using a wheelchair, an Emotiv EPOC EEG headset, an Arduino microcontroller, and the Processing language. It was then tested by completely wheelchair-bound volunteers on different levels of trajectories. The results showed that a person's thoughts can be used to seamlessly control his/her wheelchair and that the proposed system can be configured to suit many levels and degrees of disability.

Journal ArticleDOI
TL;DR: An approach that classifies the sentiment polarity of Bengali tweets using a deep neural network consisting of one convolutional layer, one hidden layer, and a soft-max output layer is presented.
Abstract: Sentiment polarity detection is one of the most popular sentiment analysis tasks. Sentiment polarity detection in tweets is a more difficult task than in review documents, because tweets are relatively short and contain limited contextual information. Although the amount of blog posts, tweets, and comments in Indian languages is rapidly increasing on the web, research on sentiment analysis in Indian languages is at an early stage. In this paper, we present an approach that classifies the sentiment polarity of Bengali tweets using a deep neural network consisting of one convolutional layer, one hidden layer, and one output layer, which is a soft-max layer. Our proposed approach has been tested on the Bengali tweet dataset released for the Sentiment Analysis in Indian Languages contest 2015. We have compared the performance of our proposed convolutional neural network (CNN)-based model with a sentiment polarity detection model that uses deep belief networks (DBN). Our experiments reveal that the performance of our proposed CNN-based system is better than that of our implemented DBN-based system and some existing Bengali sentiment polarity detection systems.
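
The stated architecture (one convolutional layer, one hidden layer, a soft-max output) maps directly onto a short Keras model; the vocabulary size, sequence length, filter settings, and the assumed three polarity classes are placeholders.

```python
from tensorflow.keras import layers, models

VOCAB, MAXLEN, CLASSES = 10000, 40, 3   # 3 classes assumed (pos/neg/neutral)

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 64),
    layers.Conv1D(128, 5, activation="relu"),    # the single convolutional layer
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),         # the single hidden layer
    layers.Dense(CLASSES, activation="softmax")  # soft-max output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```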

Journal ArticleDOI
TL;DR: This work has trained, tested, and analyzed NMT systems for English to Tamil, English to Hindi, and English to Punjabi translations, and evaluated the quality of translation in terms of adequacy, fluency, and correspondence with human-predicted translation.
Abstract: Machine translation bridges communication barriers and eases interaction among people having different linguistic backgrounds. Machine translation mechanisms exploit a range of techniques and linguistic resources for translation prediction. Neural machine translation (NMT), in particular, seeks optimality in translation through the training of a neural network, using a parallel corpus having a considerable number of instances in the form of parallel running source and target sentences. The easy availability of parallel corpora for major Indian languages and the ability of NMT systems to better analyze context and produce fluent translations make NMT a prominent choice for the translation of Indian languages. We have trained, tested, and analyzed NMT systems for English to Tamil, English to Hindi, and English to Punjabi translations. Predicted translations have been evaluated using the Bilingual Evaluation Understudy score and by human evaluators to assess the quality of translation in terms of its adequacy, fluency, and correspondence with human-predicted translation.
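
BLEU scoring of predicted translations can be reproduced with NLTK's implementation; the Hindi sentence pair below is a made-up toy example, and smoothing is applied because short segments often have zero higher-order n-gram matches.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each hypothesis is scored against a list of reference translations.
references = [[["यह", "एक", "अच्छा", "दिन", "है"]]]
hypotheses = [["यह", "अच्छा", "दिन", "है"]]
chencherry = SmoothingFunction()
print(corpus_bleu(references, hypotheses,
                  smoothing_function=chencherry.method1))
```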

Journal ArticleDOI
Ritam Guha, Manosij Ghosh, Pawan Kumar Singh, Ram Sarkar, Mita Nasipuri
TL;DR: Comparison of the results obtained by the proposed model with existing HMOGA and MOGA techniques clearly indicates the superiority of M-HMOGA over both of its ancestors.
Abstract: The feature selection process is very important in the field of pattern recognition: it selects the informative features so as to reduce the curse of dimensionality, thus improving the overall classification accuracy. In this paper, a new feature selection approach named Memory-Based Histogram-Oriented Multi-objective Genetic Algorithm (M-HMOGA) is introduced to identify the informative feature subset to be used for a pattern classification problem. The proposed M-HMOGA approach is applied to two recently used feature sets, namely Mojette transform and Regional Weighted Run Length features. The experiments are carried out on Bangla, Devanagari, and Roman numeral datasets, which are the three most popular scripts used in the Indian subcontinent. In-house Bangla and Devanagari script datasets and the Competition on Handwritten Digit Recognition (HDRC) 2013 Roman numeral dataset are used for evaluating our model. Moreover, as proof of robustness, we have applied an innovative approach of using different datasets for training and testing: the in-house Bangla and Devanagari script datasets are used for training the model, which is then tested on the Indian Statistical Institute numeral datasets; for Roman numerals, the HDRC 2013 dataset is used for training and the Modified National Institute of Standards and Technology dataset for testing. Comparison of the results obtained by the proposed model with existing HMOGA and MOGA techniques clearly indicates the superiority of M-HMOGA over both of its ancestors. Moreover, the use of K-nearest neighbor as well as multi-layer perceptron as classifiers speaks for the classifier-independent nature of M-HMOGA. The proposed M-HMOGA model uses only about 45–50% of the total feature set to achieve around a 1% increase in classification ability when the same datasets are partitioned for training and testing, and only 35–45% of the features to achieve a 2–3% increase when different datasets are used for training and testing, relative to using all the features for classification.

Journal ArticleDOI
TL;DR: Four research issues, namely hiding failure rate, information preservation rate, false rule generation, and degree of modification, are minimized using the adopted sanitization and restoration processes.
Abstract: Privacy-preserving data mining (PPDM) is a novel approach that has emerged in the market to take care of privacy issues. The intention of PPDM is to build up data-mining techniques without raising the risk of mishandling of the data exploited to generate those schemes. The conventional works include numerous techniques, most of which employ some form of transformation on the original data to guarantee privacy preservation. However, these schemes are quite complex and memory intensive, which has restricted their adoption. Hence, this paper develops a novel PPDM technique that involves two phases, namely data sanitization and data restoration. Initially, the association rules are extracted from the database before proceeding with the two phases. In both the sanitization and restoration processes, key extraction plays a major role; the key is selected optimally using the Opposition Intensity-based Cuckoo Search Algorithm, a modified form of the Cuckoo Search Algorithm. Here, four research issues, namely hiding failure rate, information preservation rate, false rule generation, and degree of modification, are minimized using the adopted sanitization and restoration processes.

Journal ArticleDOI
TL;DR: This paper suggests a novel IDS based on a combination of leader-based k-means clustering (LKM) and an optimal fuzzy logic system; the obtained results demonstrate the superiority of the suggested method in comparison with other methods.
Abstract: In cloud security, the intrusion detection system (IDS) is one of the challenging research areas. In a cloud environment, security incidents such as denial of service, scanning, malware code injection, viruses, worms, and password cracking are becoming common. These attacks can harm the company and may cause financial loss if not detected in time. Therefore, securing the cloud from these types of attack is very much needed. To address this problem, this paper suggests a novel IDS based on a combination of leader-based k-means clustering (LKM) and an optimal fuzzy logic system. Here, at first, the input dataset is grouped into clusters with the use of LKM. Then, the cluster data are fed to the fuzzy logic system (FLS). Normal and abnormal data are distinguished by the FLS, whose training is performed by the grey wolf optimization algorithm through optimizing the rules. A cloud simulator and the NSL-Knowledge Discovery and DataBase (KDD) Cup 99 dataset are used to evaluate the suggested method. Precision, recall, and F-measure are used as evaluation criteria. The obtained results demonstrate the superiority of the suggested method in comparison with other methods.

Journal ArticleDOI
TL;DR: An approach to determine the sentiments of tweets in one of the Indian languages (Hindi, Bengali, and Tamil) using thirty-nine sequential models built from three different neural network layers with optimum parameter settings to avoid over-fitting and error accumulation is proposed.
Abstract: Sentiment analysis refers to determining the polarity of the opinions represented by text. The paper proposes an approach to determine the sentiments of tweets in one of the Indian languages (Hindi, Bengali, and Tamil). Thirty-nine sequential models have been created using three different neural network layers [recurrent neural networks (RNNs), long short-term memory (LSTM), convolutional neural network (CNN)] with optimum parameter settings (to avoid over-fitting and error accumulation). These sequential models have been investigated for each of the three languages. The proposed sequential models are examined to identify how the hidden layers affect the overall performance of the approach. A comparison has also been performed with existing approaches to find out whether neural networks have an added advantage over traditional machine learning techniques.

Journal ArticleDOI
TL;DR: This article addresses language identification at the word level in Indian social media corpora taken from Facebook, Twitter and WhatsApp posts that exhibit code-mixing between English-Hindi, English-Bengali, as well as a blend of both language pairs.
Abstract: This article addresses language identification at the word level in Indian social media corpora taken from Facebook, Twitter, and WhatsApp posts that exhibit code-mixing between English-Hindi and English-Bengali, as well as a blend of both language pairs. Code-mixing is a fusion of multiple languages previously associated mainly with spoken language, but which social media users also deploy when communicating in ways that tend to be rather casual. The coarse nature of code-mixed social media text makes language identification challenging. Here, the performance of deep learning on this task is compared to feature-based learning, with two recurrent neural network techniques, Long Short-Term Memory (LSTM) and bidirectional LSTM, being contrasted with a Conditional Random Fields (CRF) classifier. The results show the deep learners outscoring the CRF, with the bidirectional LSTM demonstrating the best language identification performance.
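
The feature-based CRF baseline can be sketched with sklearn-crfsuite, one common CRF implementation; the handcrafted features and the toy Bengali-English example below are illustrative, not the paper's feature set.

```python
import sklearn_crfsuite

def token_feats(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(), "pre2": w[:2], "suf2": w[-2:],
        "has_digit": any(ch.isdigit() for ch in w),
        "prev": sent[i - 1].lower() if i else "<s>",
    }

sents = [["amar", "phone", "ta", "awesome"]]   # toy code-mixed sentence
tags = [["BN", "EN", "BN", "EN"]]              # word-level language labels
X = [[token_feats(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))
```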

Journal ArticleDOI
TL;DR: A hybrid approach based on particle swarm optimization and twin support vector regression for forecasting wind speed (PSO-TSVR) is proposed; the computational results prove that the proposed approach achieves better forecasting accuracy and outperforms the comparison algorithms.
Abstract: Wind energy is considered one of the renewable energy sources that minimize the cost of electricity production. This article proposes a hybrid approach based on particle swarm optimization (PSO) and twin support vector regression (TSVR) for forecasting wind speed (PSO-TSVR). To enhance the forecasting accuracy, TSVR was utilized to forecast the wind speed, and the optimal settings of the TSVR parameters were carefully tuned by PSO. Moreover, to estimate the performance of the suggested approach, three benchmark wind speed datasets from OpenEI were used as a case study. The experimental results revealed that the optimized PSO-TSVR approach is able to forecast wind speed with an accuracy of 98.9%. Further, the PSO-TSVR approach has been compared with two well-known algorithms: genetic algorithm with TSVR, and native TSVR using a radial basis kernel function. The computational results proved that the proposed approach achieves better forecasting accuracy and outperforms the comparison algorithms.
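
The PSO-over-regressor-parameters idea can be sketched as follows; scikit-learn's epsilon-SVR stands in for TSVR (which scikit-learn does not provide), the search ranges and swarm constants are illustrative, and a synthetic dataset replaces the OpenEI wind data.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def pso_tune(X, y, n_particles=8, iters=15, seed=0):
    """PSO over log10(C) and log10(gamma) of an SVR, maximizing
    cross-validated negative MSE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):                      # negated MSE: larger is better
        model = SVR(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    pbest = pos.copy()
    pval = np.array([fitness(p) for p in pos])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        better = vals > pval
        pbest[better], pval[better] = pos[better], vals[better]
        gbest = pbest[pval.argmax()].copy()
    return 10 ** gbest[0], 10 ** gbest[1]    # tuned (C, gamma)

X, y = make_regression(n_samples=80, n_features=4, noise=5.0, random_state=0)
print(pso_tune(X, y))
```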

Journal ArticleDOI
TL;DR: A novel algorithm is designed to identify the start points within the liver section automatically, and the fast marching method is applied at the start points, growing outwardly to accurately detect the liver boundary.
Abstract: Liver segmentation from abdominal computed tomography (CT) scan images is a complicated and challenging task, owing to the haziness of the liver pixel range, neighboring organs with the same intensity level, and the presence of noise. Segmentation is necessary in the detection, identification, analysis, and measurement of objects in CT scan images. A novel approach is proposed to meet the challenges of extracting liver images from abdominal CT scan images. The proposed approach consists of three phases: (1) preprocessing, (2) CT scan image transformation to the neutrosophic set, and (3) postprocessing. In preprocessing, noise in the CT scan is reduced by a median filter. A "new structure" is introduced to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the true subset (T), the false subset (F), and the indeterminacy subset (I). This transform approximately extracts the liver structure. In the postprocessing phase, a morphological operation is performed on the indeterminacy subset (I). A novel algorithm is designed to identify the start points within the liver section automatically. The fast marching method is applied at the start points, growing outwardly to detect the accurate liver boundary. The evaluation of the proposed segmentation algorithm is performed using area- and distance-based metrics.
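
A common way to build the (T, I, F) subsets for a neutrosophic image uses the local mean and local inhomogeneity, sketched below; the paper's "new structure" may differ in detail, and the window size and random test slice are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def to_neutrosophic(img, w=5):
    """Map a grayscale slice to (T, I, F): T from the local mean,
    I from local inhomogeneity, F = 1 - T."""
    g = median_filter(img.astype(float), size=3)   # preprocessing: median filter
    gbar = uniform_filter(g, size=w)               # local mean
    T = (gbar - gbar.min()) / (np.ptp(gbar) + 1e-9)
    delta = np.abs(g - gbar)                       # deviation from local mean
    I = (delta - delta.min()) / (np.ptp(delta) + 1e-9)
    return T, I, 1.0 - T

slice_ = np.random.rand(64, 64) * 255              # stand-in for a CT slice
T, I, F = to_neutrosophic(slice_)
```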

Journal ArticleDOI
TL;DR: An improved correlation coefficient of the intuitionistic fuzzy sets is defined, and it can overcome some drawbacks of the existing ones, and the properties of this correlation coefficient are discussed.
Abstract: The intuitionistic fuzzy set is a useful tool to deal with vagueness and uncertainty. The correlation coefficient of intuitionistic fuzzy sets is an important measure in intuitionistic fuzzy set theory and has great practical potential in a variety of areas, such as decision making, medical diagnosis, and pattern recognition. In this paper, an improved correlation coefficient of intuitionistic fuzzy sets is defined that overcomes some drawbacks of the existing ones. The properties of this correlation coefficient are discussed. Then, the generalization of the coefficient to interval-valued intuitionistic fuzzy sets is introduced. Finally, two examples of the application of the proposed correlation coefficient in medical diagnosis and clustering are shown to illustrate its advantages over the existing methods.
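
For context, a representative existing definition that such improvements target is the Gerstenkorn–Mańko correlation coefficient. For IFSs $A$ and $B$ on $X = \{x_1, \dots, x_n\}$ with membership $\mu$ and non-membership $\nu$:

$$\rho(A,B) = \frac{\sum_{i=1}^{n}\left[\mu_A(x_i)\,\mu_B(x_i) + \nu_A(x_i)\,\nu_B(x_i)\right]}{\sqrt{\sum_{i=1}^{n}\left[\mu_A^2(x_i) + \nu_A^2(x_i)\right]}\,\sqrt{\sum_{i=1}^{n}\left[\mu_B^2(x_i) + \nu_B^2(x_i)\right]}}$$

One frequently cited drawback is that this form ignores the hesitancy degree $\pi = 1 - \mu - \nu$, which is the kind of shortcoming improved coefficients are designed to overcome.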

Journal ArticleDOI
TL;DR: This study explores the non-linear relation between different biomarkers (SPECT + biological) using deep learning and multivariate logistic regression and results indicate that this investigated approach can differentiate subjects with 100% accuracy.
Abstract: Precise and timely diagnosis of Parkinson's disease is important for controlling its progression among subjects. Currently, a neuroimaging technique called dopaminergic imaging, which uses single photon emission computed tomography (SPECT) with 123I-Ioflupane, is popular among clinicians for detecting Parkinson's disease in its early stages. Unlike other studies, which consider only low-level features like gray matter, white matter, or cerebrospinal fluid, this study explores the non-linear relation between different biomarkers (SPECT + biological) using deep learning and multivariate logistic regression. Striatal binding ratios are obtained using 123I-Ioflupane SPECT scans from four brain regions and are further integrated with five biological biomarkers to increase the diagnostic accuracy. Experimental results indicate that the investigated approach can differentiate subjects with 100% accuracy. The obtained results outperform the ones reported in the literature. Furthermore, a logistic regression model has been developed for estimating the Parkinson's disease onset probability. Such models may aid clinicians in diagnosing this disease.

Journal ArticleDOI
TL;DR: The trapezoidal linguistic cubic fuzzy TOPSIS method is defined and used to solve multi-criteria decision-making (MCDM) problems, and a new ranking method for trapezoidal linguistic cubic fuzzy numbers (TrLCFNs) is used to rank the alternatives.
Abstract: The aim of this paper is to define some new operation laws for trapezoidal linguistic cubic fuzzy numbers and the Hamming distance. Furthermore, we define and use the trapezoidal linguistic cubic fuzzy TOPSIS method to solve multi-criteria decision-making (MCDM) problems. A new ranking method for trapezoidal linguistic cubic fuzzy numbers (TrLCFNs) is used to rank the alternatives. Finally, an illustrative example is given to verify and demonstrate the practicality and effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: To forecast the air quality index (AQI), artificial neural networks trained with conjugate gradient descent (CGD) along with regression models such as multiple linear regression (MLR) and support vector regression (SVR) are implemented.
Abstract: Air is the most essential constituent for the sustenance of life on earth. The air we inhale has a tremendous impact on our health and well-being; hence, it is always advisable to monitor the quality of air in our environment. To forecast the air quality index (AQI), artificial neural networks (ANNs) trained with conjugate gradient descent (CGD), such as the multilayer perceptron (MLP), cascade forward neural network, Elman neural network, radial basis function (RBF) neural network, and nonlinear autoregressive model with exogenous input (NARX), are implemented, along with regression models such as multiple linear regression (MLR), using batch gradient descent (BGD), stochastic gradient descent (SGD), mini-BGD (MBGD), and CGD algorithms, and support vector regression (SVR). In these models, the AQI is the dependent variable and the concentrations of NO2, CO, O3, PM2.5, SO2, and PM10 for the years 2010–2016 in Houston and Los Angeles are the independent variables. For the final forecast, several ensemble models of individual neural network predictors and individual regression predictors are presented. The proposed approach performs with the highest efficiency in terms of forecasting the air quality index.
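
An ensemble of heterogeneous regressors of the kind described can be sketched by averaging their forecasts; the scikit-learn models and the synthetic pollutant data below are illustrative stand-ins for the paper's specific networks and the Houston/Los Angeles measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

class MeanEnsemble:
    """Average the forecasts of several fitted regressors."""
    def __init__(self, models):
        self.models = models
    def fit(self, X, y):
        for m in self.models:
            m.fit(X, y)
        return self
    def predict(self, X):
        return np.mean([m.predict(X) for m in self.models], axis=0)

# X: pollutant concentrations (NO2, CO, O3, PM2.5, SO2, PM10); y: AQI.
X = np.random.rand(200, 6)
y = X @ np.array([30, 20, 15, 40, 10, 25]) + np.random.randn(200)
ens = MeanEnsemble([LinearRegression(),
                    MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000),
                    SVR()]).fit(X, y)
print(ens.predict(X[:3]))
```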

Journal ArticleDOI
TL;DR: A robust and lossless patient medical information sharing system using a crypto-watermarking method is proposed, which securely shares three types of patient information, the medical image, electronic health record (EHR), and face image, from one hospital to another.
Abstract: A reliable medical image management system must provide proper security for patient information. Protecting the medical information of patients is a major concern in all hospitals. Digital watermarking is a procedure prevalently used to secure the confidentiality of medical information and maintain it, which upgrades patient health awareness. To protect the medical information, a robust and lossless patient medical information sharing system using a crypto-watermarking method is proposed. The proposed system consists of two phases: (i) embedding and (ii) extraction. In this paper, we securely share three types of patient information, the medical image, electronic health record (EHR), and face image, from one hospital to another. Initially, all three inputs are encrypted and the information is concatenated. In order to enhance the robustness of the crypto-watermarking system, the obtained bit stream is compressed, and the compressed bit stream is embedded into the cover image. The same process is repeated for extraction. The experimentation is carried out using different medical images with EHRs, and the effectiveness of the proposed algorithm is analyzed with the help of the peak signal-to-noise ratio.

Journal ArticleDOI
TL;DR: The proposed work implements and optimizes the performance of a recently proposed chaos-deoxyribonucleic acid (DNA)-based hybrid approach to encrypt images using a bi-objective genetic algorithm (GA) optimization.
Abstract: The paper implements and optimizes the performance of a recently proposed chaos-deoxyribonucleic acid (DNA)-based hybrid approach to encrypt images using a bi-objective genetic algorithm (GA) optimization. Image encryption is a multi-objective problem, and optimizing it using one fitness function may not be a good choice, as this can yield different outcomes with respect to the other fitness functions. The proposed work initially encrypts the given image using a chaotic function and DNA masks. In the second stage, the GA then optimizes the encrypted data using two fitness functions at a time: entropy paired with the correlation coefficient (CC), with the unified average changing intensity (UACI), or with the number of pixel change rate (NPCR). The bi-objective optimization using entropy with CC shows a significant performance gain over single-objective GA optimization for image encryption.
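
The NPCR and UACI measures used as fitness components have standard closed forms, computed as follows; the two random images are placeholders for a cipher-image pair differing in one plaintext pixel.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of differing pixels; UACI: mean absolute intensity
    difference as a percentage of the 8-bit range."""
    c1 = c1.astype(np.int32)
    c2 = c2.astype(np.int32)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(npcr_uaci(a, b))   # ideal cipher pairs approach ~99.6% and ~33.46%
```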