
Showing papers by "K. P. Soman published in 2021"


Journal ArticleDOI
TL;DR: Consistently better results than those in recent literature demonstrate that EEG spectrum image generation using VMD-STFT is a promising method for the time-frequency analysis of EEG signals.
Abstract: A novel approach of preprocessing EEG signals by generating spectrum images for effective Convolutional Neural Network (CNN) based classification for Motor Imagery (MI) recognition is proposed. The approach involves extracting the Variational Mode Decomposition (VMD) modes of EEG signals, from which the Short Time Fourier Transforms (STFT) of all the modes are arranged to form EEG spectrum images. The generated EEG spectrum images are provided as input images to the CNN. Two generic CNN architectures for MI classification (EEGNet and DeepConvNet) and two architectures for pattern recognition (AlexNet and LeNet) are used in this study. Among the four architectures, EEGNet provides average accuracies of 91.37%, 94.41%, 85.67% and 90.21% for the four datasets used to validate the proposed approach. Consistently better results in comparison with recent literature demonstrate that EEG spectrum image generation using VMD-STFT is a promising method for the time-frequency analysis of EEG signals.
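The pipeline described above can be sketched in a few lines; a minimal Python illustration using `scipy.signal.stft`, in which three narrowband sinusoids stand in for the VMD modes (the VMD step itself is omitted, and the sampling rate and window length are made-up values, not the paper's settings):

```python
import numpy as np
from scipy.signal import stft

def modes_to_spectrum_image(modes, fs=250, nperseg=64):
    """Stack the STFT magnitudes of the decomposition modes into one image."""
    panels = []
    for m in modes:
        f, t, Z = stft(m, fs=fs, nperseg=nperseg)
        panels.append(np.abs(Z))          # magnitude spectrogram of one mode
    return np.vstack(panels)              # modes stacked along the frequency axis

# toy stand-ins for VMD modes: three narrowband components (8, 13, 30 Hz)
fs, n = 250, 1000
t = np.arange(n) / fs
modes = [np.sin(2 * np.pi * f0 * t) for f0 in (8, 13, 30)]
img = modes_to_spectrum_image(modes, fs=fs)
print(img.shape)  # 3 modes x 33 frequency bins = 99 rows
```

The resulting 2-D array plays the role of the "EEG spectrum image" fed to the CNN.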

27 citations


Journal ArticleDOI
12 Aug 2021
TL;DR: Morphological synthesis is one of the main components of Machine Translation (MT) frameworks, especially when any one or both of the source and target languages are morphologically rich.
Abstract: Morphological synthesis is one of the main components of Machine Translation (MT) frameworks, especially when any one or both of the source and target languages are morphologically rich. Morphologi...

6 citations


Book ChapterDOI
01 Jan 2021
TL;DR: This study aims to develop a sample knowledge-based MT system that is able to handle the linguistic specificities of the source (SL) and target (TL) languages.
Abstract: Functional and content words are highly expressive. Functional words such as adpositions, auxiliary verbs, modal auxiliaries, and helping verbs are polysemic and synonymic in nature. They play a vital role in determining the syntax and semantics of natural languages. In the context of machine translation (MT), the issues related to them still remain a major problem. This study aims to develop a sample knowledge-based MT system that is able to handle the linguistic specificities of the source (SL) and target (TL) languages.

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, the authors discuss the role of deep learning in diabetes detection and management and review the deep learning algorithms used for non-invasive computer-aided diabetes diagnosis from HRV input.
Abstract: Diabetes is a chronic condition that is highly prevalent globally. It is incurable and has to be managed well; otherwise, it leads to serious complications like heart disease, stroke, kidney failure and diabetic retinopathy. Hyperglycaemia associated with diabetes leads to cardiovascular malfunctioning after the exclusion of other causes like high blood pressure, high cholesterol and obesity. Hence, timely detection and efficient control of diabetes are of extreme significance. We discuss here the role of deep learning in diabetes detection and management. Diabetic neuropathy results in a reduction of heart rate variability (HRV); reduced HRV is thus a marker of diabetic neuropathy. The HRV parameter is derived from electrocardiography (ECG) signals, which are recorded non-invasively. Anomalies in different parts of the ECG waveform (like the P-wave, ST segment etc.) clearly indicate diseases affecting the heart. These parameters, other than heart rate variations, depend on the shape and size of the ECG waveform, and it is extremely difficult to analyse such ECG morphological changes. Hence, many artificial-intelligence-based works on diabetes detection have selected HRV, among the several available ECG-derived features, as the parameter for assessing cardiac autonomic dysfunction. Deep learning techniques are now increasingly deployed in biomedical signal analysis. As the intricacy and volume of the data grow, considerable thought and research are needed just to decide which features should be extracted to obtain the desired diagnostic information. Deep learning, unlike classical machine learning, is free of this manual feature-selection process: deep network units perform feature extraction and transformation implicitly.
Deep learning is thus capable of digging out even the minute information about subtle diabetes-related changes in HRV signals, providing extremely high detection accuracy. Hybrid deep learning networks (a cascade of two deep learning architectures) and very large training datasets can further improve the efficacy of abnormality detection. The different deep learning algorithms used for non-invasive, computer-aided diabetes diagnosis from HRV input are briefly discussed in this book chapter.
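As a concrete illustration of the HRV parameter discussed above, here is one standard time-domain HRV measure, RMSSD, computed from a toy RR-interval series (the interval values are invented for illustration; the chapter does not prescribe a specific HRV formula):

```python
import numpy as np

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms).
    Lower values indicate reduced heart rate variability."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(d ** 2))

# toy RR intervals in milliseconds (made up for illustration)
print(round(rmssd([800, 810, 790]), 2))  # 15.81
```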

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this article, an attention-based CNN-Bi-LSTM model was used for feature generation from Hindi-English code-mixed texts to classify them to various sentiments like positive, neutral and negative using deep learning techniques.
Abstract: Social media has experienced an enormous amount of activity from millions of people across the globe over the last few years. This has resulted in the accumulation of a substantial amount of textual data and opened up several opportunities for analysis. Sentiment analysis and classification is one such task, in which the opinion expressed in a text is identified and classified accordingly. This becomes even trickier in code-mixed text due to its free style of writing, which lacks a proper syntactic structure. In this paper, we worked on such Hindi–English code-mixed texts obtained from the SentiMix shared task of SemEval-2020. We created a novel customized embedding model for feature generation from Hindi–English code-mixed texts to classify them into sentiments such as positive, neutral and negative using deep learning techniques. It is observed that the attention-based CNN-Bi-LSTM model achieved the best performance of all the models, with a 70.32% F1-score.
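The attention mechanism named above can be illustrated in isolation; a minimal numpy sketch of attention pooling over Bi-LSTM-style hidden states (the dimensions and the random "learned" attention vector are stand-ins, not the paper's trained model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """H: (timesteps, features) hidden states; w: (features,) attention vector.
    Returns a single context vector: the attention-weighted sum of H rows."""
    scores = softmax(H @ w)   # one normalized weight per timestep
    return scores @ H         # (features,)

rng = np.random.default_rng(0)
H = rng.standard_normal((10, 8))   # e.g. Bi-LSTM outputs for 10 tokens
w = rng.standard_normal(8)         # stand-in for a learned attention vector
ctx = attention_pool(H, w)
print(ctx.shape)  # (8,)
```

In the full model, `ctx` would be passed on to a dense classification layer.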

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this article, an aspect-based sentiment analysis (ABSA) model was proposed to identify fine-grained opinion polarity towards a specific aspect associated with a given target.
Abstract: With the evolving digital era, a huge amount of online data, such as product reviews in different languages, is generated via various social media platforms. Analysing this information is very beneficial for many companies, such as online service providers. The task of interpreting and classifying the emotions behind a text (review) using text-analysis techniques is known as sentiment analysis (SA). Sometimes a sentence has positive as well as negative polarity at the same time, giving rise to conflict situations in which SA models may not be able to predict the polarity precisely. This problem can be addressed by aspect-based sentiment analysis (ABSA), which identifies fine-grained opinion polarity towards a specific aspect associated with a given target. The aspect category helps us understand the sentiment analysis problem better. In this work, ABSA is performed on the Hindi benchmark dataset, which contains reviews from multiple web sources. The proposed model uses two different word embedding algorithms, Word2Vec and fastText, for feature generation, and various machine learning (ML) and deep learning (DL) models for classification. For the ABSA task, the LSTM model outperformed the other ML and DL models with 57.93% and 52.32% accuracy using features from Word2Vec and fastText, respectively. In most cases, classification models with Word2Vec embeddings performed better than those with fastText embeddings.
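A minimal sketch of the embedding-based feature generation mentioned above, with a tiny made-up vocabulary standing in for trained Word2Vec/fastText vectors (averaging word vectors is one common way to turn embeddings into a sentence feature; the paper's exact scheme may differ):

```python
import numpy as np

# toy 2-d embedding table standing in for Word2Vec / fastText vectors
emb = {
    "good": np.array([0.9, 0.1]),
    "food": np.array([0.2, 0.8]),
    "bad":  np.array([-0.9, 0.1]),
}

def sentence_vector(tokens, emb, dim=2):
    """Average the vectors of known tokens; zeros if none are in-vocabulary."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

v = sentence_vector(["good", "food"], emb)
print(v)  # [0.55 0.45]
```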

5 citations


Journal ArticleDOI
TL;DR: Experimental results reveal that incorporating saliency features represents human perception well and produces good retrieval performance.
Abstract: Considering the gap between low-level image features and high-level retrieval concepts, this paper investigates the effect of incorporating visual-saliency-based features in content-based image retrieval (CBIR). Visual saliency plays an important role in human perception due to its capability to focus attention on the point of interest, i.e. an intended target. This selection-based processing can be well exploited in localized CBIR systems, since in the context of CBIR users are interested only in certain parts of the image. The proposed methodology uses the Dynamic Mode Decomposition framework to extract a saliency map that highlights the part of the image that grabs human attention. Then, based on the saliency map, an efficient salient-edge detection model is introduced. Visual-saliency-based features (salient region, salient edges) are then combined with texture and color features to form a high-dimensional feature vector for image retrieval. State-of-the-art learning-based CBIR models demand user feedback to model the retrieval concept. In contrast with these models, the proposed CBIR system does not require any user interaction, since it uses perceptual-level features for the retrieval task. Performance of the proposed CBIR system is evaluated and confirmed on images from Wang's dataset using benchmark evaluation metrics like precision and recall. Experimental results reveal that incorporating saliency features represents human perception well and produces good retrieval performance.
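The evaluation metrics named above are standard; a minimal Python version for a single query (the retrieved and relevant image IDs are invented for illustration):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query's retrieved image set."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# toy example: 4 images retrieved, 3 are truly relevant, 2 overlap
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 7])
print(p, r)  # 0.5 0.6666666666666666
```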

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: DMD has recently gained increased interest due to its ability to mine meaningful information from available measurements, and it has revolutionized the analysis and modeling of physical systems in fluid dynamics, neuroscience, financial trading markets, multimedia, smart grids, etc.
Abstract: The unprecedented availability of high-fidelity data measurements in various disciplines of engineering and the physical and medical sciences reinforces the development of more sophisticated algorithms for data processing and analysis. More advanced algorithms are required to extract the spatiotemporal features concealed in the data that represent the system dynamics. Advanced data-driven algorithms pave the way to understanding the dominant dynamical behavior and thus improve the capacity for tasks such as forecasting, control, and modal analysis. One such emerging method for data-driven analysis is dynamic mode decomposition (DMD). The DMD algorithm was introduced by Peter J. Schmid in 2010 on the foundation of the Koopman operator (Schmid, J Fluid Mech 656:5–28, 2010). It is basically a decomposition algorithm with the intelligence to identify the spatial patterns and temporal features of data measurements. DMD has recently gained increased interest due to its ability to mine meaningful information from available measurements. It has revolutionized the analysis and modeling of physical systems in fluid dynamics, neuroscience, financial trading markets, multimedia, smart grids, etc. Its ability to recognize spatiotemporal patterns makes DMD prominent among similar algorithms. The DMD algorithm merges the characteristics of proper orthogonal decomposition (POD) and the Fourier transform.
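The exact-DMD algorithm sketched above fits in a few lines of numpy; this toy version recovers the eigenvalues of a known linear system (the rank and the test system are illustrative choices, not from the chapter):

```python
import numpy as np

def dmd(X, r):
    """Exact DMD on a snapshot matrix X (one snapshot per column),
    truncated to rank r. Returns the DMD eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]                   # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]             # rank-r truncation
    Atil = U.conj().T @ X2 @ Vh.conj().T / s       # low-rank operator U* A U
    eigvals, W = np.linalg.eig(Atil)
    Phi = (X2 @ Vh.conj().T / s) @ W               # DMD modes
    return eigvals, Phi

# toy linear system x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5
A = np.diag([0.9, 0.5])
X = np.empty((2, 12))
X[:, 0] = [1.0, 1.0]
for k in range(11):
    X[:, k + 1] = A @ X[:, k]

eigvals, _ = dmd(X, r=2)
print(sorted(eigvals.real))  # ≈ [0.5, 0.9]
```

The recovered eigenvalues encode the temporal (growth/oscillation) behavior, while the columns of `Phi` are the corresponding spatial patterns.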

2 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, the authors analyse the effect of dimensionality reduction, via the dynamic mode decomposition (DMD) technique, on open set domain adaptation for hyperspectral image classification.
Abstract: Hyperspectral image (HSI) classification has many applications in the area of remote sensing. In recent years, deep learning has been accepted as a powerful tool for feature extraction, ensuring better classification accuracies. In this paper, a model for HSI classification is created by implementing open set domain adaptation and generative adversarial networks (GANs). Open set domain adaptation is a type of domain adaptation in which the target has classes that are not present in the source distribution. The huge dimensionality of hyperspectral images needs to be reduced for efficient classification. In this work, we analysed the effect of dimensionality reduction, using the dynamic mode decomposition (DMD) technique, on open set domain adaptation for hyperspectral image classification. Experimental results show that retaining 20% of the available bands of the Salinas dataset and 30% of the bands of the PaviaU dataset is the highest achievable reduction in feature dimension that preserves almost the same classification accuracy.
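To illustrate the kind of spectral-dimension reduction described above, here is a simple SVD-based projection that keeps a chosen fraction of the band dimension; this is a stand-in for the chapter's DMD-based reduction, with made-up data sizes:

```python
import numpy as np

def reduce_bands(X, fraction=0.2):
    """Project pixels onto the leading right-singular vectors of the
    (centered) band dimension, keeping `fraction` of the original bands.
    A stand-in for the DMD-based reduction described above."""
    n_bands = X.shape[1]
    r = max(1, int(round(fraction * n_bands)))
    U, s, Vh = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return X @ Vh[:r].T, r

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 30))   # 100 pixels x 30 spectral bands (toy)
Xr, r = reduce_bands(X, fraction=0.2)
print(Xr.shape, r)  # (100, 6) 6
```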

2 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the effect of different frequency ranges (octaves) on the performance of wavelet decomposition, variational decomposition and dynamic mode decomposition was evaluated on seven different octaves ranging from 1 to 7.
Abstract: Wavelet decomposition, variational mode decomposition, and dynamic mode decomposition are recent signal processing tools now being utilized in the music domain. Most of the work on these algorithms in the music domain reports results based on pitch contour; none of it examines the effect of different frequency ranges (octaves) on these algorithms. In this paper, the three decomposition methods are evaluated on pitch estimation of piano notes across different octaves, in order to identify the most suitable method for pitch estimation. A comparative evaluation is performed on piano recordings taken from the database of the Electronic Music Studios, University of Iowa, using absolute mean logarithmic error as the evaluation metric. The evaluation covers seven octaves, ranging from 1 to 7. Variational mode decomposition performed best throughout the 7 octaves. Wavelet decomposition also performed well but was less accurate than variational mode decomposition, and dynamic mode decomposition was the least accurate of the three methods.
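One plausible form of the absolute mean logarithmic error metric named above, measured in octaves (the exact formula is an assumption, since it is not spelled out here):

```python
import numpy as np

def abs_mean_log_error(f_est, f_true):
    """Mean absolute base-2 log ratio between estimated and reference
    pitches (Hz), i.e. the average error measured in octaves."""
    f_est = np.asarray(f_est, dtype=float)
    f_true = np.asarray(f_true, dtype=float)
    return np.mean(np.abs(np.log2(f_est / f_true)))

# A4 (440 Hz) estimated perfectly; A5 (880 Hz) estimated one octave too low
print(abs_mean_log_error([440.0, 440.0], [440.0, 880.0]))  # 0.5
```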

2 citations


Book ChapterDOI
01 Jan 2021
TL;DR: This chapter explores all sorts of ambiguity, focusing mainly on ambiguity due to polysemy in verbs, and also explores resolving polysemy in Malayalam verbs using context similarity.
Abstract: Generally, verbs are polysemous in any language, as their number is smaller than that of other categories, including nouns. Mostly, the meaning of a verb is decided by the words with which it collocates. This contextual dependency keeps the number of verbs small in most, if not all, languages; so, by default, verbs become highly polysemous. In textual or spoken contexts, polysemy does not create problems, as one can infer the correct meaning of a verb from the context of its occurrence. But in a computational context, polysemy is a problem. As a machine does not have the knowledge a human brain has, it must be given knowledge by some means to interpret the meaning of a verb correctly. Polysemy is a problem in the interpretation of Malayalam verbs too. Resolving polysemy in Malayalam verbs is needed for any NLP activity in Malayalam, including machine translation, where ambiguity due to polysemy is a crucial problem. This chapter explores all sorts of ambiguity, focusing mainly on ambiguity due to polysemy in verbs, and also explores resolving polysemy in Malayalam verbs using context similarity.
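The context-similarity idea can be sketched as cosine similarity between a context vector and per-sense vectors; the English gloss "cut", the sense labels, and the toy vectors below are illustrative stand-ins, not examples from the chapter:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# toy sense vectors for a polysemous verb, built from its context words
senses = {
    "cut(divide)": np.array([1.0, 0.1]),
    "cut(reduce)": np.array([0.1, 1.0]),
}

def disambiguate(context_vec, senses):
    """Pick the sense whose vector is most similar to the context vector."""
    return max(senses, key=lambda s: cosine(context_vec, senses[s]))

print(disambiguate(np.array([0.9, 0.2]), senses))  # cut(divide)
```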

Posted Content
TL;DR: In this article, with the help of fully convolutional geometric features, the authors extract and learn high-level semantic features from CAD models with inductive transfer learning.
Abstract: Manufacturing industries have widely adopted the reuse of machine parts as a method to reduce costs and as a sustainable manufacturing practice. Identifying reusable features from the design of the parts and finding similar features in the database is an important part of this process. In this project, with the help of fully convolutional geometric features, we extract and learn high-level semantic features from CAD models with inductive transfer learning. The extracted features are then compared with those of other CAD models from the database using the Frobenius norm, and identical features are retrieved. Later, we passed the extracted features to a deep convolutional neural network with a spatial pyramid pooling layer, and the performance of feature retrieval increased significantly. It is evident from the results that the model can effectively capture the geometrical elements of machining features.
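The Frobenius-norm comparison described above amounts to ranking database feature matrices by their distance to the query's features; a toy sketch (the feature names and matrices are invented for illustration):

```python
import numpy as np

def frob_dist(F1, F2):
    """Frobenius-norm distance between two feature matrices."""
    return np.linalg.norm(F1 - F2, ord="fro")

def retrieve(query, database, k=2):
    """Return the k database entries whose features are closest to the query."""
    ranked = sorted(database, key=lambda item: frob_dist(query, item[1]))
    return [name for name, _ in ranked[:k]]

# toy 3x3 feature matrices standing in for learned CAD features
q = np.eye(3)
db = [
    ("slot",   np.eye(3) + 0.01),   # nearly identical to the query
    ("hole",   np.ones((3, 3))),
    ("pocket", np.eye(3) * 2),
]
print(retrieve(q, db, k=1))  # ['slot']
```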