
Showing papers by "Goutam Saha published in 2021"


Journal ArticleDOI
TL;DR: This article experiments with deep-learning methodologies for echocardiogram (echo) analysis, a promising and actively researched medical imaging technique, and finds that deep-learning methodologies perform better than an SVM baseline for normal-versus-abnormal classification.
Abstract: This article experiments with deep-learning methodologies for the echocardiogram (echo), a promising and actively researched medical imaging technique. The paper involves two different kinds of classification of echo data. First, classification into normal (absence of abnormalities) or abnormal (presence of abnormalities) is performed using 2D echo images, 3D Doppler images, and videographic images. Second, videographic echo images are classified by type of regurgitation, namely Mitral Regurgitation (MR), Aortic Regurgitation (AR), Tricuspid Regurgitation (TR), and a combination of the three. Two deep-learning methodologies are used for these purposes: a Recurrent Neural Network (RNN) based methodology (Long Short-Term Memory (LSTM)) and an Autoencoder-based methodology (Variational AutoEncoder (VAE)). The use of videographic images distinguishes this work from existing work based on Support Vector Machines (SVMs), and the application of deep-learning methodologies is among the first in this particular field. It was found that the deep-learning methodologies outperform the SVM baseline in normal-versus-abnormal classification. Overall, the VAE performs better on 2D and 3D Doppler images (static images), while the LSTM performs better on videographic images.
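The abstract gives no implementation details, but the gated recurrence at the heart of the LSTM branch can be sketched in a few lines of NumPy. Everything below (feature dimension, hidden size, random weights, the logistic readout for the normal/abnormal decision) is a toy illustration, not the authors' model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; the four gates are slices of a fused projection."""
    z = W @ x + U @ h_prev + b          # shape (4*hidden,)
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])                 # input gate
    f = sigmoid(z[H:2*H])               # forget gate
    o = sigmoid(z[2*H:3*H])             # output gate
    g = np.tanh(z[3*H:4*H])             # candidate cell state
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

def lstm_classify(frames, W, U, b, w_out):
    """Run the LSTM over a sequence of frame features; logistic on last h."""
    H = b.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in frames:
        h, c = lstm_step(x, h, c, W, U, b)
    return sigmoid(w_out @ h)           # toy P(abnormal)

rng = np.random.default_rng(0)
D, H, T = 8, 4, 10                      # feature dim, hidden size, frames
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
frames = rng.normal(size=(T, D))        # stand-in for per-frame echo features
p = lstm_classify(frames, W, U, b, rng.normal(0, 0.1, H))
```

In practice the per-frame features would come from a CNN or hand-crafted descriptors over each video frame; the recurrence is what lets the model exploit the temporal structure that static 2D/3D images lack.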

20 citations


Journal ArticleDOI
TL;DR: A novel Self-weighted Multi-view Multiple Kernel Learning (SMVMKL) framework is proposed using multiple kernels on multiple views that automatically assigns appropriate weight to each kernel of each view without introducing an additional parameter.

11 citations


Book ChapterDOI
01 Jan 2021
TL;DR: A lightweight hash-based Blockchain (LightBC), which adapts the SPONGENT hash function, is proposed for the IoT; it has been emulated and compared with a SHA-256 based Blockchain on a Blockchain emulator, with satisfactory results for up to 8000 nodes.
Abstract: Blockchain technology is one of the key technologies with the potential to solve many Internet of Things (IoT) challenges. The IoT environment consists of numerous resource-constrained devices, and the security and privacy of these devices have become primary concerns among consumers and businesses. Although Blockchain could provide better security and privacy for these devices, their limited memory, battery life, and processing capabilities make Blockchain-IoT integration very challenging. Current implementations of Blockchain use cryptographic schemes such as SHA-256 and ECDSA; however, the resource-constrained nature of IoT devices demands lightweight versions of the Blockchain. In this work, a lightweight hash-based Blockchain (LightBC) is proposed for the IoT, which adapts the SPONGENT hash function. It has been emulated and compared with a SHA-256 based Blockchain on a Blockchain emulator, and satisfactory results were found for up to 8000 nodes. An IoT architecture has also been proposed for implementing the scheme.
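The hash-chaining that any such Blockchain relies on is easy to illustrate. SPONGENT has no standard-library implementation, so the sketch below substitutes SHA-256 purely as a stand-in digest; swapping in a lightweight hash would change only `block_hash`:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic digest of a block (SHA-256 stand-in for SPONGENT)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, data, prev_hash):
    return {"index": index, "data": data, "prev_hash": prev_hash}

def build_chain(records):
    """Each block commits to the digest of its predecessor."""
    chain = [make_block(0, "genesis", "0" * 64)]
    for i, rec in enumerate(records, start=1):
        chain.append(make_block(i, rec, block_hash(chain[-1])))
    return chain

def verify_chain(chain):
    """Any tampered block breaks every link after it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = build_chain(["sensor:22.5C", "sensor:22.7C"])
assert verify_chain(chain)
chain[1]["data"] = "tampered"       # mutate one IoT reading
assert not verify_chain(chain)      # integrity check now fails
```

The point of a lightweight hash in this design is that `block_hash` is the operation a constrained node executes most often, so its cost dominates the device's energy and memory budget.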

5 citations


Journal ArticleDOI
TL;DR: In this article, a hybrid CNN-U-Net is used for SAR image segmentation, where pre-defined filters are first applied to the images, which are then fed to a hybrid CNN derived from the Inception and U-Net architectures.

5 citations


Book ChapterDOI
TL;DR: A new SDN-based 6LoWPAN-IoT infrastructure, namely SD-6LN, has been developed, which enhances the availability, reliability, and scalability of resource-constrained networks.
Abstract: State-of-the-art smart communication systems such as the Internet of Things (IoT) suffer from various limitations in availability, reliability, scalability, interoperability, security, and privacy. The software-defined network (SDN) is an approach with many advantages that can solve some of these IoT challenges. IoT and SDN are two different categories of networking systems, but if they can be merged, many IoT challenges can be resolved. Practically, it is not possible to discard the existing IoT infrastructure and replace it with an entirely new system. In this paper, an initiative has been undertaken to incorporate SDN features into the existing 6LoWPAN-based IoT infrastructure. A new SDN-based 6LoWPAN-IoT infrastructure, namely SD-6LN, has been developed, which enhances the availability, reliability, and scalability of resource-constrained networks. The experimental results were satisfactory with respect to round-trip time, jitter, and packet drop.

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a gated variant of the recurrent neural network known as the gated recurrent unit (GRU) was used to forecast runoff values, and its performance was compared with that of another well-known gated RNN variant, long short-term memory (LSTM).
Abstract: Runoff estimation has been an active area of research in the field of hydrology. It is often considered one of the most complicated processes owing to its spatio-temporal distribution, inadequate data, and the uncertainties of the contributing factors. In recent years, soft computing techniques have proved effective in modeling such complex phenomena. Particularly with the success of artificial neural networks and variants such as the recurrent neural network, computational modeling techniques have achieved a satisfactory level of acceptance. In this paper, we present a sophisticated gated variant of the recurrent neural network known as the gated recurrent unit (GRU) to forecast runoff values. The performance of the GRU is then compared with that of another well-known gated variant of the RNN, long short-term memory (LSTM). RMSE and model training time are used as performance evaluation criteria for the selected models. Experimental results suggest that the GRU performs on par with the LSTM model, and in some cases better, with significantly reduced computational time.
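The GRU's speed advantage over the LSTM comes from having three gate projections rather than four and no separate cell state. A toy NumPy forward pass (with made-up dimensions and a random linear readout standing in for the runoff forecast, not the paper's trained model) illustrates the recurrence:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Wr, Wh, Uz, Ur, Uh):
    """One GRU time step: update gate z, reset gate r, candidate h_tilde."""
    z = sigmoid(Wz @ x + Uz @ h_prev)            # how much to refresh state
    r = sigmoid(Wr @ x + Ur @ h_prev)            # how much history to expose
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde        # interpolated new state

rng = np.random.default_rng(1)
D, H = 3, 5                                      # e.g. rainfall/temp inputs
Ws = [rng.normal(0, 0.1, (H, D)) for _ in range(3)]
Us = [rng.normal(0, 0.1, (H, H)) for _ in range(3)]
h = np.zeros(H)
for x in rng.normal(size=(12, D)):               # 12 toy time steps
    h = gru_step(x, h, *Ws, *Us)
runoff = rng.normal(0, 0.1, H) @ h               # linear readout = forecast
```

Counting parameters makes the training-time comparison concrete: per hidden unit the GRU needs 3(D+H) recurrent weights against the LSTM's 4(D+H), which is roughly the 25% saving behind the reduced computational time the abstract reports.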

4 citations


Posted Content
TL;DR: In this paper, a cross-corpora performance evaluation for spoken language identification (LID) was conducted for the Indian languages, where three Indian spoken language corpora were selected: IIITH-ILSC, LDC South Asian, and IITKGP-MLILSC.
Abstract: In this paper, we conduct one of the very first studies of cross-corpora performance evaluation for the spoken language identification (LID) problem. Cross-corpora evaluation has not been explored much in LID research, especially for Indian languages. We have selected three Indian spoken language corpora: IIITH-ILSC, LDC South Asian, and IITKGP-MLILSC. For each corpus, LID systems are trained on the state-of-the-art time-delay neural network (TDNN) based architecture with MFCC features. We observe that LID performance degrades drastically under cross-corpora evaluation. For example, the system trained on the IIITH-ILSC corpus shows an average EER of 11.80% when evaluated on the same corpus but 43.34% when evaluated on the LDC South Asian corpus. Our preliminary analysis shows significant differences among these corpora in terms of the long-term average spectrum (LTAS) and signal-to-noise ratio (SNR). Subsequently, we apply different feature-level compensation methods to reduce the cross-corpora acoustic mismatch. Our results indicate that these feature normalization schemes can help achieve promising LID performance in cross-corpora experiments.
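The abstract does not name the specific normalization schemes, but a standard feature-level compensation for this kind of acoustic mismatch is per-utterance cepstral mean and variance normalization (CMVN), which removes corpus-dependent channel offsets from the MFCC stream. The random matrix below is a stand-in for real MFCCs:

```python
import numpy as np

def cmvn(features):
    """Per-utterance cepstral mean and variance normalization.
    features: (n_frames, n_coeffs) MFCC matrix."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-8   # guard against zero variance
    return (features - mu) / sigma

rng = np.random.default_rng(0)
mfcc = rng.normal(loc=5.0, scale=3.0, size=(200, 13))  # stand-in features
norm = cmvn(mfcc)
# each coefficient track now has zero mean and unit variance
```

Because the statistics are estimated per utterance, a constant convolutive channel difference between corpora (which shifts the cepstral mean) is cancelled before the TDNN ever sees the features.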

3 citations


Book ChapterDOI
01 Jan 2021
TL;DR: The design of a blockchain-based e-voting system that ensures the security and authenticity of the voting process; initial experimentation showed satisfactory prospects for the system.
Abstract: A vote is one of the most important legal rights a citizen can exercise in a democratic country. Presently, votes are cast in two ways: manual mode, i.e., paper voting, and semi-manual mode using electronic voting machines (EVMs). Both systems require voters to assemble in centralized voting booths. Several discrepancies arise from the lack of security management in a centralized voting system; these can be avoided by implementing a decentralized voting system with strong security. Blockchain is a technology that acts as the backbone of highly secure parallel economies such as the Bitcoin system. If this technology is applied as a security measure in a decentralized e-voting system, it can ensure the security and authenticity of the voting process. In this paper, an endeavor has been made to design a blockchain-based e-voting system. Initial experimentation showed satisfactory prospects for the system.

3 citations



Book ChapterDOI
12 Jun 2021
TL;DR: In this paper, a technique is proposed to design smaller S-boxes that can be used in lightweight block ciphers, hash functions, etc. The design technique used in the AES S-box was adopted and simplified to construct these smaller S-boxes.
Abstract: Emerging areas such as the IoT have computing environments consisting of numerous interconnected, resource-constrained devices that communicate with each other. These devices need to operate in a secure environment; however, conventional cryptography is unsuitable because of their low computational and memory resources. Security for such devices can instead be ensured using lightweight cryptography. In this paper, a technique is proposed to design smaller S-boxes that can be used in lightweight block ciphers, hash functions, etc. The design technique used in the AES S-box was adopted and simplified to construct these smaller S-boxes. The proposed S-boxes were compared with those used in the PRESENT cipher and the LUFFA hash function in terms of different cryptographic properties and parameters. In addition, the change in nonlinearity of the proposed S-boxes was calculated with reference to that of the AES S-box.
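One of the cryptographic properties used in such comparisons, nonlinearity, can be computed directly from the Walsh spectrum of the S-box. The sketch below evaluates it for the published PRESENT S-box as a reference point (an optimal 4-bit S-box with nonlinearity 4); the authors' proposed S-boxes are not reproduced here:

```python
# PRESENT 4-bit S-box (from the PRESENT cipher specification)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(v):
    """Parity of the bits of v, i.e. the GF(2) dot product already masked."""
    return bin(v).count("1") & 1

def nonlinearity(sbox, n=4):
    """NL = 2^(n-1) - max|W(a,b)|/2 over all nonzero output masks b, where
    W(a,b) = sum_x (-1)^(b.S(x) XOR a.x) is the Walsh transform."""
    best = 0
    for b in range(1, 1 << n):          # nonzero output masks
        for a in range(1 << n):         # all input masks
            w = sum((-1) ** (parity(a & x) ^ parity(b & sbox[x]))
                    for x in range(1 << n))
            best = max(best, abs(w))
    return (1 << (n - 1)) - best // 2

print(nonlinearity(SBOX))
```

Higher nonlinearity means every linear approximation of the S-box is a worse predictor, which is exactly the resistance to linear cryptanalysis that the comparison against PRESENT and LUFFA measures.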

Book ChapterDOI
TL;DR: In this paper, SegNet and U-Net were applied to the microarray dataset of colon cancer (typically containing tumour and normal tissue samples) to extract the culprit/responsible gene.
Abstract: Bioinformatics data can be used for the prediction of diseases in different organisms. Microarray technology is a special 2D representation of genomic data characterized by an enormous number of genes across a handful of samples. Analysis of these data involves extracting or selecting the relevant genes from this vast amount of irrelevant and redundant data; the selected genes can then be used to predict the classes of unknown samples. In this work, we have implemented two popular deep-learning segmentation architectures, namely SegNet and U-Net. These techniques have been applied to a microarray dataset of colon cancer (containing tumour and normal tissue samples) to extract the responsible genes. The performance of the reduced set formed from these genes has been compared across different classifiers against existing feature selection methods. It is found that both deep-learning based approaches outperform the other methods. Lastly, the biological significance of the genes has been verified using ontological tools, and the results are significant.

Posted Content
TL;DR: In this article, the authors introduce the scattering transform for speech emotion recognition (SER); it generates feature representations that remain stable under deformations and shifts in time and frequency without much loss of information.
Abstract: This paper introduces the scattering transform for speech emotion recognition (SER). The scattering transform generates feature representations that remain stable under deformations and shifts in time and frequency without much loss of information. In speech, emotion cues are spread across time and localised in frequency. The time- and frequency-invariance of the scattering coefficients provides a representation that is robust against emotion-irrelevant variations (e.g., different speakers, languages, and genders) while preserving the variations caused by emotion cues; such a representation therefore captures the emotion information in speech more efficiently. We perform experiments comparing scattering coefficients with standard mel-frequency cepstral coefficients (MFCCs) on different databases. It is observed that frequency-domain scattering performs better than time-domain scattering and MFCCs. We also investigate layer-wise scattering coefficients to analyse the importance of the shift- and deformation-stable scalogram and modulation spectrum coefficients for SER, and observe that layer-wise coefficients taken independently also outperform MFCCs.
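A first-order scattering coefficient is just a wavelet modulus followed by lowpass averaging, which is what produces the time-shift stability described above. The Gabor filters, bandwidths, and toy tone below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def gauss_win(n, sigma):
    t = np.arange(n) - n // 2
    return np.exp(-0.5 * (t / sigma) ** 2)

def first_order_scattering(x, center_freqs, sigma=16.0):
    """S1[f] = lowpass(|x * psi_f|): Gabor wavelet modulus, then a wide
    Gaussian average that discards phase (hence small time shifts)."""
    n = len(x)
    phi = gauss_win(n, 4 * sigma)                 # lowpass averaging window
    phi /= phi.sum()
    t = np.arange(n) - n // 2
    coeffs = []
    for f in center_freqs:                        # f in cycles/sample
        psi = gauss_win(n, sigma) * np.exp(2j * np.pi * f * t)
        env = np.abs(np.convolve(x, psi, mode="same"))   # modulus layer
        coeffs.append(np.convolve(env, phi, mode="same"))
    return np.array(coeffs)                       # (n_filters, n)

sr = 1000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 50 * t)                    # toy 50 Hz tone
S = first_order_scattering(x, center_freqs=[0.02, 0.05, 0.1])
# the band centred at 50/1000 = 0.05 cycles/sample responds most strongly
```

Second-order coefficients repeat the same wavelet-modulus operation on each envelope `env`, recovering the modulation-spectrum information that the first averaging discards; that is the layer-wise structure the paper analyses.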

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors attempt to discover potential and accurate gene indicators from gene expression data using a well-known density-based method called quantum clustering, in which the total number of clusters is not predetermined but is determined by the nature of the data.
Abstract: Proper investigation of cancer has always been of foremost importance for its accurate forecasting, thereby aiding the correct cure. Microarray-based gene expression profiling is practised for this purpose, making it one of the leading research interests for discovering the gene clusters accountable for a particular behavior. Big data analytics provides an efficient way to seek facts about the biological processes inherent in these microarray data. Previously, many attempts have been made to achieve this using numerous clustering approaches, but the results deviated considerably from reality. In this work, we have attempted to discover potential and accurate gene indicators from gene expression data using a well-known density-based method called quantum clustering. The characteristic feature of this approach is that the total number of clusters is not predetermined but is determined by the nature of the data. Since the concept rests on the idea that a cluster is a dense region whose centre lies at a density maximum, it motivated us to detect clusters that may be engaged in a certain biological process. The approach has the advantage that extremely dense regions are inherently detected and combined to produce arbitrarily shaped clusters regardless of the dimension of the space. For comparison, we have also applied a non-parametric method, namely mean shift clustering, to the gene expression data. For validation, we used DAVID to check the significance of the clusters created. Results show that the genes so discovered are highly indicative in the pursuit of rare diseases.
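The density-maxima idea can be made concrete with the quantum potential of Horn and Gottlieb's quantum clustering, whose local minima mark cluster centres without a preset cluster count. The 1-D toy data and scale parameter below are illustrative only, not the paper's gene expression setup:

```python
import numpy as np

def quantum_potential(grid, data, sigma=0.5):
    """Quantum potential derived from the Parzen-window estimator
    psi(x) = sum_i exp(-(x - x_i)^2 / 2 sigma^2); in 1-D, up to a constant,
    V(x) = sum_i (x - x_i)^2 K_i / (2 sigma^2 sum_i K_i).
    Local minima of V sit at density maxima, i.e. cluster centres."""
    V = np.empty_like(grid)
    for j, x in enumerate(grid):
        d2 = (x - data) ** 2
        K = np.exp(-d2 / (2 * sigma ** 2))
        V[j] = (d2 * K).sum() / (2 * sigma ** 2 * K.sum())
    return V

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.3, 40), rng.normal(3, 0.3, 40)])
grid = np.linspace(-5, 5, 201)
V = quantum_potential(grid, data)
# interior local minima of V: the cluster count emerges from the data
mins = [grid[j] for j in range(1, 200) if V[j] < V[j - 1] and V[j] < V[j + 1]]
```

The single scale `sigma` plays the role the abstract describes: it controls how dense regions merge, but the number of resulting minima, and hence clusters, is read off the data rather than supplied in advance.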

Book ChapterDOI
01 Jan 2021
TL;DR: A clustering-based feature selection algorithm to select the particular genes responsible for a particular disease is proposed and compared with two other well-established feature selection techniques under three different classification approaches, in terms of accuracy, precision, recall, and F-score.
Abstract: Genes are the blueprint for all activities of living systems, helping them to sustain a stable life cycle under normal conditions. Any mistake in genetic regulation can disturb this synchronous activity and cause a disease. Identifying the particular disease-causing genes is therefore a very significant research area in bioinformatics. In this paper, we propose a clustering-based feature selection algorithm to select the particular genes responsible for a particular disease, using a well-established clustering algorithm, mean shift clustering. Each cluster represents genes with characteristics different from those of genes in other clusters; from each cluster, we fetch only the cluster centre and test our model on the dataset with reduced dimension. We opted for a density-based approach for its ability to determine the number of clusters by itself. Our algorithm is evaluated on publicly available benchmark datasets and compared with two other well-established feature selection techniques under three different classification approaches, in terms of accuracy, precision, recall, and F-score. The proposed algorithm performed well in most cases.
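Mean shift itself is compact enough to sketch: every point is iteratively moved to the kernel-weighted mean of its neighbourhood until it reaches a density mode, and coincident modes become the cluster centres retained by the selection step. The bandwidth, merge tolerance, and toy 2-D data below are illustrative choices, not the paper's gene data:

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, n_iter=50, merge_tol=0.5):
    """Shift every point to its local density maximum under a Gaussian
    kernel, then merge converged points into distinct cluster centres."""
    modes = X.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            d2 = np.sum((X - modes[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))      # kernel weights
            modes[i] = (w[:, None] * X).sum(axis=0) / w.sum()
    centres = []
    for m in modes:                                     # deduplicate modes
        if not any(np.linalg.norm(m - c) < merge_tol for c in centres):
            centres.append(m)
    return np.array(centres)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(5, 0.2, (30, 2))])
centres = mean_shift(X, bandwidth=1.0)
# two well-separated blobs -> two centres, with no cluster count supplied
```

In the paper's setting each point would be a gene's expression profile, and the returned centres are the reduced feature set handed to the downstream classifiers.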

Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a detailed offline multiple-fault diagnosis is presented in conjunction with a detection technique for cross-referencing digital microfluidic biochips; the proposed algorithm can detect multiple faults anywhere within a cross-referencing chip while satisfying the electrode interference constraint as well as the dynamic fluidic constraints.
Abstract: In this paper, a detailed offline multiple-fault diagnosis is presented in conjunction with a detection technique for cross-referencing digital microfluidic biochips (DMFBs). Owing to their underlying mixed-technology nature, biochips exhibit distinctive failure mechanisms and defects; therefore, both online and offline test procedures are needed to certify the dependability of a system. With technological advancement, DMFB designs are being upgraded day by day, and the cross-referencing type of architecture, in which the pin count is reduced drastically, is also receiving attention. Finding multiple faulty electrodes in this type of architecture is a great challenge. The proposed algorithm can detect multiple faults anywhere within a cross-referencing chip while satisfying the electrode interference constraint as well as the dynamic fluidic constraints. The result analysis shows a significant improvement in fault diagnosis time.

Posted Content
TL;DR: In this paper, the speaker diarization system developed by the ABSP Laboratory team for the third DIHARD speech diarization challenge is described; the primary contribution is an acoustic domain identification (ADI) system for speaker diarization.
Abstract: This report describes the speaker diarization system developed by the ABSP Laboratory team for the third DIHARD speech diarization challenge. Our primary contribution is an acoustic domain identification (ADI) system for speaker diarization. We investigate a speaker-embedding based ADI system and apply a domain-dependent threshold for agglomerative hierarchical clustering. In addition, we optimize the parameters for PCA-based dimensionality reduction in a domain-dependent way. Integrating these domain-based processing schemes into the challenge's baseline system achieved relative improvements in DER of 9.63% and 10.64% for the core and full conditions, respectively, on Track 1 of the DIHARD III evaluation set.
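The PCA-based reduction step can be sketched with an SVD, with the number of retained components left as a per-domain knob, as the report describes. The embedding sizes and random rank-3 data below are arbitrary stand-ins for real speaker embeddings:

```python
import numpy as np

def pca_reduce(E, n_components):
    """Project speaker embeddings onto the top principal directions.
    E: (n_segments, dim) embedding matrix; n_components can be chosen
    per acoustic domain (the domain-dependent tuning in the report)."""
    Ec = E - E.mean(axis=0)                      # centre the embeddings
    U, s, Vt = np.linalg.svd(Ec, full_matrices=False)
    return Ec @ Vt[:n_components].T, s           # projections + spectrum

rng = np.random.default_rng(0)
E = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 128))  # rank-3 toy data
reduced, s = pca_reduce(E, n_components=3)
# singular values beyond the true rank are numerically zero, so the
# spectrum itself suggests how many components a given domain needs
```

Choosing `n_components` per domain matters because clean studio speech and noisy web audio concentrate their embedding variance in very different numbers of directions.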

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, an edge-enabled SDN (FoSDN) architecture is proposed, which helps reduce delay, packet drop, and latency by enabling computation at the edge of the network.
Abstract: In recent years, edge computing has been widely used for the Internet of Things (IoT). The general idea of edge computation is to provide computation near the point of data generation, which helps reduce delay and packet drop, important parameters to consider when designing an IoT network. The software-defined network (SDN) is a convenient technology that can be used to perform computation at the edge instead of in the cloud to enhance network performance, and its architectural advantages can provide a more robust basis for edge-based solutions in IoT networks. In this paper, an edge-enabled SDN (FoSDN) architecture is proposed, which helps reduce delay, packet drop, and latency by enabling computation at the edge of the network. The performance of the proposed FoSDN architecture was examined using the Mininet-WiFi simulator, and the simulation results were found to be satisfactory.

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, works related to denoising of brain magnetic resonance imaging (MRI) and cardiac echo images have been studied and implemented, with filters compared using mean square error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM).
Abstract: Preprocessing plays a vital role in imaging by reducing noise and other unwanted data, and it is needed in many fields, including medical imaging, where many images are contaminated with noise. In this paper, works related to denoising of brain magnetic resonance imaging (MRI) and cardiac echo images have been studied and implemented. Numerous traditional filters were compared for this purpose using mean square error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM).
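The three evaluation measures are standard and easy to state in NumPy. Note that the SSIM below is a simplified single-window (global) variant rather than the usual sliding-window form, and the noisy image is synthetic, for illustration only:

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

def ssim_global(a, b, peak=255.0):
    """Single-window SSIM over the whole image (a simplification of the
    usual 11x11 sliding-window mean)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in clean image
noisy = img + rng.normal(0, 10, img.shape)           # additive Gaussian noise
# a good denoising filter should raise PSNR and SSIM back toward the
# identical-image values while driving MSE toward zero
```

MSE and PSNR measure only pixel-wise error, while SSIM compares luminance, contrast, and structure, which is why filter rankings under the two families can differ on medical images.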