Author

Senthil Murugan

Bio: Senthil Murugan is an academic researcher from Amrita Vishwa Vidyapeetham. The author has contributed to research in topics: Multiplier & Encoder. The author has an h-index of 3 and has co-authored 12 publications receiving 29 citations.

Papers
Proceedings ArticleDOI
01 Apr 2017
TL;DR: This paper investigates the design of image steganography, in which steganography is depicted in different ways; the method used for the design is the LSB XOR substitution method, which improves security.
Abstract: This paper investigates the design of image steganography, where steganography is depicted in different ways. The term originates from Greek: ancient people conceived of communicating without the knowledge of a third person. Steganography is commonly split into two parts, where “stegano/stego” means “covered” (the messages being hidden) and “graphy” means writing. The method used here to describe the design is the LSB XOR substitution method, which improves security; it is one of the more appropriate and simplest methods among the alternatives. It is a data hiding method used for security purposes. This paper also gives a brief idea, through an algorithm using encryption and decryption techniques, of how security is improved. A random 8-bit secret key is initially XORed with the RGB colors, which drives the embedding of the shared data; after replacing the LSB of each pixel, the extracted data is the actual message recovered from the encoding method. The scheme has the capacity to hide a larger number of characters than existing data hiding and data sharing features. Modern steganography is used to provide improved security on mobile devices, such as securing Android, and to check and verify the latest operating system to add further improvements to the device.
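As a rough illustration of the embed/extract scheme the abstract describes, here is a minimal Python sketch of LSB XOR substitution. The key-cycling detail and the helper names are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch: XOR each message bit with a bit of an 8-bit secret
# key, then write the result into the least significant bit (LSB) of a
# pixel channel value. Extraction XORs the recovered LSB with the same
# key bit to undo the masking.

def embed(pixels, message_bits, key):
    """pixels: flat list of 8-bit channel values; key: 8-bit integer."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        key_bit = (key >> (i % 8)) & 1                  # cycle through key bits
        stego[i] = (stego[i] & 0xFE) | (bit ^ key_bit)  # replace the LSB
    return stego

def extract(stego, n_bits, key):
    return [(stego[i] & 1) ^ ((key >> (i % 8)) & 1) for i in range(n_bits)]

# Round-trip check on toy data.
cover = [120, 35, 200, 17, 99, 250, 64, 128]
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(cover, bits, key=0b10110010), len(bits), key=0b10110010) == bits
```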

14 citations

Proceedings ArticleDOI
03 Mar 2016
TL;DR: The proposed JPEG-LS algorithm, based on LOCO-I, is implemented in MATLAB; it uses a predictive technique, and the resulting prediction error is encoded using Golomb-Rice coding.
Abstract: The LOCO-I / JPEG-LS algorithm aims at providing lossless compression ratios but with much lower algorithmic complexity. The official designation of JPEG-LS is ISO-14495-1/ITU-T.87. JPEG-LS is a simple and efficient algorithm that mainly consists of two stages: modeling and encoding. It thus divides the whole compression process into two phases, spatial pixel prediction and entropy coding, and uses contexts in the first as well as the second phase. The algorithm uses a predictive technique, and the resulting prediction error is encoded using Golomb-Rice coding. The proposed JPEG-LS algorithm based on LOCO-I is implemented in MATLAB.
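For readers unfamiliar with the two stages named above, the following Python sketch shows the LOCO-I median edge detector (MED) predictor and a plain Golomb-Rice code for a mapped, nonnegative residual; it deliberately omits JPEG-LS's context modelling and adaptive parameter selection:

```python
# Simplified JPEG-LS building blocks: MED prediction plus Golomb-Rice
# coding of the sign-folded prediction error.

def med_predict(a, b, c):
    """a = left neighbour, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)       # edge detected: clamp towards the smaller
    if c <= min(a, b):
        return max(a, b)       # edge detected: clamp towards the larger
    return a + b - c           # smooth region: planar prediction

def rice_encode(n, k):
    """Golomb-Rice code of a nonnegative integer n with parameter k."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient, k-bit remainder

# Example: predict one pixel and code the mapped residual.
err = 106 - med_predict(a=100, b=104, c=98)     # prediction error = 2
mapped = 2 * err if err >= 0 else -2 * err - 1  # fold sign to nonnegative
print(rice_encode(mapped, k=2))                 # -> "1000"
```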

7 citations

Proceedings ArticleDOI
01 Mar 2016
TL;DR: This paper proposes a neural network-based Proportional-Integral-Derivative (PID) controller for position control of a DC motor that can handle both linear and nonlinear systems by training the network.
Abstract: This paper proposes a neural network-based Proportional-Integral-Derivative (PID) controller for position control of a DC motor. The conventional PID controller is widely used in industry for control operations, but it is difficult to tune its parameters. An artificial neural network can be used for tuning the PID controller and yields a robust design. The artificial neural network-based controller can handle both linear and nonlinear systems by training the network. Simulation is performed in MATLAB.
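As a sketch of the idea, the snippet below implements a discrete PID update whose gains are supplied externally; the `nn_gains` stub stands in for a trained neural network and is purely hypothetical, not the authors' model:

```python
# Discrete PID controller; an ANN would supply (kp, ki, kd) instead of
# hand-tuned constants.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def nn_gains(err, d_err):
    """Stub for a neural network mapping the error signal to PID gains."""
    return 2.0, 0.5, 0.1    # a trained network would output these

# One control step for a position error of 0.3 rad.
kp, ki, kd = nn_gains(0.3, 0.0)
u = PID(kp, ki, kd, dt=0.01).step(0.3)
print(u)
```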

7 citations

Journal ArticleDOI
02 Mar 2020-PeerJ
TL;DR: The implementation results show that there is barely any change in the LUTs used or in power dissipation due to the insertion of the proposed Trojan circuits, thus establishing the surreptitious nature of the Trojan.
Abstract: Integrated circuits may be vulnerable to hardware Trojan attacks during their design or fabrication phases. This article is a case study of the design of a Viterbi decoder and the effect of hardware Trojans on a coded communication system employing the Viterbi decoder. A design for a Viterbi decoder and possible hardware Trojan models for it are proposed. An FPGA-based implementation of the decoder and the associated Trojan circuits is discussed. The noise-added encoded input data stream is stored in the block RAM of the FPGA, and the decoded data stream is monitored on a PC through a universal asynchronous receiver-transmitter (UART) interface. The implementation results show that there is barely any change in the LUTs used (0.5%) or power dissipation (3%) due to the insertion of the proposed Trojan circuits, thus establishing the surreptitious nature of the Trojans. Although the Trojans cause negligible changes in the circuit parameters, they produce significant changes in the bit error rate (BER). In the absence of Trojans, the BER drops to zero for signal-to-noise ratios (SNRs) above 6 dB, but with Trojans present, the BER does not reduce to zero even at very high SNRs. This holds even when the Trojan is activated only once during the entire transmission.
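As background on the decoder under attack, here is a textbook hard-decision Viterbi decoder in Python for the classic rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); it is an illustrative model, not the paper's FPGA design, and the Trojan circuits are not reproduced:

```python
# Encoder and Viterbi decoder for the (7,5) rate-1/2 convolutional code.
G = (0b111, 0b101)  # generator polynomials

def conv_encode(bits):
    state, out = 0, []
    for u in bits:
        reg = (u << 2) | state                            # [u_t, u_t-1, u_t-2]
        out += [bin(reg & g).count("1") & 1 for g in G]   # parity per generator
        state = reg >> 1                                  # keep last two inputs
    return out

def viterbi_decode(rx, n_bits):
    INF = 10**9
    metric = [0] + [INF] * 3                 # start in the all-zero state
    paths = [[] for _ in range(4)]
    for t in range(n_bits):
        r = rx[2 * t: 2 * t + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(a != b for a, b in zip(expected, r))
                ns = reg >> 1
                if m < new_metric[ns]:       # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]
coded = conv_encode(msg)
coded[3] ^= 1                                  # inject one channel error
assert viterbi_decode(coded, len(msg)) == msg  # the decoder corrects it
```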

4 citations

Proceedings ArticleDOI
01 Apr 2019
TL;DR: This paper focuses on bringing out the performance variations of a Max-log-MAP algorithm based turbo decoder when implemented with fixed-point, Vedic, and Booth multipliers.
Abstract: The framework of convolutional coding in use today incorporates decoder designs whose performance varies with the underlying algorithm’s efficiency. Traditional decoder design methodology started from Viterbi’s algorithm and is currently trending towards different implementations of the MAP algorithm. As process and technology advance from the basic fixed-point multiplier to the present Booth multiplier, decoding performance varies. The decoder’s performance, and hence its reliability, can be enhanced by implementing more systematic and efficient underlying multipliers. This paper focuses on bringing out the performance variations of a Max-log-MAP algorithm based turbo decoder when implemented with fixed-point, Vedic, and Booth multipliers.
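Of the multipliers compared, Booth's is the least self-explanatory; below is a behavioural Python sketch of radix-2 Booth recoding (an algorithmic model, not an RTL design, and not the paper's implementation):

```python
def booth_multiply(m, r, bits=8):
    """Signed multiply via radix-2 Booth recoding: each bit pair
    (r_i, r_{i-1}) contributes +m, -m, or 0 at weight 2**i."""
    prod, prev = 0, 0
    for i in range(bits):
        bit = (r >> i) & 1    # Python ints sign-extend, so negative r works
        if bit != prev:
            prod += (m << i) if prev else -(m << i)  # pair 01 adds, 10 subtracts
        prev = bit
    return prod

assert booth_multiply(-7, 3) == -21
assert booth_multiply(12, -5) == -60
```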

4 citations


Cited by
Journal ArticleDOI
TL;DR: This research concludes that SSIM is a better measure of imperceptibility in all aspects, and that future steganographic research should at least use SSIM.
Abstract: Peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are two measures widely used in image quality assessment. In steganography in particular, these two measures are used to assess the quality of imperceptibility. PSNR came into use earlier than SSIM; it is simple, has been widely used in all kinds of digital image measurements, and has long been considered tested and valid. SSIM is a newer measure, designed around three factors, namely luminance, contrast, and structure, to better match the workings of the human visual system. Some research has discussed the correlation and comparison of these two measures, but no research explicitly discusses and suggests which one is more suitable for steganography. This study aims to review, verify, and analyze the results of PSNR and SSIM measurements on three spatial-domain image steganography methods: LSB, PVD, and CRT. Color images were chosen as cover images because human vision is more sensitive to color changes than to grayscale changes. The test results reveal several conflicting findings: LSB scores best on PSNR, while PVD scores best on SSIM. Additionally, histogram changes are more noticeable for LSB and CRT than for PVD. Other analyses, such as the RS attack, also show results that align better with SSIM measurements than with PSNR. Based on the testing and analysis, this research concludes that SSIM is a better measure of imperceptibility in all aspects, and future steganographic research should at least use SSIM.
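To make the two metrics concrete, here is a minimal Python sketch: PSNR as defined, and SSIM in a simplified single-window (global) form, whereas practical SSIM averages the statistic over local windows:

```python
import math

def psnr(cover, stego, peak=255):
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255):
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2    # standard stabilizers
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

cover = [52, 55, 61, 59, 79, 61, 76, 61]
stego = [53, 55, 60, 59, 79, 60, 76, 61]   # LSB-style +/-1 perturbation
print(psnr(cover, stego), ssim_global(cover, stego))
```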

204 citations

Book ChapterDOI
19 Jul 2020
TL;DR: In this article, the authors discuss the theoretical impact of explainability on trust in AI and demonstrate what the usage of explainable artificial intelligence can look like in a health-related setting.
Abstract: Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention over the last years, for instance in medical diagnostics. While the algorithms’ complexity is a reason for their increased performance, it also leads to the ‘black box’ problem, consequently decreasing trust in AI. In this regard, “Explainable Artificial Intelligence” (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this paper, we first discuss the theoretical impact of explainability on trust in AI, and then showcase what the usage of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) in image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two different deep learning models often used for Computer Vision: a Convolutional Neural Network and a Multi-Layer Perceptron. Our empirical results show that (i) the AI sometimes used questionable or irrelevant image features to detect malaria (even when predicting correctly), and (ii) there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and AI systems in general, especially through increased understandability and predictability.
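As a toy illustration of the gradient-based saliency idea behind many XAI methods, the snippet below ranks the input features of a single sigmoid neuron by how strongly the output reacts to each; the model and its weights are purely hypothetical, not the paper's networks:

```python
import math

def predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))           # sigmoid "disease" score

def saliency(x, w, b):
    p = predict(x, w, b)
    return [abs(wi * p * (1 - p)) for wi in w]  # |dp/dx_i| at this input

w, b = [0.8, -0.1, 2.3], -0.5
x = [0.4, 0.9, 0.2]
print(saliency(x, w, b))  # the third feature dominates this decision
```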

34 citations

Dissertation
01 Jan 2000

28 citations

Journal ArticleDOI
TL;DR: A procedural approach to an ANN, trained, validated, and tested at 10 meteorological stations in central Chile over approximately 8 years, showed good performance in predicting minimum temperature; frost detection results had an appropriate 98% overall mean accuracy, 86% sensitivity, and 2% error rate.
Abstract: Predicting future climatic events is one of the key issues in many fields, whether scientific or industrial. An artificial neural network (ANN) model of the backpropagation type was developed in this study to predict the minimum air temperature of the following day from meteorological data, using air temperature, relative humidity, radiation, precipitation, and wind direction and speed to detect the occurrence of radiative frost events. The configuration of the next-day ANN prediction system allows it to run on low-power computing machines; it is able to generate early warnings that can lead to effective strategies for reducing crop damage, quality degradation, and losses in agricultural production. This paper presents a procedural approach to an ANN, which was trained, validated, and tested at 10 meteorological stations in central Chile over approximately 8 years (2010-2017). The overall mean results were classified by a confusion matrix and showed good performance in predicting minimum temperature, with a mean square error (MSE) of 2.99 °C for the network, 1.71 °C for training, 1.77 °C for validation, and 1.74 °C for the testing processes. Frost detection results had an appropriate 98% overall mean accuracy (ACC), 86% sensitivity (TPR), and 2% error rate (ER). Differences and errors in frost detection can be attributed to several factors, mainly associated with the accuracy of the meteorological stations’ sensors, local climatic and geographic conditions, and the number of parameters entering the ANN training process.
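For reference, the three reported detection metrics come straight from a binary confusion matrix; the sketch below computes them from illustrative counts (not the study's data):

```python
def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    acc = (tp + tn) / total   # overall accuracy (ACC)
    tpr = tp / (tp + fn)      # sensitivity (TPR): frost events caught
    err = (fp + fn) / total   # error rate (ER)
    return acc, tpr, err

print(metrics(tp=86, tn=894, fp=8, fn=12))  # hypothetical counts
```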

20 citations

Journal ArticleDOI
TL;DR: The results suggest that between-host microbiome variation within the Sable Island horse population is driven more strongly by bacterial dispersal and ecological drift than by differential selective pressures, emphasizing the need to consider alternative ecological processes in the study of microbiomes.
Abstract: Studies of microbiome variation in wildlife often emphasize host physiology and diet as proximate selective pressures acting on host-associated microbiota. In contrast, microbial dispersal and ecological drift are more rarely considered. Using amplicon sequencing, we characterized the bacterial microbiome of adult female (n = 86) Sable Island horses (Nova Scotia, Canada) as part of a detailed individual-based study of this feral population. Using data on sampling date, horse location, age, parental status, and local habitat variables, we contrasted the ability of spatiotemporal, life history, and environmental factors to explain microbiome diversity among Sable Island horses. We extended inferences made from these analyses with both phylogeny-informed and phylogeny-independent null modelling approaches to identify deviations from stochastic expectations. Phylogeny-informed diversity measures were correlated with spatial and local habitat variables, but null modelling results suggested that heterogeneity in ecological drift, rather than differential selective pressures acting on the microbiome, was responsible for these correlations. Conversely, phylogeny-independent diversity measures were best explained by host spatial and social structure, suggesting that the taxonomic composition of the microbiome was shaped most strongly by bacterial dispersal. Parental status was important but correlated with measures of β-dispersion rather than β-diversity (mares without foals had lower α-diversity and more variable microbiomes than mares with foals). Our results suggest that between-host microbiome variation within the Sable Island horse population is driven more strongly by bacterial dispersal and ecological drift than by differential selective pressures. These results emphasize the need to consider alternative ecological processes in the study of microbiomes.

19 citations