Author

S. Kalaivani

Bio: S. Kalaivani is an academic researcher from VIT University. The author has contributed to research in the topics of change detection and adaptive filtering, has an h-index of 4, and has co-authored 11 publications receiving 47 citations.

Papers
Proceedings ArticleDOI
20 Apr 2017
TL;DR: In the proposed work, image processing algorithms and an artificial neural network are employed to design an automated process for early-stage detection of lung cancer.
Abstract: Cancer detection is generally carried out manually by trained professionals; such techniques are mainly useful for advanced-stage detection, involve a very tedious procedure, and are highly dependent on the individual examiner. This introduces a high possibility of human error in the detection process, which necessitates an automated process. Hence, this paper aims at early detection of cancer through an automated process, to minimize human error and make the process more accurate and hassle-free. In the proposed work, image processing algorithms and an artificial neural network are employed to design an automated process for early-stage detection of lung cancer.
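The abstract's pipeline ends in a neural-network classifier operating on image-derived features. As a minimal sketch (the paper's actual architecture and trained weights are not given, so the layer sizes, feature names, and random weights below are assumptions), a tiny feed-forward pass might look like:

```python
import numpy as np

# Hypothetical illustration: a tiny feed-forward network scoring a nodule
# feature vector (e.g. area, perimeter, eccentricity) for malignancy.
# Weights are random placeholders; in practice they would be trained.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(features, w_hidden, w_out):
    hidden = sigmoid(features @ w_hidden)   # hidden-layer activations
    return sigmoid(hidden @ w_out)          # malignancy score in (0, 1)

w_hidden = rng.normal(size=(3, 5))
w_out = rng.normal(size=(5,))
score = classify(np.array([0.4, 0.7, 0.2]), w_hidden, w_out)
print(0.0 < score < 1.0)  # True: the sigmoid output is probability-like
```

The sigmoid output keeps the score in (0, 1), so a fixed threshold (e.g. 0.5) turns it into a binary early-stage decision.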

29 citations

Journal ArticleDOI
TL;DR: A new method is developed for correcting intensity inhomogeneity using a non-iterative multi-scale approach that does not require segmentation or any prior knowledge of the scanner or subject.

22 citations

Proceedings ArticleDOI
01 Nov 2017
TL;DR: To remove white Gaussian noise, the discrete wavelet transform technique is used; each of the techniques shows an increased Signal to Noise Ratio (SNR) after processing, as seen in the simulation results.
Abstract: For greater advancement in future communication, efficient noise reduction algorithms with low complexity are a necessity. Noise in audio signals poses a great challenge in speech recognition, speech communication, speech enhancement and transmission. Hence, the most efficient algorithm for noise reduction must be chosen such that the cost of noise removal is as low as possible while a large portion of the noise is removed. The common approach is optimal linear filtering, which includes algorithms such as Wiener filtering, Kalman filtering and the spectral subtraction technique; here, the noisy signal is passed through a filter or transformation. However, due to the complexity of these algorithms, there are better alternatives such as the Signal Dependent Rank Order Mean (SD-ROM) algorithm, which removes noise from audio signals while retaining the characteristics of the signal, and which can be adjusted depending on the characteristics of the noise. To remove white Gaussian noise, the discrete wavelet transform technique is used. After each technique is applied to the samples, SNR and elapsed time are calculated. All of the above techniques show an increased Signal to Noise Ratio (SNR) after processing, as seen in the simulation results.
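The comparison above hinges on measuring SNR before and after denoising. A generic sketch of that measurement (using a simple moving-average filter as a stand-in; none of the paper's specific algorithms, signals, or parameters are reproduced here):

```python
import numpy as np

# Add white Gaussian noise to a clean tone, denoise with a simple
# moving-average filter, and compare SNR before and after. The tone
# frequency, noise level, and filter length are illustrative choices.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(scale=0.3, size=t.size)

def snr_db(signal, estimate):
    noise = signal - estimate
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

kernel = np.ones(9) / 9.0                       # 9-tap moving average
denoised = np.convolve(noisy, kernel, mode="same")

print(snr_db(clean, denoised) > snr_db(clean, noisy))  # True
```

The same before/after SNR measurement applies unchanged whichever denoiser (Wiener, Kalman, SD-ROM, wavelet thresholding) is plugged in.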

18 citations

Journal ArticleDOI
TL;DR: An effective retrospective correction method is introduced for intensity inhomogeneity, an inherent artifact in MR images, that does not require preprocessing, predefined specifications, or parametric models critically controlled by user-defined parameters.

10 citations

Journal ArticleDOI
TL;DR: The proposed feature-based approach has been analyzed qualitatively and quantitatively, with application to spectral unmixing, by comparison with two well-known dimension reduction techniques, namely Principal Component Analysis and Linear Discriminant Analysis.
Abstract: In this paper an approach for dimension reduction of hyperspectral images using the scale invariant feature transform (SIFT) is introduced. Due to the high dimensionality of hyperspectral cubes, it is very difficult to select a few informative bands from the original hyperspectral remote sensing images. Bands with the maximum amount of non-redundant information are chosen using the dissimilarity matrix obtained from the SIFT-transformed image. The performance of the dimension reduction technique is analyzed by implementing a post-processing technique, spectral unmixing: the process of extracting endmembers and generating their abundance maps. Endmembers are extracted from the selected informative bands using N-FINDR, and abundance maps are generated using fully constrained least squares estimation. The algorithms are implemented in MATLAB. The proposed feature-based approach has been analyzed qualitatively and quantitatively, with application to spectral unmixing, by comparison with two well-known dimension reduction techniques, namely Principal Component Analysis and Linear Discriminant Analysis. Hyperspectral images find applications in astronomy, agriculture, geosciences and surveillance.
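Spectral unmixing rests on the linear mixing model: each pixel spectrum is a weighted combination of endmember spectra, and the weights (abundances) are recovered by least squares. A toy sketch of that model (the endmember spectra and abundances below are made up; this is plain least squares, not the paper's N-FINDR/fully-constrained estimation):

```python
import numpy as np

# Linear mixing model: pixel = endmembers @ abundances.
# Two synthetic endmember spectra over four bands (columns of the matrix).
endmembers = np.array([[0.9, 0.1, 0.2, 0.4],    # endmember 1 spectrum
                       [0.2, 0.8, 0.3, 0.1]]).T
true_abund = np.array([0.7, 0.3])               # abundances sum to 1
pixel = endmembers @ true_abund                 # observed mixed spectrum

# Unconstrained least-squares inversion recovers the abundances.
est, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(np.allclose(est, true_abund))  # True: exact in the noiseless case
```

The fully constrained variant additionally enforces non-negativity and sum-to-one on the abundances, which matters once noise is present.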

2 citations


Cited by
Proceedings ArticleDOI
23 Apr 2019
TL;DR: The algorithm for lung cancer detection is proposed using methods such as median filtering for image pre-processing, followed by segmentation of the lung region of interest using mathematical morphological operations.
Abstract: Cancer is one of the most serious and widespread diseases, responsible for a large number of deaths every year. Among all the different types of cancer, lung cancer is the most prevalent and has the highest mortality rate. Computed tomography (CT) scans are used for identification of lung cancer as they provide a detailed picture of the tumor in the body and track its growth. Although CT is preferred over other imaging modalities, visual interpretation of these CT scan images can be an error-prone task and can delay lung cancer detection. Therefore, image processing techniques are widely used in medical fields for early-stage detection of lung tumors. This paper presents an automated approach for detection of lung cancer in CT scan images. The proposed algorithm uses median filtering for image pre-processing, followed by segmentation of the lung region of interest using mathematical morphological operations. Geometrical features are computed from the extracted region of interest and used to classify CT scan images as normal or abnormal using a support vector machine.
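One ingredient of the pipeline above is morphological processing of a binary mask followed by geometric feature extraction. A simplified sketch of a single step (a 3x3 dilation built from array shifts, with pixel area as the feature; the structuring element, mask, and feature choice are assumptions, not the paper's exact procedure):

```python
import numpy as np

# Dilate a binary mask with a 3x3 structuring element by OR-ing the mask
# with its eight shifted copies, then compute a simple geometric feature.
def dilate3x3(mask):
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True                    # single seed pixel
area = int(dilate3x3(mask).sum())    # area after one dilation
print(area)  # 9: the seed grows to its full 3x3 neighbourhood
```

Features such as area, perimeter, and eccentricity computed this way become the input vector for the support vector machine classifier.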

45 citations

Proceedings ArticleDOI
01 Apr 2019
TL;DR: This paper applies Principal Component Analysis, K-Nearest Neighbors, Support Vector Machines, Naïve Bayes, Decision Trees and Artificial Neural Networks to detect anomalies, and compares all methods both with and without preprocessing.
Abstract: Lung cancer is a dangerous cancer that is difficult to diagnose. It commonly causes death in both men and women, so fast, accurate analysis of nodules is important for treatment. Various methods have been used for detecting cancer in its early stages. In this paper, machine learning methods are compared for detecting lung cancer nodules. We applied Principal Component Analysis, K-Nearest Neighbors, Support Vector Machines, Naive Bayes, Decision Trees and Artificial Neural Networks to detect anomalies. We compared all methods both with and without preprocessing. The experimental results show that Artificial Neural Networks give the best result with 82.43% accuracy after image processing, and Decision Trees give the best result with 93.24% accuracy without image processing.
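Of the classifiers compared above, K-Nearest Neighbors is the simplest to sketch from scratch. A minimal 1-nearest-neighbour stand-in (the training points and labels are invented; the paper's actual data and model settings are not reproduced):

```python
import numpy as np

# 1-NN: predict the label of the closest training point in feature space.
def predict_1nn(train_X, train_y, x):
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

train_X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.2], [0.9, 0.8]])
train_y = np.array([0, 1, 0, 1])   # 0 = normal, 1 = nodule (illustrative)
print(predict_1nn(train_X, train_y, np.array([0.05, 0.1])))  # 0
print(predict_1nn(train_X, train_y, np.array([0.95, 0.9])))  # 1
```

Running the same train/test split through each classifier, with and without the preprocessing step, gives the accuracy comparison reported in the paper.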

26 citations

Journal ArticleDOI
TL;DR: Experimental results on inhomogeneous medical images indicate the superiority of the FLIC model over the other state-of-the-art segmentation methods in terms of accuracy, robustness, and computational time.
Abstract: Intensity inhomogeneity is one of the main challenges in automatic medical image segmentation. In this paper, fuzzy local intensity clustering (FLIC), which is based on the combination of level set algorithm and fuzzy clustering, is proposed to mitigate the effect of intensity variation and noise contamination. For the FLIC method, the segmentation and bias modification are carried out in a fully automatic and simultaneous manner through the local clustering of intensity and selection of the initial contour by the fuzzy method. Besides, the local entropy is integrated into the FLIC function to improve the contour evolution. Experimental results on inhomogeneous medical images indicate the superiority of the FLIC model over the other state-of-the-art segmentation methods in terms of accuracy, robustness, and computational time.
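The clustering ingredient of FLIC can be illustrated in isolation. Below is classic fuzzy c-means on 1-D intensities only, a simplified stand-in for the local intensity clustering described above, not the full FLIC level-set formulation (cluster count, fuzzifier, and data are assumed):

```python
import numpy as np

# Fuzzy c-means: alternate between updating cluster centers from fuzzy
# memberships and updating memberships from distances to the centers.
def fcm(intensities, c=2, m=2.0, iters=100):
    rng = np.random.default_rng(2)
    u = rng.random((c, intensities.size))
    u /= u.sum(axis=0)                          # memberships sum to 1
    for _ in range(iters):
        centers = (u**m @ intensities) / (u**m).sum(axis=1)
        d = np.abs(intensities[None, :] - centers[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d**p * (1.0 / d**p).sum(axis=0))
    return centers, u

# Two well-separated intensity populations (e.g. tissue vs background).
data = np.concatenate([np.full(50, 10.0), np.full(50, 100.0)])
centers, _ = fcm(data)
print(np.sort(centers))   # centers converge near 10 and 100
```

In FLIC this clustering is performed locally and coupled with the level-set evolution, which is what lets it absorb the slowly varying bias field.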

16 citations

Journal ArticleDOI
TL;DR: A new model for simultaneous intensity bias correction and destriping through introducing two sparsity constraints is presented, fundamentally different from the existing denoising techniques and simultaneously estimates the sharp image, intensity bias, and stripe components.
Abstract: Infrared (IR) images are often contaminated by obvious intensity bias and stripes, which severely affect the visual quality and subsequent applications. It is challenging to simultaneously eliminate the mixed nonuniformity noise without blurring the fine image details in low-textured IR images. In this article, we present a new model for simultaneous intensity bias correction and destriping by introducing two sparsity constraints. One is that the model fit on the intensity bias should be as accurate as possible: a bivariate polynomial model is built to characterize the global smoothness of the intensity bias. The other is that a unidirectional variational sparse model can concisely represent the directional characteristic of stripe noise. A computationally efficient numerical algorithm based on split Bregman iteration is used to solve the complex optimization problem. The proposed method is fundamentally different from existing denoising techniques and simultaneously estimates the sharp image, intensity bias, and stripe components. Significant improvement in image quality is achieved in both simulated and real studies. Both qualitative and quantitative comparisons with state-of-the-art correction methods demonstrate its superiority.
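The bivariate polynomial idea above can be sketched directly: express the smooth bias as a low-order polynomial in the pixel coordinates and fit its coefficients by least squares. The polynomial order, image size, and synthetic bias below are assumptions for illustration, not the paper's model:

```python
import numpy as np

# Fit b(x, y) = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2 to an image.
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w] / 32.0
true_bias = 1.0 + 0.5 * x - 0.3 * y + 0.2 * x * y   # smooth synthetic bias

# Build the design matrix: one column per polynomial basis term.
basis = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)
A = basis.reshape(-1, 6)

coef, *_ = np.linalg.lstsq(A, true_bias.ravel(), rcond=None)
fitted = (A @ coef).reshape(h, w)
print(np.allclose(fitted, true_bias))  # True: bias lies in the model span
```

Because the polynomial can only represent globally smooth surfaces, subtracting the fit removes the bias while leaving fine image detail and stripe noise for the other term of the model to handle.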

15 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This work uses outliers and a mean filter to improve Gaussian noise reduction performance, and experimental results show that the proposed approach outperforms other filter approaches.
Abstract: In the multimedia era, removal of noise from an image is a key challenge in digital image processing (DIP) and computer vision. Noise may be mixed into an image at capture time, during transmission, or due to dust particles on the screen of the capturing device. Therefore, removal of these unwanted signals is required for better analysis of the image, and the de-noised image is more useful for object detection, edge detection and many other tasks. There are various types of image noise; however, Gaussian noise and impulse noise are the most commonly found. This work uses outliers and a mean filter to improve Gaussian noise reduction performance. In the experimental assessments, artificial noise was added using MATLAB to the MSRA (10k images) dataset, which is used to evaluate the proposed technique. The experimental results show that the proposed approach improves noise reduction performance over other filter approaches.
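A baseline mean filter for Gaussian noise, of the kind the paper builds on, is easy to sketch. This is the generic 3x3 mean filter only, not the paper's outlier-aware variant, and the image and noise level are synthetic assumptions:

```python
import numpy as np

# 3x3 mean filter: average each pixel with its eight neighbours by
# summing shifted copies of the image (edges wrap, which is fine here).
def mean_filter3x3(img):
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

rng = np.random.default_rng(3)
clean = np.full((64, 64), 0.5)                    # flat synthetic image
noisy = clean + rng.normal(scale=0.1, size=clean.shape)
filtered = mean_filter3x3(noisy)

mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(clean, filtered) < mse(clean, noisy))   # True
```

Averaging nine independent noise samples cuts the noise variance by roughly a factor of nine on flat regions; the cost is blurring at edges, which is exactly what outlier-aware variants try to avoid.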

14 citations