scispace - formally typeset
Author

Somsak Choomchuay

Bio: Somsak Choomchuay is an academic researcher from King Mongkut's Institute of Technology Ladkrabang. The author has contributed to research in topics: Low-density parity-check code & Concatenated error correction code. The author has an h-index of 10 and has co-authored 55 publications receiving 275 citations.
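For reference, the h-index quoted above is defined as the largest h such that the author has h papers with at least h citations each. A minimal sketch of that computation (illustrative only, not SciSpace's implementation):

```python
def h_index(citations):
    """h-index: largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)   # most-cited first
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:                         # paper at this rank still clears the bar
            h = rank
        else:
            break
    return h
```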

Papers published on a yearly basis

Papers
Proceedings ArticleDOI
01 Dec 2016
TL;DR: This paper automatically detects and classifies the severity of diabetic retinopathy by applying an artificial neural network (ANN); the system achieves a classification accuracy of 96% and offers great help to ophthalmologists.
Abstract: Diabetic retinopathy is a retinal disease caused by the effect of diabetes on the eyes. The main risk of the disease is that it can lead to blindness. Detecting the disease at an early stage can rescue patients from loss of vision. The major purpose of this paper is to automatically detect and classify the severity of diabetic retinopathy. At first, the lesions on the retina, especially blood vessels, exudates and microaneurysms, are extracted. Features such as area, perimeter and count from these lesions are used to classify the stages of the disease by applying an artificial neural network (ANN). We used 214 fundus images from the DIARETDB1 and local databases. We found that the system can give a classification accuracy of 96%, which is of great help to ophthalmologists.
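The feature-extraction step described in the abstract (area, perimeter and count of segmented lesions) can be sketched roughly as follows; this is a hypothetical illustration over a binary lesion mask, not the authors' code:

```python
def lesion_features(mask):
    """Extract (total area, total perimeter, lesion count) from a binary mask.
    mask: list of lists of 0/1; lesions are 4-connected components of 1s."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    area = perimeter = count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                        # new connected component
                stack = [(y, x)]
                seen[y][x] = True
                while stack:                      # flood fill this lesion
                    cy, cx = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if mask[ny][nx]:
                                if not seen[ny][nx]:
                                    seen[ny][nx] = True
                                    stack.append((ny, nx))
                            else:
                                perimeter += 1    # edge facing background
                        else:
                            perimeter += 1        # edge facing image border
    return area, perimeter, count
```

Feature vectors built this way would then be fed to the ANN classifier described in the paper.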

35 citations

Proceedings ArticleDOI
01 Mar 2017
TL;DR: A simple method capable of segmenting the nuclei of a variety of cells from microscopic cytology pleural fluid images is proposed, and the results are very promising.
Abstract: The automated segmentation of cell nuclei is critical for the diagnosis and classification of cancers in pleural fluid. This task is essential since the morphology of cell nuclei, such as size, shape and stain color, is mainly associated with cell proliferation and malignancy. It remains challenging due to inconsistent stain color, poor contrast, the variety and large number of cells, cell overlapping and other microscopic imaging artifacts. In this paper, we propose a simple method capable of segmenting the nuclei of a variety of cells from microscopic cytology pleural fluid images. In the proposed method, the original image is first enhanced using a median filter. Next, the enhanced image is converted into the L*a*b* color space, and the L* and b* components are extracted. The cell nuclei are segmented into a binary image using Otsu thresholding. Finally, morphological operations are used to eliminate undesirable artifacts and reconstruct a color segmented image. The proposed method was tested with 25 Papanicolaou (Pap) stained pleural fluid images. The method is relatively simple and the results are very promising.
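The Otsu thresholding step used here for binarization can be sketched as follows; a minimal pure-Python version on 8-bit intensities, assuming the color-component extraction has already produced a flat list of pixel values (illustrative, not the authors' code):

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the threshold t maximizing between-class variance.
    pixels: iterable of 8-bit intensities (0..255)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background pixel count at threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:      # keep the best separating threshold
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold would form the binary nuclei mask that the morphological post-processing then cleans up.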

30 citations

Journal ArticleDOI
TL;DR: A novel Computer Aided Diagnosis (CAD) system for the detection of cancer cells in cytological pleural effusion (CPE) images and a novel hybrid feature selection method based on simulated annealing combined with an artificial neural network was developed to select the most discriminant and biologically interpretable features.
Abstract: Cytological screening plays a vital role in the diagnosis of cancer from microscope slides of pleural effusion specimens. However, this manual screening method is subjective and time-intensive, and it suffers from inter- and intra-observer variations. In this study, we propose a novel Computer Aided Diagnosis (CAD) system for the detection of cancer cells in cytological pleural effusion (CPE) images. Firstly, intensity adjustment and median filtering methods were applied to improve image quality. Cell nuclei were extracted through a hybrid segmentation method based on the fusion of Simple Linear Iterative Clustering (SLIC) superpixels and K-Means clustering. A series of morphological operations were utilized to correct segmented nuclei boundaries and eliminate any false findings. A combination of shape analysis and contour concavity analysis was carried out to detect and split any overlapped nuclei into individual ones. After the cell nuclei were accurately delineated, we extracted 14 morphometric features, 6 colorimetric features, and 181 texture features from each nucleus. The texture features were derived from a combination of color-component-based first-order statistics, the gray-level co-occurrence matrix, and the gray-level run-length matrix. A novel hybrid feature selection method based on simulated annealing combined with an artificial neural network (SA-ANN) was developed to select the most discriminant and biologically interpretable features. An ensemble classifier of bagged decision trees was utilized as the classification model for differentiating cells as either benign or malignant using the selected features. The experiment was carried out on 125 CPE images containing more than 10,500 cells. The proposed method achieved a sensitivity of 87.97%, specificity of 99.40%, accuracy of 98.70%, and F-score of 87.79%.
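The simulated-annealing side of the SA-ANN feature selection can be sketched as below; a toy wrapper search over binary feature-inclusion masks, where `score` stands in for the ANN-based objective (hypothetical, not the authors' implementation):

```python
import math
import random

def sa_feature_select(n_features, score, iters=500, temp=1.0, cool=0.99, seed=0):
    """Simulated-annealing search over binary feature masks.
    score(mask) is the wrapper objective (the paper uses ANN accuracy;
    any function of the mask works for this sketch)."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    cur = best = score(mask)
    best_mask = mask[:]
    for _ in range(iters):
        j = rng.randrange(n_features)
        mask[j] = not mask[j]                       # flip one feature in/out
        new = score(mask)
        # accept improvements always; accept worse moves with Boltzmann probability
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
            if new > best:
                best, best_mask = new, mask[:]      # remember best subset seen
        else:
            mask[j] = not mask[j]                   # reject: revert the flip
        temp *= cool                                # cool the temperature
    return best_mask, best
```

The returned subset would then feed the bagged-decision-tree classifier described in the abstract.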

26 citations

Journal ArticleDOI
TL;DR: A comparative study of twelve nuclei segmentation methods for cytology pleural effusion images shows that the segmentation performances of the Otsu, k-means, mean shift, Chan–Vese, and graph cut methods are 94, 94, 95, 94 and 93%, respectively, with high abnormal nuclei detection rates.
Abstract: Automated cell nuclei segmentation is the most crucial step toward the implementation of a computer-aided diagnosis system for cancer cells. Studies on the automated analysis of cytology pleural effusion images are few because of the lack of reliable cell nuclei segmentation methods. Therefore, this paper presents a comparative study of twelve nuclei segmentation methods for cytology pleural effusion images. Each method involves three main steps: preprocessing, segmentation, and postprocessing. The preprocessing and segmentation stages help enhance the image quality and extract the nuclei regions from the rest of the image, respectively. The postprocessing stage helps refine the segmented nuclei and remove false findings. The segmentation methods are quantitatively evaluated on 35 cytology images of pleural effusion by computing five performance metrics. The evaluation results show that the segmentation performances of the Otsu, k-means, mean shift, Chan–Vese, and graph cut methods are 94, 94, 95, 94, and 93%, respectively, with high abnormal nuclei detection rates. The average computational times per image are 1.08, 36.62, 50.18, 330, and 44.03 seconds, respectively. The findings of this study will be useful for current and potential future studies on cytology images of pleural effusion.
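Of the compared methods, the k-means approach reduces, for two classes on grayscale intensities, to a simple 1-D Lloyd iteration; a minimal sketch of that special case (an illustrative simplification, not the code evaluated in the paper):

```python
def kmeans_threshold(pixels, iters=20):
    """1-D two-means clustering on intensities; returns the midpoint of the
    two final cluster centers, usable as a nucleus/background threshold."""
    lo, hi = float(min(pixels)), float(max(pixels))   # init centers at extremes
    for _ in range(iters):
        mid = (lo + hi) / 2.0                         # decision boundary
        low_c = [p for p in pixels if p <= mid]       # assign pixels to clusters
        high_c = [p for p in pixels if p > mid]
        if low_c:
            lo = sum(low_c) / len(low_c)              # update cluster means
        if high_c:
            hi = sum(high_c) / len(high_c)
    return (lo + hi) / 2.0
```

On well-separated bimodal histograms this converges in a few iterations; the full 12-method comparison in the paper, of course, runs each algorithm on 2-D images.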

17 citations

Journal ArticleDOI
TL;DR: The proposed algorithm can serve as a new supportive tool in the automated diagnosis of cancer cells from cytology images and yields a superior performance to previous studies and other classifiers.
Abstract: Due to the close resemblance between overlapping and cancerous nuclei, the misinterpretation of overlapping nuclei can affect the final decision of cancer cell detection. Thus, it is essential to detect overlapping nuclei and distinguish them from single ones for subsequent quantitative analyses. This paper presents a method for the automated detection and classification of overlapping nuclei from single nuclei appearing in cytology pleural effusion (CPE) images. The proposed system comprises three steps: nuclei candidate extraction, dominant feature extraction, and classification of single and overlapping nuclei. A maximum entropy thresholding method complemented by image enhancement and post-processing was employed for nuclei candidate extraction. For feature extraction, a new combination of 16 geometrical and 10 textural features was extracted from each nucleus region. A double-strategy random forest was used as an ensemble feature selector to select the most relevant features, and as an ensemble classifier to differentiate between overlapping and single nuclei using the selected features. The proposed method was evaluated on 4000 nuclei from CPE images using various performance metrics. The results were 96.6% sensitivity, 98.7% specificity, 92.7% precision, 94.6% F1 score, 98.4% accuracy, 97.6% G-mean, and 99% area under the curve. The computation time required to run the entire algorithm was just 5.17 s. The experimental results demonstrate that the proposed algorithm yields performance superior to previous studies and other classifiers. The proposed algorithm can serve as a new supportive tool in the automated diagnosis of cancer cells from cytology images.
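The maximum-entropy (Kapur) thresholding used for nuclei candidate extraction can be sketched as below; a minimal histogram-based version on 8-bit intensities (illustrative, not the authors' implementation):

```python
import math

def max_entropy_threshold(pixels):
    """Kapur's method: pick the threshold t maximizing the sum of the
    entropies of the background (<= t) and foreground (> t) distributions."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_h = 0, -1.0
    cnt = 0
    for t in range(256):
        cnt += hist[t]                    # background pixel count at threshold t
        if cnt == 0 or cnt == total:      # both classes must be nonempty
            continue
        w_bg, w_fg = cnt / total, 1 - cnt / total
        # entropy of each class's normalized intensity distribution
        h_bg = -sum((h / total) / w_bg * math.log((h / total) / w_bg)
                    for h in hist[:t + 1] if h)
        h_fg = -sum((h / total) / w_fg * math.log((h / total) / w_fg)
                    for h in hist[t + 1:] if h)
        if h_bg + h_fg > best_h:
            best_h, best_t = h_bg + h_fg, t
    return best_t
```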

16 citations


Cited by
Journal ArticleDOI
TL;DR: Comparative results indicate that the new (t, f) features give better performance than time-only or frequency-only features for the detection of abnormalities in newborn EEG signals.

169 citations

Journal ArticleDOI
TL;DR: This work presents a novel CNN model to extract features from retinal fundus images for better classification performance; results indicate that the proposed feature extraction technique, along with the J48 classifier, outperforms all other classifiers on the MESSIDOR, IDRiD, and KAGGLE datasets.

99 citations

Journal ArticleDOI
TL;DR: This survey provides a comprehensive study of the state-of-the-art approaches based on deep learning for the analysis of cervical cytology images and introduces deep learning and its simplified architectures that have been used in this field.
Abstract: Cervical cancer is one of the most common and deadliest cancers among women. Despite that, this cancer is entirely treatable if it is detected at a precancerous stage. The Pap smear test is the most extensively performed screening method for early detection of cervical cancer. However, this hand-operated screening approach suffers from a high false-positive rate because of human errors. To improve on the accuracy of manual screening practice, computer-aided diagnosis methods based on deep learning have been widely developed to segment and classify cervical cytology images automatically. In this survey, we provide a comprehensive study of the state-of-the-art approaches based on deep learning for the analysis of cervical cytology images. Firstly, we introduce deep learning and its simplified architectures that have been used in this field. Secondly, we discuss the publicly available cervical cytopathology datasets and evaluation metrics for segmentation and classification tasks. Then, a thorough review of the recent development of deep learning for the segmentation and classification of cervical cytology images is presented. Finally, we investigate the existing methodologies along with the most suitable techniques for the analysis of Pap smear cells.

85 citations

Proceedings ArticleDOI
01 Jan 2019
TL;DR: A comparative classification of pneumonia using Convolutional Neural Networks on the Labeled Optical Coherence Tomography and Chest X-Ray Images for Classification dataset is described, achieving an average accuracy of 95.30%.
Abstract: In this paper we describe a comparative classification of pneumonia using Convolutional Neural Networks. The database used was the Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification dataset made available by Kermany (2018), with a total of 5863 images in 2 classes: normal and pneumonia. To evaluate the generalization capacity of the models, k-fold cross-validation was used. The classification models proved to be efficient compared to the work of Kermany et al. (2018), which obtained 92.8%; the present work achieved an average accuracy of 95.30%.
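The k-fold cross-validation protocol mentioned in the abstract can be sketched as follows; a minimal index-splitting helper (illustrative, independent of any deep-learning framework):

```python
import random

def kfold_indices(n, k, seed=0):
    """Yield (train, val) index lists for k-fold cross-validation:
    each of the n samples appears in exactly one validation fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducible folds
    folds = [idx[i::k] for i in range(k)]     # k near-equal interleaved folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Each model would be trained k times, once per (train, val) split, and the reported accuracy averaged over the folds.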

75 citations

Journal ArticleDOI
TL;DR: Mamunur et al. proposed DeepCervix, a hybrid deep feature fusion (HDFF) technique based on DL, to classify cervical cells accurately; it achieved state-of-the-art classification accuracies of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and 5-class classification, respectively.

75 citations