
Biomedical Engineering International Conference 

About: Biomedical Engineering International Conference is an academic conference. It publishes mainly in the areas of image segmentation and self-healing hydrogels. Over its lifetime, the conference has published 779 papers, which have received 2,812 citations.


Papers
Proceedings ArticleDOI
01 Nov 2018
TL;DR: A transfer learning scheme based on DenseNet-121 (Huang et al.) was used to classify lung cancer from chest X-ray images, achieving 74.43±6.01% mean accuracy, 74.96±9.85% mean specificity, and 74.68±15.33% mean sensitivity.
Abstract: Since cancer is curable when diagnosed at an early stage, lung cancer screening plays an important role in preventive care. Although both low-dose computed tomography (LDCT) and computed tomography (CT) scans provide more medical information than normal chest X-rays, access to these technologies in rural areas is very limited. There is a recent trend toward using computer-aided diagnosis (CADx) to assist in the screening and diagnosis of cancer from biomedical images. In this study, the 121-layer convolutional neural network known as DenseNet-121 (G. Huang et al.), together with a transfer learning scheme, is explored as a means of classifying lung cancer from chest X-ray images. The model was trained on a lung nodule dataset before being trained on the lung cancer dataset to alleviate the problem of using a small dataset. The proposed model yields a mean accuracy of 74.43±6.01%, a mean specificity of 74.96±9.85%, and a mean sensitivity of 74.68±15.33%. It also provides a heatmap for identifying the location of the lung nodule. These findings are promising for the further development of chest X-ray-based lung cancer diagnosis with deep learning; moreover, the two-stage training addresses the problem of a small dataset.

93 citations
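
The two-stage transfer-learning setup described in the abstract can be sketched roughly as follows with PyTorch/torchvision (a recent torchvision is assumed); dataset handling, training loops, and the two-class heads are illustrative assumptions, not the paper's code:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained DenseNet-121.
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

    # Stage 1: fine-tune on the larger lung-nodule dataset.
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # nodule / no nodule
    # ... train on the nodule dataset ...

    # Stage 2: reuse the fine-tuned backbone for the small lung-cancer dataset.
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # cancer / normal
    # ... train on the cancer dataset ...

    # Because DenseNet ends in global average pooling followed by a linear
    # layer, a class activation map (one plausible way to obtain the nodule
    # heatmap) falls out of the final feature maps:
    def class_activation_map(x, cls):
        feats = model.features(x)[0]  # (1024, H, W) feature maps
        return torch.einsum('c,chw->hw', model.classifier.weight[cls], feats)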

Proceedings ArticleDOI
01 Aug 2017
TL;DR: An improved image enhancement method for digital chest radiography, the so-called N-CLAHE method, combines global and local enhancement and yields a marked improvement in pre-processing correction for digital chest radiography.
Abstract: Digital chest radiography offers many advantages over film-based radiography, such as immediate image display, no film processing or storage room, a wider dynamic range, and a lower radiation dose. In general, a raw X-ray image acquired directly from a digital flat detector is of poor quality and may not be suitable for diagnosis and treatment planning. Therefore, a pre-processing technique is usually required to enhance image quality. This paper presents an improved image enhancement method for digital chest radiography, the so-called N-CLAHE method, which combines global and local enhancement. The proposed technique consists of two main steps. First, intensity correction of the raw image is performed by a log-normalization function, which adjusts the intensity contrast of the image dynamically. Second, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to enhance small details, textures, and local contrast. The proposed approach was tested on a radiographic survey phantom and a radiographic chest phantom and compared with conventional enhancement methods such as histogram equalization, unsharp masking, and CLAHE. The results show that the proposed N-CLAHE method yields a marked improvement in pre-processing correction for digital chest radiography.

49 citations
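
A minimal sketch of the two N-CLAHE steps (log normalization followed by CLAHE), using OpenCV and NumPy; the file name, clip limit, and tile size are assumptions, and the paper's exact normalization constants may differ:

    import cv2
    import numpy as np

    raw = cv2.imread('chest_raw.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

    # Step 1: log normalization -- compress the wide dynamic range of the
    # raw detector image, then rescale to 8 bits.
    norm = cv2.normalize(np.log1p(raw), None, 0, 255, cv2.NORM_MINMAX)
    norm = norm.astype(np.uint8)

    # Step 2: CLAHE -- local enhancement of small details and texture,
    # with a clip limit so noise is not over-amplified.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(norm)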

Proceedings ArticleDOI
22 Mar 2012
TL;DR: The aim of this work is to develop an automatic system that provides a first assessment of a burn injury from burn color images through burn image segmentation and degree-of-burn identification.
Abstract: When a burn injury occurs, the most important step is to treat the injury immediately by identifying the degree of the burn, which can only be diagnosed by specialists. However, burn-trauma specialists are still scarce at some local hospitals, so an automatic system that helps evaluate burns would be extremely beneficial to them. The aim of this work is to develop an automatic system able to provide a first assessment of a burn injury from burn color images. The method consists of two parts: burn image segmentation and degree-of-burn identification. Burn image segmentation employs the Cr-transformation, the Luv-transformation, and fuzzy c-means clustering to separate the burn wound area from healthy skin, after which mathematical morphology is applied to reduce segmentation errors. The segmentation performance is evaluated by the positive predictive value (PPV) and the sensitivity (S). Degree-of-burn identification uses the h-transformation and texture analysis to extract feature vectors, and a support vector machine (SVM) is applied to identify the degree of the burn. The classification results are compared with those of Bayes and k-nearest-neighbor classifiers. The experimental results show that the proposed segmentation algorithm yields good results on burn color images, with a PPV of about 0.92 and a sensitivity of about 0.84. In the degree-of-burn identification experiments, the SVM yields the best result, 89.29% correct classification on the validation sets of the 4-fold cross-validation, and 75.33% correct classification on the blind test.

46 citations
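
The segmentation half of the pipeline (Cr and Luv features, fuzzy c-means, morphological cleanup) might look roughly like this with OpenCV and scikit-fuzzy; the input file, fuzziness parameter, and kernel size are assumptions:

    import cv2
    import numpy as np
    import skfuzzy as fuzz

    img = cv2.imread('burn.jpg')
    cr = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)[..., 1].astype(np.float64)
    u = cv2.cvtColor(img, cv2.COLOR_BGR2Luv)[..., 1].astype(np.float64)

    # Fuzzy c-means on per-pixel (Cr, u) features, two clusters:
    # burn wound vs. healthy skin. Which cluster is the wound must still
    # be decided afterwards, e.g. by comparing the clusters' mean Cr.
    data = np.vstack([cr.ravel(), u.ravel()])
    cntr, mem, *_ = fuzz.cluster.cmeans(data, c=2, m=2.0, error=1e-4, maxiter=100)
    labels = np.argmax(mem, axis=0).reshape(cr.shape).astype(np.uint8) * 255

    # Mathematical morphology to remove small segmentation errors.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(labels, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)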

Proceedings ArticleDOI
01 Nov 2018
TL;DR: Rather than classifying patients directly from 3D MRI, this work classifies them using 2D features generated by a CNN framework and shows that this can outperform probability-score-based softmax classification with a scratch-trained CNN.
Abstract: Various Convolutional Neural Network (CNN) architectures have been proposed for image classification and object recognition. For image-based classification, it is difficult for a CNN to deal with hundreds of MRI slices of almost identical appearance from a single patient, so classifying patients as AD, MCI, or NC from 3D MRI becomes an ill-defined task for a 2D CNN architecture. To address this issue, we simplify the problem: patients are still classified on the basis of their 3D MRI, but via the 2D features generated by the CNN framework. We present our approach for obtaining 2D features from MRI and transforming them so that they can be classified with standard machine learning algorithms. Our experiment reports results for the three-class subject classification. We employed a scratch-trained CNN or a pretrained AlexNet as a generic feature extractor for 2D images, reduced the feature dimensions using PCA + t-SNE, and finally classified with simple machine learning algorithms such as KNN and the Naive Bayes classifier. Although the results are modest, they show that this approach can outperform probability-score-based softmax classification with a scratch-trained CNN. The generated features can be further manipulated and refined for better accuracy, sensitivity, and specificity.

45 citations
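
Once per-slice CNN features have been extracted, the rest of the pipeline (PCA + t-SNE, then a simple classifier) is a few lines of scikit-learn; the feature files, component counts, and neighbor count below are placeholders, not values from the paper:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB

    X = np.load('alexnet_features.npy')  # placeholder: per-slice CNN features
    y = np.load('labels.npy')            # placeholder: AD / MCI / NC labels

    # PCA to ~50 dimensions, then t-SNE to 2, mirroring the PCA + t-SNE step.
    # Note: t-SNE has no transform for unseen data, so all slices are
    # embedded jointly here.
    X2 = TSNE(n_components=2, perplexity=30.0).fit_transform(
        PCA(n_components=50).fit_transform(X))

    knn = KNeighborsClassifier(n_neighbors=5).fit(X2, y)
    nb = GaussianNB().fit(X2, y)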

Proceedings ArticleDOI
01 Oct 2013
TL;DR: Glaucoma is classified from retinal fundus images by extracting two features: the cup-to-disc ratio (CDR) and the ratio of the neuroretinal rim in the inferior, superior, nasal, and temporal quadrants, which is checked for compliance with the ISNT rule.
Abstract: This paper proposes an image processing technique for the early detection of glaucoma. Glaucoma is one of the major causes of blindness, yet it is hard to diagnose in its early stages. In this paper, glaucoma is classified by extracting two features from retinal fundus images: (i) the cup-to-disc ratio (CDR), and (ii) the ratio of the neuroretinal rim in the inferior, superior, temporal, and nasal (ISNT) quadrants, which is checked for compliance with the ISNT rule. The technique is applied to 50 retinal images, achieving an accuracy of 94% with an average computation time of 1.42 seconds.

45 citations
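
The two features are simple to state in code. An illustrative sketch (not the paper's implementation), assuming the optic cup and disc have already been segmented into binary masks and the rim widths measured per quadrant:

    import numpy as np

    def vertical_cdr(cup_mask, disc_mask):
        """Cup-to-disc ratio: cup height over disc height."""
        cup_rows = np.where(cup_mask.any(axis=1))[0]
        disc_rows = np.where(disc_mask.any(axis=1))[0]
        return (cup_rows[-1] - cup_rows[0]) / (disc_rows[-1] - disc_rows[0])

    def obeys_isnt(inferior, superior, nasal, temporal):
        """ISNT rule: rim width should decrease in the order I >= S >= N >= T."""
        return inferior >= superior >= nasal >= temporal

    # A high CDR or an ISNT-rule violation flags the image as glaucoma-suspect.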

Performance Metrics
No. of papers from the Conference in previous years
Year    Papers
2022    59
2019    69
2018    77
2017    72
2016    68
2015    77