Bio: Adrian Kucharski is an academic researcher from Lodz University of Technology. The author has contributed to research on the topics of segmentation and image segmentation. The author has an h-index of 1, having co-authored 3 publications that have received 2 citations.
TL;DR: In this paper, the watershed algorithm was used for marker-driven segmentation of corneal endothelial cells, together with an encoder-decoder convolutional neural network, trained in a sliding-window setup, that predicts the probability of cell centers (markers) and cell borders.
Abstract: Quantitative information about the morphometry of corneal endothelium cells is vital for assessing cornea pathologies. Nevertheless, everyday clinical routine is dominated by qualitative assessment based on visual inspection of microscopy images. Although several systems exist for automatic segmentation of corneal endothelial cells, they exhibit certain limitations. The main one is sensitivity to low contrast and uneven illumination, resulting in over-segmentation. Consequently, image segmentation results often require manual editing of missing or false cell edges. Therefore, this paper further investigates the problem of corneal endothelium cell segmentation. A fully automatic pipeline is proposed that incorporates the watershed algorithm for marker-driven segmentation of corneal endothelial cells and an encoder-decoder convolutional neural network, trained in a sliding-window setup, to predict the probability of cell centers (markers) and cell borders. The predicted markers are used for watershed segmentation of the edge probability maps output by the neural network. The proposed method's performance is analyzed on a heterogeneous dataset comprising four publicly available corneal endothelium image datasets, and three convolutional neural network models (i.e., U-Net, SegNet, and W-Net) incorporated in the proposed pipeline are examined. The results of the proposed pipeline are analyzed and compared to a state-of-the-art competitor, and they are promising: regardless of the convolutional neural network model incorporated into the proposed pipeline, it notably outperforms the competitor, achieving a cell detection accuracy of 97.72% compared to 87.38%. The advantage of the introduced method is also apparent for cell size, the Dice coefficient, and the Modified Hausdorff distance.
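The final step of the pipeline summarized above can be sketched with scikit-image's marker-controlled watershed. This is an illustrative reconstruction, not the authors' code: the names `edge_prob` and `marker_prob` and the 0.5 marker threshold are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_cells(edge_prob, marker_prob, marker_thresh=0.5):
    # Label each connected blob of high cell-center probability as one marker.
    markers, _ = ndi.label(marker_prob > marker_thresh)
    # Flood the edge-probability "relief" from the markers; basins meet at the
    # ridges, i.e. at the predicted cell borders.
    return watershed(edge_prob, markers)

# Toy example: two "cells" separated by a high-probability edge column.
edge_prob = np.zeros((5, 5))
edge_prob[:, 2] = 1.0
marker_prob = np.zeros((5, 5))
marker_prob[2, 0] = marker_prob[2, 4] = 0.9
labels = segment_cells(edge_prob, marker_prob)
```

In a real run, `edge_prob` and `marker_prob` would be the two probability maps produced by the sliding-window CNN over a microscopy image.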
TL;DR: This work proposes using accurate Dense U-Net liver segmentation and conducting a comparison between 3D U-Net models inside the obtained volumes, and shows that the most accurate setup is the full 3D process, providing the highest Dice score for most of the considered models.
Abstract: Accurate liver vessel segmentation is of crucial importance for the clinical diagnosis and treatment of many hepatic diseases. Recent state-of-the-art methods for liver vessel reconstruction mostly utilize deep learning methods, namely the U-Net model and its variants. However, to the best of our knowledge, no comparative evaluation has been proposed to compare these approaches on the liver vessel segmentation task. Moreover, most research works do not consider liver volume segmentation as a preprocessing step that keeps only the inner hepatic vessels, for instance for the Couinaud representation. For these reasons, in this work, we propose using accurate Dense U-Net liver segmentation and conducting a comparison between 3D U-Net models inside the obtained volumes. More precisely, 3D U-Net, Dense U-Net, and MultiRes U-Net are pitted against each other in the vessel segmentation task on the IRCAD dataset. For each model, three alternative setups that adapt the selected CNN architectures to volumetric data are tested: full 3D, slab-based, and box-based. The results showed that the most accurate setup is the full 3D process, providing the highest Dice score for most of the considered models. However, concerning the particular models, the slab-based MultiRes U-Net provided the best score. With our accurate vessel segmentations, several medical applications can be investigated, such as automatic and personalized Couinaud zoning of the liver.
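The slab-based setup mentioned above can be illustrated with a small sketch that cuts a volume into fixed-depth slabs along the axial direction for the network to process one at a time; the slab depth and stride below are assumptions, not values from the paper.

```python
import numpy as np

def make_slabs(volume, depth=8, stride=4):
    """Yield (start_index, slab) pairs of shape (depth, H, W) covering the volume."""
    z = volume.shape[0]
    # Slide a fixed-depth window along the axial (z) axis with the given stride.
    for start in range(0, max(z - depth, 0) + 1, stride):
        yield start, volume[start:start + depth]

# Toy CT-like volume: 20 axial slices of 64x64 pixels.
volume = np.zeros((20, 64, 64), dtype=np.float32)
slabs = list(make_slabs(volume))
```

With overlapping slabs, per-slab predictions are typically averaged in the overlap regions before thresholding; the box-based setup works analogously but tiles along all three axes.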
01 Jan 2022
TL;DR: The aim of the review is to provide information and advice that help practitioners select the appropriate version of watershed for their problem, and to forecast future directions of software development for 3D image segmentation by watershed.
Abstract: Watershed is a widely used image segmentation algorithm. Most researchers understand just the idea of this method: a grayscale image is considered as a topographic relief, which is flooded from initial basins. However, they are frequently unaware of the options of the algorithm and the peculiarities of its implementations. There are many watershed implementations in software packages and products. Even when these packages are based on the identical algorithm (watershed by flooding), their outcomes, processing speed, and memory consumption vary greatly. In particular, the difference among various implementations is noticeable for huge volumetric images, for instance tomographic 3D images, for which the low performance and high memory requirements of watershed might be bottlenecks. In our review, we discuss the peculiarities of algorithms with and without waterline generation, the impact of connectivity type and relief quantization level on the result, approaches to parallelization, and other method options. We present detailed benchmarking of seven open-source and three commercial software implementations of marker-controlled watershed for semantic or instance segmentation, comparing those software packages on one synthetic and two natural volumetric images. The aim of the review is to provide information and advice that help practitioners select the appropriate version of watershed for their problem. In addition, we forecast future directions of software development for 3D image segmentation by watershed.
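The distinction between watershed with and without waterline generation, discussed above, can be demonstrated with scikit-image, whose marker-controlled watershed exposes it via the `watershed_line` flag: with it enabled, a one-pixel, 0-labeled ridge is kept between adjacent basins.

```python
import numpy as np
from skimage.segmentation import watershed

# A relief with a high ridge down the middle column and a marker on each side.
relief = np.zeros((7, 7))
relief[:, 3] = 5.0
markers = np.zeros((7, 7), dtype=int)
markers[3, 0] = 1
markers[3, 6] = 2

no_line = watershed(relief, markers)                         # every pixel gets a basin label
with_line = watershed(relief, markers, watershed_line=True)  # the separating line stays 0
```

Which variant is appropriate depends on the downstream task: instance counting usually wants every voxel assigned, while boundary analysis needs the explicit waterline.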
TL;DR: In this paper, the authors presented a new deep learning-based approach that automatically detects brain tumors using Magnetic Resonance (MR) images, employing convolutional and fully connected layers of a new residual-CNN model trained from scratch.
Abstract: Brain tumors are among the most dangerous diseases in the world. A brain tumor destroys healthy tissues in the brain and multiplies abnormally, causing an increase in the internal pressure in the skull. If this condition is not diagnosed early, it can lead to death. Magnetic Resonance Imaging (MRI) is a diagnostic method frequently and successfully used for soft tissues. This study presented a new deep learning-based approach that automatically detects brain tumors using Magnetic Resonance (MR) images. Convolutional and fully connected layers of a new Residual-CNN (R-CNN) model trained from scratch were used to extract deep features from MR images. The representation power of the deep feature set was increased with the features extracted from all convolutional layers. Among the extracted deep features, the 100 most discriminative features were selected with a new multi-level feature selection algorithm named L1NSR. The best performance in the classification stage was obtained by using the SVM algorithm with a Gaussian kernel. The proposed approach was evaluated on two separate datasets: a 2-class (healthy and tumor) dataset and a 4-class (glioma tumor, meningioma tumor, pituitary tumor, and healthy) dataset. In addition, the proposed approach was compared with other state-of-the-art approaches on the respective datasets. The best classification accuracies for the 2-class and 4-class datasets were 98.8% and 96.6%, respectively.
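The classification stage described above (selecting the 100 most discriminative deep features, then a Gaussian-kernel SVM) can be sketched with scikit-learn. The L1NSR selector is the paper's own algorithm and is not implemented here; a univariate F-score selector stands in for it purely for illustration, on synthetic stand-in features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # stand-in for stacked deep CNN features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels driven by two features

# Keep the 100 highest-scoring features, then classify with an RBF (Gaussian) SVM.
clf = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
```

In the paper's setting, `X` would be the deep features pooled from all convolutional layers of the residual CNN rather than random data.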
TL;DR: In this article, a new deep learning method was proposed to estimate the corneal parameters (endothelial cell density, coefficient of variation, and hexagonality) from specular images.
Abstract: Corneal guttae, which are the abnormal growth of extracellular matrix in the corneal endothelium, are observed in specular images as black droplets that occlude the endothelial cells. To estimate the corneal parameters (endothelial cell density [ECD], coefficient of variation [CV], and hexagonality [HEX]), we propose a new deep learning method that includes a novel attention mechanism (named fNLA), which helps to infer the cell edges in the occluded areas. The approach first derives the cell edges, then infers the well-detected cells, and finally employs a postprocessing method to fix mistakes. This results in a binary segmentation from which the corneal parameters are estimated. We analyzed 1203 images (500 contained guttae) obtained with a Topcon SP-1P microscope. To generate the ground truth, we performed manual segmentation in all images. Several networks were evaluated (UNet, ResUNeXt, DenseUNets, UNet++, etc.) and we found that DenseUNets with fNLA provided the lowest error: a mean absolute error of 23.16 cells/mm² in ECD, 1.28% in CV, and 3.13% in HEX. Compared with Topcon's built-in software, our error was 3-6 times smaller. Overall, our approach handled notably well the cells affected by guttae, detecting cell edges partially occluded by small guttae and discarding large areas covered by extensive guttae.
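The morphometric parameters estimated above follow directly from a labeled cell segmentation. Below is a minimal sketch for ECD and CV (HEX additionally requires a cell-adjacency graph and is omitted); the function name and pixel size are assumptions for illustration.

```python
import numpy as np

def ecd_and_cv(labels, um_per_px=1.0):
    """labels: 2-D int array, 0 = border/background, 1..N = individual cells."""
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    areas_mm2 = counts * (um_per_px / 1000.0) ** 2    # pixel counts -> mm^2 per cell
    ecd = len(ids) / areas_mm2.sum()                  # cells per mm^2 of cell area
    cv = 100.0 * areas_mm2.std() / areas_mm2.mean()   # % variation of cell area
    return ecd, cv

# Toy field: four equally sized "cells", so CV is 0 by construction.
labels = np.repeat(np.arange(1, 5), 25).reshape(10, 10)
ecd, cv = ecd_and_cv(labels, um_per_px=10.0)
```

HEX would be computed as the percentage of cells with exactly six neighbors in the adjacency graph of the segmentation.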
TL;DR: This paper proposes a hybrid approach to Image Region Extraction that focuses on automated region proposal and segmentation techniques and analyzes popular techniques such as K-Means Clustering and Watershedding and their effectiveness when deployed in a hybrid environment to be applied to a highly variable dataset.
Abstract: With a wide range of applications, image segmentation is a complex and difficult preprocessing step that plays an important role in automatic visual systems; its accuracy impacts not only the segmentation results but also directly affects the effectiveness of follow-up tasks. Despite the many advances achieved in the last decades, image segmentation remains a challenging problem, particularly the segmentation of color images, due to the diverse inhomogeneities of color, texture, and shape present in the descriptive features of the images. In trademark graphic image segmentation, beyond these difficulties, we must also take into account the high noise and low resolution that are often present. Trademark graphic images can also be very heterogeneous with regard to the elements that make them up, which can be overlapping and under varying lighting conditions. Due to the immense variation encountered in corporate logos and trademark graphic images, it is often difficult to select a single method for extracting relevant image regions in a way that produces satisfactory results. Many of the hybrid approaches that integrate the Watershed and K-Means algorithms involve processing very high-quality and visually similar images, such as medical images, meaning that either approach can be tweaked to work on images that follow a certain pattern. Trademark images, in contrast, are totally different from each other and are usually fully colored. Our system addresses this difficulty: it is a generalized implementation designed to work in most scenarios through customizable parameters, completely unbiased toward any image type. In this paper, we propose a hybrid approach to image region extraction that focuses on automated region proposal and segmentation techniques. In particular, we analyze popular techniques such as K-Means clustering and watershedding and their effectiveness when deployed in a hybrid environment applied to a highly variable dataset.
The proposed system consists of a multi-stage algorithm that takes an RGB image as input and produces multiple outputs corresponding to the extracted regions. After preprocessing, a K-Means function with random initial centroids and a user-defined value of k is executed over the RGB image, generating a gray-scale segmented image to which a threshold method is applied to generate a binary mask containing the information necessary to generate a distance map. Then, the watershed function is performed over the distance map, using the markers defined by a Connected Component Analysis function that labels regions using 8-way pixel connectivity, ensuring that all regions are correctly found. Finally, individual objects are labeled for extraction through a contour method based on border following. The achieved results show adequate region extraction capabilities when processing graphical images from different datasets, where the system correctly distinguishes the most relevant visual elements of images with minimal tweaking.
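The multi-stage pipeline described above can be sketched end to end on a toy RGB image. The parameter values (k, the cluster-to-foreground rule) are illustrative stand-ins for the system's user-defined settings, and the final contour-based extraction step is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from sklearn.cluster import KMeans

def extract_regions(rgb, k=2):
    h, w, _ = rgb.shape
    # 1) K-Means over RGB pixels -> a gray-scale segmented image.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(rgb.reshape(-1, 3))
    segmented = km.labels_.reshape(h, w)
    # 2) Threshold to a binary mask (here: foreground = brighter cluster).
    fg = np.argmax(km.cluster_centers_.sum(axis=1))
    mask = segmented == fg
    # 3) Distance map + connected-component markers (8-way connectivity).
    dist = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(mask, structure=np.ones((3, 3)))
    # 4) Watershed over the inverted distance map, restricted to the mask.
    return watershed(-dist, markers, mask=mask)

# Toy input: two disconnected bright squares on a black background.
rgb = np.zeros((12, 12, 3), dtype=float)
rgb[2:5, 2:5] = 255.0
rgb[7:10, 7:10] = 255.0
labels = extract_regions(rgb)
```

Each nonzero label in the result corresponds to one extracted region, ready for the border-following contour step described above.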
TL;DR: Wang et al. used an attention-based deep convolutional neural network model (ADCNN-32-G) to segment a brain fMRI dataset and showed that it performs well in segmenting mass fMRI datasets.
Abstract: Functional magnetic resonance imaging (fMRI) is widely used for clinical examinations, diagnosis, and treatment. By segmenting fMRI images, large-scale medical image data can be processed more efficiently. Most deep learning (DL)-based segmentation typically uses some type of encoding-decoding model. In this study, affective computing (AC) was developed using a brain fMRI dataset generated from an emotion simulation experiment. The brain fMRI dataset was segmented using an attention model, a deep convolutional neural network-32 (DCNN-32) based on a Laplacian of Gaussian (LoG) filter, called ADCNN-32-G. Several indices are presented for the evaluation of the image segmentation. Compared to distance regularized level set evolution (DRLSE), single-seeded region growing, and a single SegNet fully convolutional network (FCN) model, the proposed ADCNN-32-G model performs well in segmenting mass fMRI datasets. The proposed method can be applied to the real-time monitoring of patients with depression and can effectively support mental health care. • Designed a brain emotional stimuli experiment. • Used fMRI to explore the emotional status of the brain. • Developed an attention-based deep neural network to segment a brain fMRI dataset. • Decreased the computational complexity of handling mass medical image datasets in emotion recognition research. • Suggests potential applications of the proposed methods in brain research science.
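The Laplacian of Gaussian filtering that underlies the ADCNN-32-G model above can be illustrated with SciPy's `gaussian_laplace`; the toy image below is only meant to show the filter's characteristic blob response, not the model itself.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# A single bright point blob: the LoG response is most negative at its center
# and near zero far away, which is what makes LoG useful as a blob detector.
img = np.zeros((21, 21))
img[10, 10] = 1.0
resp = gaussian_laplace(img, sigma=2.0)
```

In a LoG-based network front end, such filtered maps would feed the convolutional layers to emphasize blob-like activation regions in the fMRI volumes.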