
Showing papers by "Rituparna Chaki published in 2020"


Journal ArticleDOI
TL;DR: An efficient denoising framework for reducing the noise level of brain PET images based on the combination of multi-scale transform (wavelet and curvelet) and tree clustering non-local means (TNLM).
Abstract: Positron emission tomography (PET) image processing is very helpful in the diagnosis of dementia, particularly in its early stages. The most important challenges in PET image processing are noise removal and segmentation of regions of interest (ROIs). Although denoising and segmentation are performed independently, the performance of the denoising process significantly affects that of the segmentation process. Due to the low signal-to-noise ratio and low contrast, PET image denoising is a challenging task. Individual wavelet-, curvelet-, and non-local means (NLM)-based methods are not well suited to handling both isotropic (smooth details) and anisotropic (edges and curves) features, owing to their restricted abilities. To address these issues, the present work proposes an efficient denoising framework for reducing the noise level of brain PET images based on a combination of multi-scale transforms (wavelet and curvelet) and tree clustering non-local means (TNLM). The main objective of the proposed method is to extract the isotropic features from a noisy smooth PET image using TNLM. Curvelet-based denoising is then applied to the residual image to extract the anisotropic features such as edges and curves. Finally, the extracted anisotropic features are added back to the isotropic features to obtain an estimated denoised image. Simulated phantom and clinical PET datasets have been used to test the proposed work and to measure its performance in medical applications such as gray matter segmentation and precise tumor region identification, without any interaction with other structural images such as MRI or CT. The experimental results show that the proposed denoising method preserves edges better than existing wavelet, curvelet, wavelet-curvelet, non-local means (NLM) and deep learning methods.
Qualitatively, the proposed denoised PET images achieve a notable gain in contrast enhancement over other existing denoising methods.
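The two-stage structure of the framework (an isotropic smoother, then multi-scale denoising of the residual, then recombination) can be sketched in plain NumPy. This is a minimal sketch, not the paper's method: a box filter stands in for the tree clustering non-local means (TNLM) stage, and a one-level Haar transform with soft thresholding stands in for the curvelet stage; the phantom, noise level, and threshold are assumptions.

```python
import numpy as np

def box_smooth(img, k=3):
    """Isotropic smoothing -- a crude stand-in for the TNLM stage,
    illustrating its role of capturing the smooth (isotropic) part."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def haar_denoise(img, thresh):
    """One-level 2-D Haar transform with soft thresholding of the
    detail bands -- a simple multi-scale stand-in for the curvelet
    stage applied to the residual image (expects even dimensions)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def denoise(noisy, sigma):
    iso = box_smooth(noisy)          # isotropic (smooth) features
    residual = noisy - iso           # edges/curves plus leftover noise
    detail = haar_denoise(residual, thresh=3.0 * sigma)
    return iso + detail              # recombine, as in the framework
```

On a synthetic disk phantom with additive Gaussian noise, the recombined estimate has lower mean-squared error than the noisy input while retaining edge detail that pure smoothing would blur away.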

11 citations


Book ChapterDOI
01 Jan 2020
TL;DR: A patch-based automated segmentation of brain tumors is proposed using a deep convolutional neural network with small convolutional kernels and leaky rectified linear units (LReLU) as the activation function, and promising results are obtained with respect to the ground truth.
Abstract: Segmentation of brain tumors is a crucial task from a medical point of view, for example in surgery and treatment planning. By its nature, a tumor can appear in any region of the brain with varying size and shape, which makes the segmentation task more difficult. In the present work, we propose a patch-based automated segmentation of brain tumors using a deep convolutional neural network with small convolutional kernels and leaky rectified linear units (LReLU) as the activation function. The present work efficiently segments multi-modality magnetic resonance (MR) brain images into normal and tumor tissues. The small convolutional kernels allow more layers to form a deeper architecture while requiring fewer kernel weights per layer during training. The leaky rectified linear unit (LReLU) avoids the vanishing-gradient problem of the rectified linear unit (ReLU) for negative inputs and increases the speed of the training process. The present work can deal with both high- and low-grade tumor regions on MR images. The BraTS 2015 dataset has been used as a standard benchmark. The presented network takes T1, T2, T1c, and FLAIR MR images from each subject as inputs and produces the segmented labels as outputs. It is experimentally observed that the present work obtains more promising results than the existing algorithms with respect to the ground truth.
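Two design points from the abstract can be checked numerically: LReLU keeps a small non-zero gradient for negative inputs (whereas the ReLU gradient vanishes there, which can leave units permanently inactive), and stacking small kernels gives the same receptive field as one large kernel with fewer weights. A minimal NumPy sketch follows; the slope alpha = 0.01 is an assumed value, not one stated in the paper.

```python
import numpy as np

def lrelu(x, alpha=0.01):
    # leaky ReLU: passes positives unchanged, scales negatives by alpha
    return np.where(x > 0, x, alpha * x)

def lrelu_grad(x, alpha=0.01):
    # gradient is alpha (not zero) for negative inputs,
    # so inactive units can still receive updates during training
    return np.where(x > 0, 1.0, alpha)

def relu_grad(x):
    # plain ReLU gradient: exactly zero for negative inputs
    return np.where(x > 0, 1.0, 0.0)

def receptive_field(kernel_sizes):
    # receptive field of a stack of square convolutions (stride 1)
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

def weights(kernel_sizes):
    # kernel weights per stack, ignoring channels and biases
    return sum(k * k for k in kernel_sizes)
```

For example, two stacked 3x3 kernels cover the same 5x5 neighbourhood as a single 5x5 kernel (`receptive_field([3, 3]) == receptive_field([5]) == 5`) while using 18 weights instead of 25, which is the economy the abstract attributes to small kernels.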

1 citation


Book ChapterDOI
TL;DR: In this paper, the authors proposed a framework for efficient distribution of bandwidth over a region based on depth of field analysis and population statistics analysis, which aims to reduce packet loss and distortion effects due to scattering and refraction.
Abstract: IoT is made up of heterogeneous networks that transport a huge volume of data packets over the Internet. Improper utilization of bandwidth or insufficient bandwidth allocation leads to faults such as packet loss, difficulty in setting up a routing path between source and destination, reduced data communication speed, etc. One of the vital causes of insufficient bandwidth is nonuniform growth in the number of Internet users in a specific region. In this paper, we propose a framework for efficient distribution of bandwidth over a region based on depth-of-field analysis and population statistics analysis. We propose to use existing Google Earth Pro APIs over satellite images to estimate the possible number of users in a particular area and to allocate bandwidth accordingly. The proposed framework aims to reduce packet loss and the distortion effects due to scattering and refraction.
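The allocation step can be illustrated with a minimal sketch: given per-region user counts (here hypothetical values standing in for estimates derived from satellite imagery), bandwidth is distributed in proportion to estimated demand. The paper does not state its allocation rule, so proportional sharing is an assumption made purely for illustration.

```python
def allocate_bandwidth(total_mbps, estimated_users):
    """Split total_mbps across regions in proportion to each
    region's estimated user count."""
    total_users = sum(estimated_users.values())
    if total_users == 0:
        return {region: 0.0 for region in estimated_users}
    return {region: total_mbps * n / total_users
            for region, n in estimated_users.items()}

# hypothetical estimates (e.g. inferred from Google Earth Pro imagery)
users = {"region_a": 1200, "region_b": 300, "region_c": 500}
shares = allocate_bandwidth(1000.0, users)
```

A region with four times the estimated users receives four times the bandwidth, and the shares always sum to the total, so no capacity is stranded.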