Author

Tao Tan

Bio: Tao Tan is an academic researcher from Eindhoven University of Technology. The author has contributed to research in topics including Computer science and Medicine, has an h-index of 13, and has co-authored 42 publications receiving 596 citations.


Papers
Journal ArticleDOI
TL;DR: The proposed MCDnCNN model robustly denoises three-dimensional MR images corrupted by Rician noise, showing the most robust denoising performance across all three datasets.
Abstract: To test whether the proposed deep learning based denoising method, a denoising convolutional neural network (DnCNN) with residual learning and a multi-channel strategy, can robustly denoise three-dimensional MR images with Rician noise. A multi-channel DnCNN (MCDnCNN) method with two training strategies was developed to denoise MR images with and without a specific noise level, respectively. To evaluate our method, three datasets from two public data sources, the IXI dataset and BrainWeb, including T1-weighted MR images acquired at 1.5 and 3 T as well as MR images simulated with a widely used MR simulator, were randomly selected and artificially corrupted with noise levels ranging from 1 to 15%. For comparison, four other state-of-the-art denoising methods were also tested on these datasets. In terms of the highest peak signal-to-noise ratio and global structure similarity index, our proposed MCDnCNN model for a specific noise level showed the most robust denoising performance in all three datasets. In addition, our general noise-applicable model also performed better than the other four methods in two datasets. Furthermore, our trained model showed good general applicability. Our proposed MCDnCNN model has been demonstrated to robustly denoise three-dimensional MR images with Rician noise.
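As an illustration of the architecture described above, a DnCNN-style network with residual learning and a multi-channel (adjacent-slice) input, here is a minimal PyTorch sketch. The channel width, the number of input slices, the use of batch normalization, and the ten-layer depth are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal DnCNN-style residual denoiser sketch (PyTorch).
# Assumptions: adjacent 2D slices stacked as input channels ("multi-channel"
# strategy), 64 feature maps, 10 layers; hyperparameters are illustrative.
import torch
import torch.nn as nn

class MCDnCNNSketch(nn.Module):
    def __init__(self, in_channels=5, features=64, depth=10):
        super().__init__()
        layers = [nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts the noise in the central slice,
        # which is subtracted from the noisy input to get the denoised slice.
        mid = x.shape[1] // 2
        center = x[:, mid : mid + 1]
        return center - self.body(x)

# Usage: a batch of 5 adjacent noisy slices of size 128x128.
noisy = torch.randn(4, 5, 128, 128)
denoised = MCDnCNNSketch()(noisy)   # shape: (4, 1, 128, 128)
```

Predicting the residual (the noise) rather than the clean image is the key trait of the DnCNN family; it keeps the mapping the network has to learn close to zero and tends to stabilize training.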

153 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed automatic seizure detection method, based on MRBF networks and Fisher vector encoding, is a powerful tool for detecting epileptic seizures.
Abstract: Detecting epileptic seizures in electroencephalography (EEG) signals is a challenging task due to the nonstationary nature of brain activity. Currently, epilepsy is mainly detected by clinicians through visual inspection of EEG recordings, which is time-consuming and prone to bias. This paper presents a novel automatic seizure detection method based on multiscale radial basis function (MRBF) networks and Fisher vector (FV) encoding. Specifically, the MRBF networks are first used to obtain high-resolution time-frequency (TF) images for feature extraction, where a modified particle swarm optimization (MPSO) method and an orthogonal least squares (OLS) algorithm are implemented to determine optimal scales and select a parsimonious model structure. Gray-level co-occurrence matrix (GLCM) texture descriptors and the FV, which yield high-dimensional vectors, are then adopted to obtain discriminative features from five frequency subbands of clinical interest in the TF images. Furthermore, the dimensionality of the original feature space is reduced with a t-test before the compact features are fed into an SVM classifier for seizure detection. Finally, the classification performance of the proposed method is evaluated on two widely used EEG databases and is observed to provide good classification accuracy on both datasets. Experimental results demonstrate that our proposed method is a powerful tool for detecting epileptic seizures.
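To make the pipeline above concrete, the sketch below walks through a heavily simplified version of the feature-extraction and classification stages: a spectrogram stands in for the MRBF time-frequency image, the Fisher-vector stage is omitted, and scikit-learn's f_classif (equivalent to a two-sample t-test for two classes) plays the role of the t-test selection. All of these substitutions are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch: GLCM texture features from a (surrogate) TF image,
# t-test style feature ranking, and an SVM classifier.
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def tf_texture_features(eeg_segment, fs=256):
    """GLCM texture descriptors computed on a spectrogram used as a TF image."""
    _, _, tf_image = spectrogram(eeg_segment, fs=fs, nperseg=128)
    img = np.uint8(255 * tf_image / (tf_image.max() + 1e-12))   # quantize to 8-bit
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: one feature vector per EEG segment, y: seizure / non-seizure labels (toy data).
rng = np.random.default_rng(0)
segments = rng.standard_normal((40, 1024))
y = rng.integers(0, 2, size=40)
X = np.vstack([tf_texture_features(s) for s in segments])

# f_classif reduces to a two-sample t-test for two classes, mimicking the paper's
# t-test based dimensionality reduction before the SVM.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=4), SVC(kernel="rbf"))
clf.fit(X, y)
```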

97 citations

Posted Content
TL;DR: In this article, a ten-layer convolutional neural network combined with residual learning and a multi-channel strategy was proposed to denoise Rician noise in MR images.
Abstract: The denoising of magnetic resonance (MR) images is a task of great importance for improving acquired image quality. Many methods have been proposed in the literature to recover noise-free images with good performance. However, state-of-the-art denoising methods all need a time-consuming optimization process, and their performance strongly depends on the estimated noise level parameter. In this manuscript we propose denoising MRI Rician noise with a convolutional neural network. The advantage of the proposed methodology is that the learned model can be used directly in the denoising process without optimization and even without the noise level parameter. Specifically, a ten-layer convolutional neural network combined with residual learning and a multi-channel strategy is proposed. Two training strategies, training on a specific noise level and training on a general level, were used to demonstrate the capability of our method. Experimental results on synthetic and real 3D MR data demonstrate that our proposed network achieves superior performance compared with other methods in terms of both peak signal-to-noise ratio and global structure similarity index. Without the noise level parameter, our general noise-applicable model is also better than the other compared methods on two datasets. Furthermore, our trained model shows good general applicability.
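The experiments above add Rician noise at levels from 1 to 15%. A common way to simulate Rician noise on MR magnitude images is to add independent Gaussian noise to the real and imaginary channels and take the magnitude; the sketch below uses that construction. Scaling sigma as a percentage of the maximum image intensity is an assumption about the convention used here.

```python
# Sketch of Rician noise simulation for MR magnitude images: add i.i.d. Gaussian
# noise to the real and imaginary channels and take the magnitude.
# Assumption: "noise level" means sigma as a percentage of the maximum intensity.
import numpy as np

def add_rician_noise(image, level_percent, rng=None):
    rng = rng or np.random.default_rng()
    sigma = level_percent / 100.0 * image.max()
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real**2 + imag**2)

clean = np.random.default_rng(0).random((64, 64, 64))   # toy 3D volume
noisy = add_rician_noise(clean, level_percent=9)
```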

66 citations

Journal ArticleDOI
TL;DR: With the use of this computer-aided detection (CAD) system, the reading time was substantially reduced from one hour to 13 minutes per patient, because the CAD system detects on average only 25.9 false positives per TBI patient, i.e., 0.29 false positives per definite CMB finding.

63 citations

Journal ArticleDOI
TL;DR: This paper proposes a computer-aided diagnosis (CAD) system for lung diseases, including cancers and tuberculosis (TB), based on transfer learning (TL), and introduces a novel TL method on top of DenseNet.
Abstract: Bronchoscopy inspection, as a follow-up procedure to radiological imaging, plays a key role in diagnosis and treatment planning for lung disease patients. When performing bronchoscopy, doctors have to decide immediately whether to perform a biopsy. Because biopsies may cause uncontrollable and life-threatening bleeding of the lung tissue, doctors need to be selective with biopsies. In this paper, to help doctors be more selective with biopsies and to provide a second opinion on diagnosis, we propose a computer-aided diagnosis (CAD) system for lung diseases, including cancers and tuberculosis (TB). Based on transfer learning (TL), we propose a novel TL method on top of DenseNet: sequential fine-tuning (SFT). Compared with traditional fine-tuning (FT) methods, our method achieves the best performance. On a dataset of 81 recruited normal cases, 76 TB cases, and 277 lung cancer cases, SFT provided an overall accuracy of 82%, while other traditional TL methods achieved accuracies from 70% to 74%. The detection accuracy of SFT for cancer, TB, and normal cases is 87%, 54%, and 91%, respectively. This indicates that the CAD system has the potential to improve lung disease diagnosis accuracy in bronchoscopy and may help doctors be more selective with biopsies.
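Sequential fine-tuning as described above starts from a pretrained DenseNet and adapts it in stages rather than all at once. The sketch below illustrates one plausible staging on torchvision's DenseNet-121: train a new classifier head first, then unfreeze dense blocks group by group. The specific unfreezing order and schedule are assumptions; the paper's exact SFT procedure may differ.

```python
# Hedged sketch of staged ("sequential") fine-tuning on a pretrained DenseNet-121.
# The unfreezing order below is illustrative, not the paper's published schedule.
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 3)  # normal / TB / cancer

for p in model.features.parameters():      # stage 0: freeze the whole backbone
    p.requires_grad = False

# Later blocks first, earlier blocks in later stages (illustrative order).
stages = [["denseblock4", "norm5"], ["denseblock3", "transition3"],
          ["denseblock2", "transition2"], ["denseblock1", "transition1"]]

def unfreeze_stage(model, stage_idx):
    """Make one more group of DenseNet blocks trainable before the next training stage."""
    for name in stages[stage_idx]:
        for p in getattr(model.features, name).parameters():
            p.requires_grad = True

# Typical use: train the head, then call unfreeze_stage(model, 0) and retrain,
# then unfreeze_stage(model, 1) and retrain, and so on.
unfreeze_stage(model, 0)
```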

45 citations


Cited by
Journal Article

849 citations

Journal ArticleDOI
TL;DR: This open-source population-based optimization technique called Hunger Games Search is designed to be a standard tool for optimization in different areas of artificial intelligence and machine learning with several new exploratory and exploitative features, high performance, and high optimization capacity.
Abstract: Many population-based methods have been published in recent years. Despite their popularity, most of them have uncertain, immature performance, partial verification, similar overused metaphors, similar immature exploration and exploitation components and operations, and an insecure trade-off between exploration and exploitation in most new real-world cases. Therefore, users need to extensively modify and adjust their operations based on main evolutionary methods to reach faster convergence, a more stable balance, and high-quality results. To move the optimization community one step ahead toward more focus on performance rather than a change of metaphor, a general-purpose population-based optimization technique called Hunger Games Search (HGS) is proposed in this research, with a simple structure, special stability features, and very competitive performance, to solve both constrained and unconstrained problems more effectively. The proposed HGS is designed according to the hunger-driven activities and behavioural choices of animals. This dynamic, fitness-wise search method follows a simple concept of "hunger" as the most crucial homeostatic motivation and reason for behaviours, decisions, and actions in the life of all animals, making the optimization process more understandable and consistent for new users and decision-makers. Hunger Games Search incorporates the concept of hunger into the search process; in other words, an adaptive weight based on the concept of hunger is designed and employed to simulate the effect of hunger on each search step. It follows computationally logical rules (games) utilized by almost all animals; these rival activities and games are adaptive and evolutionary, securing higher chances of survival and food acquisition. The method's main features are its dynamic nature, simple structure, and high performance in terms of convergence and acceptable solution quality, proving more efficient than current optimization methods. The effectiveness of HGS was verified by comparing HGS with a comprehensive set of popular and advanced algorithms on 23 well-known optimization functions and the IEEE CEC 2014 benchmark test suite. HGS was also applied to several engineering problems to demonstrate its applicability. The results validate the effectiveness of the proposed optimizer compared to popular essential optimizers, several advanced variants of existing methods, and several CEC winners and powerful differential evolution (DE)-based methods, abbreviated as LSHADE, SPS_L_SHADE_EIG, LSHADE_cnEpSi, SHADE, SADE, MPEDE, and JDE, in handling many single-objective problems. We designed this open-source population-based method to be a standard tool for optimization in different areas of artificial intelligence and machine learning, with several new exploratory and exploitative features, high performance, and high optimization capacity. The method is flexible and scalable and can be extended to fit more forms of optimization problems in both structural aspects and application sides. This paper's source codes, supplementary files, LaTeX and office source files, sources of plots, a brief version and pseudocode, and an open-source software toolkit for solving optimization problems with Hunger Games Search, together with an online web service for questions, feedback, suggestions, and ideas on the HGS algorithm, are available at https://aliasgharheidari.com/HGS.html .
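The core mechanism described above is an adaptive, hunger-based weight that modulates each search step. The toy sketch below illustrates only that idea, a population search in which individuals farther from the current best (the "hungrier" ones) take larger steps toward it; it is not the published HGS update rules, and the step and noise formulas are invented for illustration.

```python
# Toy population search with a hunger-style adaptive weight.
# NOT the published HGS equations; purely illustrative.
import numpy as np

def toy_hunger_search(f, dim=5, pop=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds
    X = rng.uniform(low, high, size=(pop, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        best = X[np.argmin(fit)]
        # "Hunger" weight in [0, 1]: 0 for the current best, 1 for the worst individual.
        hunger = (fit - fit.min()) / (np.ptp(fit) + 1e-12)
        noise = 0.1 * (1.0 - t / iters)          # exploration that decays over time
        X = X + hunger[:, None] * (best - X) + noise * rng.standard_normal(X.shape)
        X = np.clip(X, low, high)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = toy_hunger_search(sphere)       # best_f should end up close to 0
```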

529 citations

Journal ArticleDOI
TL;DR: A comparative study of deep learning techniques for image denoising, classifying deep convolutional neural networks (CNNs) for additive white noisy images, deep CNNs for real noisy images, deep CNNs for blind denoising, and deep networks for hybrid noisy images.

518 citations

Journal ArticleDOI
TL;DR: This narrative literature review examines the numerous developments and breakthroughs in the U-net architecture, provides observations on recent trends, and discusses the many innovations in deep learning and how these tools facilitate U-net.
Abstract: U-net is an image segmentation architecture developed primarily for medical image analysis. Its design gives U-net high utility within the medical imaging community and has resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. Given that U-net's potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.
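For readers unfamiliar with the architecture being reviewed, below is a minimal two-level U-net sketch in PyTorch showing the encoder-decoder structure with skip connections; the channel widths and depth are illustrative and not taken from any particular variant discussed in the review.

```python
# Minimal two-level U-Net sketch (PyTorch); widths and depth are illustrative.
import torch
import torch.nn as nn

def double_conv(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)          # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)           # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 64, 64))   # shape: (1, 2, 64, 64)
```

The concatenation of encoder feature maps into the decoder (the skip connections) is what lets the network recover fine spatial detail lost during downsampling, which is the property the review highlights as driving U-net's adoption in medical imaging.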

425 citations