SciSpace (formerly Typeset)
Author

Leonardo S. Mattos

Bio: Leonardo S. Mattos is an academic researcher from the Istituto Italiano di Tecnologia. The author has contributed to research on the topics of Laser and Computer science, has an h-index of 18, and has co-authored 142 publications receiving 1,409 citations. Previous affiliations of Leonardo S. Mattos include Carnegie Mellon University and North Carolina State University.


Papers
Journal ArticleDOI
TL;DR: No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus, the primary goal of this review was to provide an up-to-date source of information on the state of the art of vessel segmentation algorithms, so that the most suitable method can be chosen for each specific task.

378 citations

Journal ArticleDOI
TL;DR: A novel computerized surgical system for improved usability, intuitiveness, accuracy, and controllability in robot‐assisted laser phonomicrosurgery is introduced.
Abstract: Objectives/Hypothesis: To introduce a novel computerized surgical system for improved usability, intuitiveness, accuracy, and controllability in robot-assisted laser phonomicrosurgery. Study Design: Pilot technology assessment. Methods: The novel system was developed around a newly designed motorized laser micromanipulator, a touch-screen display, and a graphics stylus. The system allows control of a CO2 laser through interaction between the stylus and the live video of the surgical area, giving the stylus a direct effect on the surgical site. The surgical enhancements afforded by this system were established through a pilot technology assessment using randomized trials comparing its performance with a state-of-the-art laser microsurgery system. Resident surgeons and medical students served as subjects, performing sets of trajectory-following exercises. Image processing-based techniques were used for objective performance assessment, and a System Usability Scale-based questionnaire was used for the qualitative assessment. Results: The computerized interface demonstrated superiority in usability, accuracy, and controllability over the state-of-the-art system. The ease of use and of learning experienced by the subjects was demonstrated by the usability scores assigned to the two compared interfaces: computerized interface = 83.96% versus state of the art = 68.02%. The objective analysis showed a significant enhancement in accuracy and controllability: computerized interface = 90.02% versus state of the art = 75.59%. Conclusions: The novel system significantly enhances the accuracy, usability, and controllability of laser phonomicrosurgery, and its design provides an opportunity to improve the ergonomics and safety of current surgical setups. Level of Evidence: N/A. Laryngoscope, 124:1887–1894, 2014.

62 citations
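
The core of the interface described above is a mapping from stylus input on the live video to motion commands for the motorized laser micromanipulator. The paper does not include implementation code; the following Python sketch only illustrates the idea under the assumption of an affine calibration between image coordinates and a hypothetical two-axis (pan/tilt) command space, with all names and values invented for the example.

# Hypothetical sketch (not the authors' implementation): map stylus touches on the
# live video frame to motorized micromanipulator commands via an affine calibration.
import numpy as np

def fit_affine(image_pts, motor_pts):
    """Least-squares affine map from image (pixel) coordinates to motor (pan/tilt) commands.
    image_pts, motor_pts: (N, 2) arrays of corresponding calibration points."""
    n = len(image_pts)
    A = np.hstack([image_pts, np.ones((n, 1))])        # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, motor_pts, rcond=None)  # 3x2 affine matrix
    return M

def stylus_to_motor(stylus_xy, M):
    """Convert a stylus touch point on the video frame into a micromanipulator command."""
    x, y = stylus_xy
    return np.array([x, y, 1.0]) @ M

# Example calibration: four touch points whose resulting laser-spot positions were observed.
img = np.array([[100, 80], [500, 80], [500, 400], [100, 400]], float)
mot = np.array([[-1.0, 1.0], [1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]], float)
M = fit_affine(img, mot)
print(stylus_to_motor((300, 240), M))   # command that steers the laser toward the touched point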

Journal ArticleDOI
TL;DR: The use of texture-based machine-learning algorithms for early stage cancerous laryngeal tissue classification is investigated and the results are a promising step toward a helpful endoscope-integrated processing system to support early stage diagnosis.
Abstract: Early-stage diagnosis of laryngeal squamous cell carcinoma (SCC) is of primary importance for lowering patient mortality or post-treatment morbidity. Despite the challenges in diagnosis reported in the clinical literature, few efforts have been invested in computer-assisted diagnosis. The objective of this paper is to investigate the use of texture-based machine-learning algorithms for early-stage cancerous laryngeal tissue classification. To estimate the classification reliability, a measure of confidence is also exploited. From the endoscopic videos of 33 patients affected by SCC, a well-balanced dataset of 1320 patches, corresponding to four laryngeal tissue classes, was extracted. With the best-performing feature, the achieved median classification recall was 93% (interquartile range: [Formula: see text]). When low-confidence patches were excluded, the median recall increased to 98% ([Formula: see text]), proving the high reliability of the proposed approach. This research represents an important advancement in state-of-the-art computer-assisted laryngeal diagnosis, and the results are a promising step toward a helpful endoscope-integrated processing system to support early-stage diagnosis.

54 citations
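
The abstract above describes patch-wise texture classification with a confidence measure used to discard unreliable predictions. The sketch below is not the paper's pipeline: the texture features are deliberately simplistic placeholders (the study uses richer descriptors), and the SVM classifier and the 0.6 confidence cut-off are assumptions chosen only to illustrate the reject-low-confidence idea.

# Illustrative sketch only: texture features per patch, a probabilistic classifier,
# and a confidence threshold below which no decision is made.
import numpy as np
from sklearn.svm import SVC

def texture_features(patch):
    """Very simple first-order texture statistics of a grayscale patch
    (placeholder for the richer texture descriptors used in the paper)."""
    g = patch.astype(float)
    grad_rows, grad_cols = np.gradient(g)
    return np.array([g.mean(), g.std(), np.abs(grad_rows).mean(), np.abs(grad_cols).mean()])

def classify_with_confidence(clf, patches, threshold=0.6):
    """Predict a tissue class per patch; mark predictions below the confidence cut-off as -1."""
    X = np.array([texture_features(p) for p in patches])
    proba = clf.predict_proba(X)      # per-class probabilities
    conf = proba.max(axis=1)          # confidence = probability of the top class
    labels = proba.argmax(axis=1)
    labels[conf < threshold] = -1     # -1 means "low confidence, abstain"
    return labels, conf

# Usage (train_patches/y_train: labelled training patches for the four tissue classes):
# clf = SVC(probability=True).fit(np.array([texture_features(p) for p in train_patches]), y_train)
# labels, conf = classify_with_confidence(clf, test_patches)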

Journal ArticleDOI
TL;DR: A set of categories to guide and compare at a glance different methodologies used by researchers to collect affect-related data for real-life EMSR-based applications and a visual tool called GARAFED to compare existing physiological datasets for EMSR in the wild are introduced.
Abstract: Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states in lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), the creation of EMSR systems for a large number of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and the exploitability of EMSR models for real-life applications. In this paper, we first present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, the advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare, at a glance, the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future research directions. We hope that this paper and GARAFED will be used as guidelines by researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.

52 citations

Journal ArticleDOI
TL;DR: This paper significantly advances the state of the art in automatic labeling of endoscopic videos by introducing a confidence metric and by being the first study to use MI data for in vivo laparoscopic tissue classification.
Abstract: Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process in order to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of the organs in the field of view of the camera poses a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability, and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of the class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: The results showed that the confidence measure had a significant influence on the classification accuracy, and that MI data are better suited for anatomical structure labeling than RGB data. Significance: This paper significantly advances the state of the art in automatic labeling of endoscopic videos by introducing a confidence metric and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data from our experiments will be released as the first in vivo MI dataset upon publication of this paper.

51 citations
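
The confidence measure above is described as being derived from the dispersion of the class probabilities assigned to each superpixel. A minimal sketch of that idea, assuming (this is not taken from the paper) a normalized-entropy dispersion measure and a simple majority vote over confident superpixels for image tagging:

# Hedged sketch of confidence from class-probability dispersion and confidence-filtered tagging.
import numpy as np

def confidence_from_probs(probs):
    """probs: (n_superpixels, n_classes) class probabilities.
    Returns a confidence in [0, 1]: 1 minus the normalized Shannon entropy,
    so peaked (low-dispersion) distributions get high confidence."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return 1.0 - entropy / np.log(p.shape[1])

def tag_image(probs, threshold=0.6):
    """Majority vote over the superpixels whose confidence exceeds the threshold."""
    conf = confidence_from_probs(probs)
    keep = conf >= threshold
    if not keep.any():
        return None                       # no confident superpixels: abstain from tagging
    votes = probs[keep].argmax(axis=1)
    return np.bincount(votes).argmax()    # most frequent confident class label

probs = np.array([[0.90, 0.05, 0.05], [0.40, 0.35, 0.25], [0.80, 0.10, 0.10]])
print(tag_image(probs))                   # -> 0: only the most peaked distribution contributes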


Cited by
Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and offer a starting point for people interested in experimenting with and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has received a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is only slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval and from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal ArticleDOI
TL;DR: This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning (RL), and outlines a few important applications of UQ methods.
Abstract: Uncertainty quantification (UQ) plays a pivotal role in the reduction of uncertainties during both optimization and decision-making processes. It can be applied to a variety of real-world problems in science and engineering. Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature. In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image processing (e.g., image restoration), medical image analysis (e.g., medical image classification and segmentation), natural language processing (e.g., text classification, social media texts, and recidivism risk scoring), bioinformatics, etc. This study reviews recent advances in UQ methods used in deep learning. Moreover, we investigate the application of these methods in reinforcement learning (RL). We then outline a few important applications of UQ methods. Finally, we briefly highlight the fundamental research challenges faced by UQ methods and discuss future research directions in this field.

809 citations
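
As a concrete toy illustration of one of the two UQ families named above (ensemble learning), the sketch below fits a bootstrap ensemble of simple regressors and reads predictive uncertainty from the spread of their outputs. The data, model class, and ensemble size are arbitrary choices; deep ensembles and Bayesian approximations such as Monte Carlo dropout apply the same principle to neural networks.

# Toy example: bootstrap ensemble as a simple uncertainty quantification scheme.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)   # noisy toy data

def bootstrap_ensemble(x, y, n_models=20, degree=3):
    """Fit polynomial regressors on bootstrap resamples of the training data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), len(x))
        models.append(np.polyfit(x[idx], y[idx], degree))
    return models

def predict_with_uncertainty(models, x_new):
    """Ensemble mean = prediction; ensemble standard deviation = uncertainty estimate."""
    preds = np.array([np.polyval(m, x_new) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

models = bootstrap_ensemble(x, y)
mean, std = predict_with_uncertainty(models, np.array([0.5, 1.5]))
print(mean, std)    # the spread is much larger at x = 1.5, outside the training range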

Patent
31 Aug 2012
TL;DR: In this article, a tracking device is attached to the hand-held portion for tracking the instrument and a control system is used to keep the working portion within or outside of a boundary.
Abstract: An instrument for treating tissue during a medical procedure includes a hand-held portion and a working portion. The hand-held portion is manually supported and moved by a user and the working portion is movably coupled to the hand-held portion. A tracking device is attached to the hand-held portion for tracking the instrument. The tracking device is in communication with a control system, which is used to keep the working portion within or outside of a boundary. A plurality of actuators are operatively coupled to the working portion. The control system instructs the actuators to move the working portion relative to the hand-held portion during the medical procedure in order to maintain a desired relationship between the working portion and the boundary.

597 citations
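
The patent above describes a control system that keeps a tracked working portion inside (or outside) a boundary. The snippet below is only a hypothetical illustration of the geometric core of such a scheme, assuming a simple spherical boundary and ignoring tracking noise, kinematics, and actuator limits.

# Hypothetical illustration of boundary keeping for a tracked instrument tip.
import numpy as np

def boundary_correction(tip_pos, center, radius):
    """Return the displacement that would project tip_pos back onto a spherical boundary.
    A zero vector means the tip is already inside and no corrective actuation is needed."""
    offset = np.asarray(tip_pos, float) - np.asarray(center, float)
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return np.zeros(3)
    # Move the tip back along the radial direction onto the boundary surface.
    return (radius - dist) * offset / dist

print(boundary_correction([0.0, 0.0, 12.0], [0.0, 0.0, 0.0], 10.0))   # -> [0. 0. -2.]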

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has received a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is only slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval and from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

590 citations