scispace - formally typeset
Author

S. M. Reza Soroushmehr

Bio: S. M. Reza Soroushmehr is an academic researcher from the University of Michigan. The author has contributed to research topics including convolutional neural networks and segmentation, has an h-index of 7, and has co-authored 13 publications receiving 528 citations.

Papers
Journal ArticleDOI
TL;DR: Recent research targeting the utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed, and potential areas of research within this field that could provide meaningful impact on healthcare delivery are examined.
Abstract: The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

480 citations

Proceedings ArticleDOI
01 Jul 2018
TL;DR: The authors propose a polyp segmentation method based on a convolutional neural network, applying a novel image patch selection method in the training phase and effective post-processing on the probability map produced by the network.
Abstract: Colorectal cancer is one of the highest causes of cancer-related death, especially in men. Polyps are one of the main causes of colorectal cancer, and early diagnosis of polyps by colonoscopy could result in successful treatment. Diagnosis of polyps in colonoscopy videos is a challenging task due to variations in the size and shape of polyps. In this paper, we propose a polyp segmentation method based on a convolutional neural network. Two strategies enhance the performance of the method. First, we perform a novel image patch selection method in the training phase of the network. Second, in the test phase, we perform effective post-processing on the probability map produced by the network. Evaluation of the proposed method on the CVC-ColonDB database shows that it achieves more accurate results than previous colonoscopy video segmentation methods.
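The abstract does not spell out the patch selection rule used in training. As an illustration of the general idea of balancing scarce polyp pixels against abundant background, here is a class-balanced patch sampler; the function name, patch size, and sampling strategy are illustrative assumptions, not the paper's method:

```python
import numpy as np

def sample_balanced_patches(image, mask, patch_size=32, n_per_class=4, seed=None):
    """Sample an equal number of patches centered on polyp (1) and
    background (0) pixels. A generic class-balanced sampler, NOT the
    paper's exact selection rule, which the abstract does not detail."""
    rng = np.random.default_rng(seed)
    half = patch_size // 2
    patches, labels = [], []
    for label in (1, 0):
        ys, xs = np.where(mask == label)
        # keep centers far enough from the border to extract a full patch
        ok = (ys >= half) & (ys < image.shape[0] - half) & \
             (xs >= half) & (xs < image.shape[1] - half)
        ys, xs = ys[ok], xs[ok]
        idx = rng.choice(len(ys), size=n_per_class, replace=False)
        for y, x in zip(ys[idx], xs[idx]):
            patches.append(image[y - half:y + half, x - half:x + half])
            labels.append(label)
    return np.stack(patches), np.array(labels)

# toy example: a 128x128 image with a square "polyp" region in the mask
img = np.random.rand(128, 128)
msk = np.zeros((128, 128), dtype=int)
msk[40:90, 40:90] = 1
patches, labels = sample_balanced_patches(img, msk, patch_size=32, n_per_class=4)
```

A sampler like this feeds the network equal numbers of positive and negative examples per batch, which is one common way to handle the heavy class imbalance in polyp frames.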

144 citations

Proceedings ArticleDOI
01 Jul 2018
TL;DR: In this article, the authors propose an automated method for segmenting the left ventricle in cardiac MR images that automatically extracts the region of interest and then employs it as input to a fully convolutional network.
Abstract: Medical image analysis, especially segmenting a specific organ, has an important role in developing clinical decision support systems. In cardiac magnetic resonance (MR) imaging, segmenting the left and right ventricles helps physicians diagnose different heart abnormalities. There are challenges for this task, including the intensity and shape similarity between the left ventricle and other organs, inaccurate boundaries, and the presence of noise in most of the images. In this paper, we propose an automated method for segmenting the left ventricle in cardiac MR images. We first automatically extract the region of interest and then employ it as input to a fully convolutional network. We train the network accurately despite the small number of left ventricle pixels in comparison with the whole image. Thresholding on the output map of the fully convolutional network and selection of regions based on their roundness are performed in our proposed post-processing phase. The Dice score of our method reaches 87.24% when applying this algorithm to the York dataset of heart images.
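The post-processing step described in the abstract (threshold the network's output map, then select regions by roundness) can be sketched generically. The roundness measure (4πA/P²), threshold value, and perimeter estimate below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def dice_score(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def keep_roundest_region(prob_map, threshold=0.5):
    """Threshold the probability map, then keep the connected component
    with the highest roundness 4*pi*A / P^2 (perimeter P estimated by
    counting boundary pixels removed by a binary erosion). A generic
    reconstruction of the described idea, not the authors' code."""
    binary = prob_map > threshold
    labeled, n = ndimage.label(binary)
    best_lab, best_round = 0, -1.0
    for lab in range(1, n + 1):
        region = labeled == lab
        area = region.sum()
        perim = (region & ~ndimage.binary_erosion(region)).sum()
        roundness = 4 * np.pi * area / max(perim, 1) ** 2
        if roundness > best_round:
            best_lab, best_round = lab, roundness
    return labeled == best_lab

# toy example: a round blob (the "ventricle") vs. a thin elongated blob
prob = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
disk = (yy - 20) ** 2 + (xx - 20) ** 2 <= 64
prob[disk] = 0.9
prob[50:53, 5:60] = 0.9
kept = keep_roundest_region(prob)
```

On the toy map, the round blob wins over the elongated one, mirroring how a roundness criterion suppresses non-ventricle structures in the probability map.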

30 citations

Posted Content
TL;DR: This paper proposes an automated method for segmenting the left ventricle in cardiac MR images by first automatically extracting the region of interest and then employing it as input to a fully convolutional network.
Abstract: Medical image analysis, especially segmenting a specific organ, has an important role in developing clinical decision support systems. In cardiac magnetic resonance (MR) imaging, segmenting the left and right ventricles helps physicians diagnose different heart abnormalities. There are challenges for this task, including the intensity and shape similarity between the left ventricle and other organs, inaccurate boundaries, and the presence of noise in most of the images. In this paper we propose an automated method for segmenting the left ventricle in cardiac MR images. We first automatically extract the region of interest, and then employ it as input to a fully convolutional network. We train the network accurately despite the small number of left ventricle pixels in comparison with the whole image. Thresholding on the output map of the fully convolutional network and selection of regions based on their roundness are performed in our proposed post-processing phase. The Dice score of our method reaches 87.24% when applying this algorithm to the York dataset of heart images.

24 citations

Journal ArticleDOI
TL;DR: In this paper, a convolutional neural network, AngioNet, is proposed for vessel segmentation in X-ray angiography images, which significantly improves segmentation performance on multiple network backbones, with the best performance using Deeplabv3+ (Dice score 0.864, pixel accuracy 0.983, sensitivity 0.918, specificity 0.987).
Abstract: Coronary Artery Disease (CAD) is commonly diagnosed using X-ray angiography, in which images are taken as radio-opaque dye is flushed through the coronary vessels to visualize the severity of vessel narrowing, or stenosis. Cardiologists typically use visual estimation to approximate the percent diameter reduction of the stenosis, and this directs therapies like stent placement. A fully automatic method to segment the vessels would eliminate potential subjectivity and provide a quantitative and systematic measurement of diameter reduction. Here, we have designed a convolutional neural network, AngioNet, for vessel segmentation in X-ray angiography images. The main innovation in this network is the introduction of an Angiographic Processing Network (APN) which significantly improves segmentation performance on multiple network backbones, with the best performance using Deeplabv3+ (Dice score 0.864, pixel accuracy 0.983, sensitivity 0.918, specificity 0.987). The purpose of the APN is to create an end-to-end pipeline for image pre-processing and segmentation, learning the best possible pre-processing filters to improve segmentation. We have also demonstrated the interchangeability of our network in measuring vessel diameter with Quantitative Coronary Angiography. Our results indicate that AngioNet is a powerful tool for automatic angiographic vessel segmentation that could facilitate systematic anatomical assessment of coronary stenosis in the clinical workflow.
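The four metrics reported for AngioNet (Dice, pixel accuracy, sensitivity, specificity) follow directly from the confusion counts of a binary mask. A minimal sketch using their standard definitions, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Dice, pixel accuracy, sensitivity, and specificity for binary
    segmentation masks, computed from confusion counts."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)      # vessel pixels correctly labeled
    tn = np.sum(~pred & ~target)    # background correctly labeled
    fp = np.sum(pred & ~target)     # background mislabeled as vessel
    fn = np.sum(~pred & target)     # vessel pixels missed
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "pixel_accuracy": (tp + tn) / pred.size,
        "sensitivity": tp / (tp + fn),   # recall on vessel pixels
        "specificity": tn / (tn + fp),   # recall on background pixels
    }

# tiny worked example (hypothetical masks)
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1]])
m = segmentation_metrics(pred, target)
```

Here tp=2, fp=1, fn=1, tn=4, so Dice = 4/6, pixel accuracy = 6/8, sensitivity = 2/3, and specificity = 4/5.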

23 citations


Cited by
Journal Article
TL;DR: The volume is keyed to high-resolution electron microscopy, a sophisticated form of structural analysis but really morphology in a modern guise; the physical and mechanical background of the instrument and its ancillary tools is simply and well presented.
Abstract: I read this book the same weekend that the Packers took on the Rams, and the experience of the latter event, obviously, colored my judgment. Although I abhor anything that smacks of being a handbook (like "How to Earn a Merit Badge in Neurosurgery") because too many volumes in biomedical science already evince a boy-scout-like approach, I must confess that parts of this volume are fast, scholarly, and significant, with certain reservations. I like parts of this well-illustrated book because Dr. Sjöstrand, without so stating, develops certain subjects on technique in relation to the acquisition of judgment and sophistication. And this is important! So, given that the author (like all of us) is somewhat deficient in some areas, and biased in others, the book is still valuable if the uninitiated reader swallows it in a general fashion, realizing full well that what will be required from the reader is a modulation to fit his vision, proprioception, adaptation and response, and the kind of problem he is undertaking. A major deficiency of this book is revealed by comparison of its use of physics and of chemistry to provide understanding and background for the application of high-resolution electron microscopy to problems in biology. Since the volume is keyed to high-resolution electron microscopy, which is a sophisticated form of structural analysis, but really morphology in a modern guise, the physical and mechanical background of the instrument and its ancillary tools are simply and well presented. The potential use of chemical or cytochemical information as it relates to biological fine structure, however, is quite deficient. I wonder when even sophisticated morphologists will consider fixation a reaction and not a technique; only then will the fundamentals become self-evident and predictable and this sine qua non will become less mystical.
Staining reactions (the most inadequate chapter) ought to be something more than a technique to selectively enhance contrast of morphological elements; it ought to give the structural addresses of some of the chemical residents of cell components. Is it pertinent that autoradiography gets singled out for more complete coverage than other significant aspects of cytochemistry by a high-resolution microscopist, when it has a built-in minimal error of 1,000 Å in standard practice? I don't mean to blind-side (in strict football terminology) Dr. Sjöstrand's efforts for what is "routinely used in our laboratory"; what is done is usually well done. It's just that …

3,197 citations

Journal ArticleDOI
TL;DR: To provide relevant solutions for improving public health, healthcare providers are required to be fully equipped with appropriate infrastructure to systematically generate and analyze big data.
Abstract: ‘Big data’ is massive amounts of information that can work wonders. It has become a topic of special interest for the past two decades because of the great potential hidden in it. Various public and private sector industries generate, store, and analyze big data with an aim to improve the services they provide. In the healthcare industry, sources of big data include hospital records, medical records of patients, results of medical examinations, and devices that are part of the internet of things. Biomedical research also generates a significant portion of big data relevant to public healthcare. This data requires proper management and analysis in order to derive meaningful information. Otherwise, seeking a solution by analyzing big data quickly becomes comparable to finding a needle in a haystack. There are challenges associated with each step of handling big data, which can only be overcome by using high-end computing solutions for big data analysis. That is why, to provide relevant solutions for improving public health, healthcare providers are required to be fully equipped with appropriate infrastructure to systematically generate and analyze big data. Efficient management, analysis, and interpretation of big data can change the game by opening new avenues for modern healthcare. That is exactly why various industries, including the healthcare industry, are taking vigorous steps to convert this potential into better services and financial advantages. With a strong integration of biomedical and healthcare data, modern healthcare organizations can possibly revolutionize medical therapies and personalized medicine.

615 citations

Journal ArticleDOI
TL;DR: Medical imaging systems: physical principles and image reconstruction algorithms for magnetic resonance tomography, ultrasound, and computed tomography (CT), with applications including image enhancement, image registration, and functional magnetic resonance imaging (fMRI).

536 citations

Journal ArticleDOI
25 Jul 2018-Sensors
TL;DR: This paper reviews important aspects of the WHDs area, listing the state of the art in wearable vital-signs sensing technologies along with their system architectures and specifications, and summarizes the evolution of these devices based on the prototypes developed over the years.
Abstract: Wearable Health Devices (WHDs) are increasingly helping people to better monitor their health status, both at an activity/fitness level for self-health tracking and at a medical level by providing more data to clinicians, with potential for earlier diagnosis and guidance of treatment. The technology revolution in the miniaturization of electronic devices is enabling the design of more reliable and adaptable wearables, contributing to a worldwide change in the health monitoring approach. In this paper we review important aspects of the WHDs area, listing the state of the art in wearable vital-signs sensing technologies along with their system architectures and specifications. A focus on vital signs acquired by WHDs is made: first a discussion of the most important vital signs for health assessment using WHDs is presented, and then for each vital sign a description is given of its origin and effect on health, monitoring needs, acquisition methods, and WHDs, together with recent scientific developments in the area (electrocardiogram, heart rate, blood pressure, respiration rate, blood oxygen saturation, blood glucose, skin perspiration, capnography, body temperature, motion evaluation, cardiac implantable devices, and ambient parameters). A general WHDs system architecture is presented based on the state of the art. After a global review of WHDs, we zoom in on cardiovascular WHDs, analysing commercial devices and their applicability versus quality, and extending this subject to smart t-shirts for medical purposes. Furthermore, we summarize the evolution of these devices based on the prototypes developed over the years. Finally, we discuss likely market trends and future challenges for the emerging WHDs area.

531 citations

Journal ArticleDOI
TL;DR: Experimental results showed that the proposed Correlation Matrix kNN (CM-kNN) classification was more accurate and efficient than existing kNN methods in data-mining applications, such as classification, regression, and missing data imputation.
Abstract: The k-Nearest Neighbor (kNN) method has been widely used in data mining and machine learning applications due to its simple implementation and distinguished performance. However, assigning the same k value to all test data, as previous kNN methods do, has been shown to make these methods impractical in real applications. This article proposes to learn a correlation matrix that reconstructs test data points from training data in order to assign different k values to different test data points, referred to as Correlation Matrix kNN (CM-kNN for short) classification. Specifically, the least-squares loss function is employed to minimize the error of reconstructing each test data point from all training data points. A graph Laplacian regularizer is then used to preserve the local structure of the data in the reconstruction process. Moreover, an ℓ1-norm regularizer and an ℓ2,1-norm regularizer are applied, respectively, to learn different k values for different test data and to induce sparsity that removes redundant/noisy features from the reconstruction process. Beyond classification, the kNN methods (including our proposed CM-kNN method) are further applied to regression and missing-data imputation. We conducted sets of experiments to illustrate the method's efficiency, and results showed that it was more accurate and efficient than existing kNN methods in data-mining applications such as classification, regression, and missing-data imputation.
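The core idea (reconstruct each test point as a weighted combination of training points, so the number of significantly weighted neighbors plays the role of a per-point k) can be sketched in closed form. This is a deliberate simplification: the paper's ℓ1/ℓ2,1 regularizers and graph Laplacian term need an iterative solver, so an ℓ2 (ridge) penalty stands in here, and all names and parameter values are illustrative:

```python
import numpy as np

def reconstruction_weights(X_train, x_test, lam=0.1):
    """Reconstruct a test point from training points by solving
        min_w ||x_test - X_train^T w||^2 + lam * ||w||^2.
    Simplified stand-in for CM-kNN's objective: the ridge penalty
    replaces the paper's l1/l2,1 regularizers and Laplacian term
    so the solution stays closed-form."""
    n = X_train.shape[0]
    G = X_train @ X_train.T + lam * np.eye(n)   # regularized Gram matrix
    b = X_train @ x_test
    return np.linalg.solve(G, b)

# each test point effectively gets its own k: the number of training
# points carrying non-negligible reconstruction weight
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))              # 20 training points in R^5
x_test = X_train[3] + 0.01 * rng.normal(size=5) # near training point 3
w = reconstruction_weights(X_train, x_test)
k_effective = int(np.sum(np.abs(w) > 0.05))
```

With a sparsity-inducing penalty in place of the ridge term, most entries of `w` would be driven exactly to zero, which is how the full method adapts k per test point.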

377 citations