scispace - formally typeset
Author

Masoud Nosrati

Other affiliations: Islamic Azad University, Simon Fraser University, Huawei
Bio: Masoud Nosrati is an academic researcher from Iowa State University. The author has contributed to research in the topics of image segmentation and steganography, has an h-index of 17, and has co-authored 84 publications receiving 882 citations. Previous affiliations of Masoud Nosrati include Islamic Azad University and Simon Fraser University.


Papers
Journal ArticleDOI
TL;DR: The first Overlapping Cervical Cytology Image Segmentation Challenge as discussed by the authors was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images.
Abstract: In this paper, we introduce and evaluate the systems submitted to the first Overlapping Cervical Cytology Image Segmentation Challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging 2014. This challenge was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images, which is a prerequisite for the development of the next generation of computer-aided diagnosis systems for cervical cancer. In particular, these automated systems must detect and accurately segment both the nucleus and cytoplasm of each cell, even when they are clumped together and, hence, partially occluded. However, this is an unsolved problem due to the poor contrast of cytoplasm boundaries, the large variation in the size and shape of cells, the presence of debris, and the large degree of cellular overlap. The challenge initially utilized a database of 16 high-resolution (×40 magnification) images of complex cellular fields of view, in which the isolated real cells were used to construct a database of 945 synthesized cervical cytology images with a varying number of cells and degree of overlap, in order to provide full access to the segmentation ground truth. These synthetic images provided a reliable and comprehensive framework for quantitative evaluation of this segmentation problem. Results from the submitted methods demonstrate that all the methods are effective in the segmentation of clumps containing at most three cells with overlap coefficients up to 0.3. This highlights the intrinsic difficulty of this challenge and provides motivation for significant future improvement.

117 citations

Posted Content
TL;DR: This survey focuses on optimization-based methods that incorporate prior information into their frameworks and reviews and compares these methods in terms of the types of prior employed, the domain of formulation, and the optimization techniques.
Abstract: Medical image segmentation, the task of partitioning an image into meaningful parts, is an important step toward automating medical image analysis and is at the crux of a variety of medical imaging applications, such as computer aided diagnosis, therapy planning and delivery, and computer aided interventions. However, the existence of noise, low contrast and objects' complexity in medical images are critical obstacles that stand in the way of achieving an ideal segmentation system. Incorporating prior knowledge into image segmentation algorithms has proven useful for obtaining more accurate and plausible results. This paper surveys the different types of prior knowledge that have been utilized in different segmentation frameworks. We focus our survey on optimization-based methods that incorporate prior information into their frameworks. We review and compare these methods in terms of the types of prior employed, the domain of formulation (continuous vs. discrete), and the optimization techniques (global vs. local). We also created an interactive online database of existing works and categorized them based on the type of prior knowledge they use. Our website is interactive so that researchers can contribute to keep the database up to date. We conclude the survey by discussing different aspects of designing an energy functional for image segmentation, open problems, and future perspectives.
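As a toy illustration of the energy-functional view discussed in this survey (a hedged sketch, not a method from the paper; the two-label model, means, and brute-force solver are illustrative assumptions), a 1-D segmentation energy combining a data term with a smoothness prior can be written and minimised directly:

```python
from itertools import product

def segment_1d(signal, mu_bg=0.0, mu_fg=1.0, lam=0.5):
    """Brute-force minimiser of a two-label 1-D segmentation energy:
    E(l) = sum_i (signal[i] - mu_{l_i})^2 + lam * #{i : l_i != l_{i+1}},
    i.e. a data-fidelity term plus a boundary-length (smoothness) prior."""
    mus = (mu_bg, mu_fg)
    best, best_e = None, float("inf")
    for labels in product((0, 1), repeat=len(signal)):
        data = sum((s - mus[l]) ** 2 for s, l in zip(signal, labels))
        prior = lam * sum(a != b for a, b in zip(labels, labels[1:]))
        if data + prior < best_e:
            best, best_e = labels, data + prior
    return list(best)
```

Methods covered by the survey minimise such energies with graph cuts or continuous relaxations rather than enumeration; brute force is only viable here because the toy signal is short.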

66 citations

Journal Article
TL;DR: The features, basic concepts, algorithms, and approaches of each type of star algorithm are investigated separately in this paper.
Abstract: In this study, a branch of search algorithms called * (star) algorithms is examined. Star algorithms have different types and derivatives: A*, B*, D* (including the original D*, Focused D*, and D* Lite), IDA*, and SMA*. The features, basic concepts, algorithm, and approaches of each type are investigated separately in this paper.
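The family surveyed above can be illustrated with a minimal A*; this is a generic sketch (the 4-connected grid encoding and unit step cost are assumptions for illustration, not code from the paper):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest 4-connected path on a 0/1 grid (1 = obstacle).
    Returns the path as a list of (row, col) tuples, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for unit-cost 4-connectivity
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    came_from = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node in closed:               # stale heap entry, skip
            continue
        closed.add(node)
        if node == goal:                 # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nb not in closed:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None
```

The derivatives the paper covers change the details around this core: IDA* trades the open list for iterative deepening, and the D* variants repair the solution incrementally when edge costs change.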

54 citations

Proceedings ArticleDOI
16 Apr 2015
TL;DR: A new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images is proposed and it is shown that the star- shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.
Abstract: Accurate and automatic detection and delineation of cervical cells are two critical precursor steps to automatic Pap smear image analysis and detecting pre-cancerous changes in the uterine cervix. To overcome noise and cell occlusion, many segmentation methods resort to incorporating shape priors, mostly enforcing elliptical shapes (e.g. [1]). However, elliptical shapes do not accurately model cervical cells. In this paper, we propose a new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images. We show that our star-shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.
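The star-shape property the paper enforces can be stated discretely: a region is star-shaped about a centre c if, for every point p in the region, the whole segment from c to p stays inside it. A hedged sketch of that check on a binary mask (an illustration of the property only, not the paper's continuous variational formulation with directional derivatives):

```python
def is_star_shaped(mask, center, samples=64):
    """Discrete star-shape check: `mask` (2-D list of 0/1) is star-shaped
    about `center` if, for every foreground pixel p, all sampled points on
    the segment center -> p are also foreground."""
    cy, cx = center
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if not v:
                continue
            for k in range(samples + 1):
                t = k / samples
                py = round(cy + t * (y - cy))
                px = round(cx + t * (x - cx))
                if not mask[py][px]:
                    return False     # segment leaves the region
    return True
```

For example, a filled disk is star-shaped about its centre, while a ring is not, because segments from the centre to the ring cross the hole.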

54 citations

Journal ArticleDOI
TL;DR: This paper proposes a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed and estimates and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest.
Abstract: In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects’ geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models’ non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness.

47 citations


Cited by
Journal ArticleDOI
TL;DR: This work created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery, and contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California.
Abstract: Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, for which there is limited public data on deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. A machine-accessible metadata file describing the reported data is available (ISA-Tab format).

633 citations

Journal ArticleDOI
TL;DR: In this article, a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularization model, which is trained end-to-end, encourages models to follow the global anatomical properties of the underlying anatomy via learnt non-linear representations of the shape.
Abstract: Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.

529 citations

Journal ArticleDOI
TL;DR: This work proposes a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end and demonstrates how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
Abstract: Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning based techniques. However, in most recent and promising techniques such as CNN based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learned non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learned deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
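The loss composition behind such prior regularisation can be sketched at toy scale (a hedged illustration only: the flattened 1-D "masks", the squared-error stand-ins, and the `encode` callable are assumptions for exposition, not the paper's CNN or learnt shape model):

```python
def _mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def prior_regularised_loss(pred, target, encode, lam=0.1):
    """Pixel-wise squared error (a stand-in for the segmentation loss) plus
    a penalty pulling the prediction's shape code toward the target's,
    weighted by lam."""
    return _mse(pred, target) + lam * _mse(encode(pred), encode(target))
```

In the paper's setting `encode` would be a learnt non-linear shape representation trained end-to-end with the segmentation network; here any callable mapping a mask to a code vector serves to show how the two terms combine.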

482 citations

Journal ArticleDOI
TL;DR: This review categorizes the leading deep learning-based medical and non-medical image segmentation solutions into six main groups of deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods.
Abstract: The semantic image segmentation task consists of classifying each pixel of an image into an instance, where each instance corresponds to a class. This task is a part of the concept of scene understanding or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep learning-based medical and non-medical image segmentation solutions into six main groups of deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods and provide a comprehensive review of the contributions in each of these groups. Further, for each group, we analyze each variant of these groups and discuss the limitations of the current approaches and present potential future research directions for semantic image segmentation.

398 citations

Journal ArticleDOI
TL;DR: This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables.

316 citations