
Showing papers on "Image processing published in 2014"


Journal ArticleDOI
19 Jun 2014-PeerJ
TL;DR: The advantages of open source to achieve the goals of the scikit-image library are highlighted, and several real-world image processing applications that use scikit-image are showcased.
Abstract: scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
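As a quick illustration of the kind of API the library exposes, here is a minimal sketch chaining a few well-known scikit-image calls; the sample image and parameters are illustrative rather than taken from the paper.

# Minimal scikit-image sketch: edge filtering, Otsu thresholding, and
# connected-component labeling on a bundled sample image.
from skimage import data, filters, measure

image = data.coins()                              # built-in grayscale sample
edges = filters.sobel(image)                      # Sobel gradient magnitude
binary = image > filters.threshold_otsu(image)    # global Otsu threshold
labels = measure.label(binary)                    # label connected components
print(labels.max(), "connected components found")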

3,903 citations


Journal ArticleDOI
TL;DR: It is found that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality.
Abstract: It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
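A minimal NumPy/SciPy sketch of the GMS/GMSD computation described above is given below; the Prewitt kernels and deviation pooling follow the abstract's description, while the constant c and the omission of the paper's downsampling step are simplifications for illustration.

# GMSD sketch: gradient magnitude similarity map followed by deviation pooling.
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=170.0):
    # Prewitt kernels for horizontal/vertical gradients.
    hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float) / 3.0
    hy = hx.T

    def grad_mag(img):
        gx = convolve(img.astype(float), hx, mode='nearest')
        gy = convolve(img.astype(float), hy, mode='nearest')
        return np.hypot(gx, gy)

    mr, md = grad_mag(ref), grad_mag(dist)
    gms = (2.0 * mr * md + c) / (mr ** 2 + md ** 2 + c)  # local quality map
    return gms.std()                                     # standard-deviation pooling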

1,211 citations


Journal ArticleDOI
TL;DR: This article presents an overview of existing map processing techniques, bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.
Abstract: Maps depict natural and human-induced changes on earth at a fine resolution for large areas and over long periods of time. In addition, maps—especially historical maps—are often the only information source about the earth as surveyed using geodetic techniques. In order to preserve these unique documents, increasing numbers of digital map archives have been established, driven by advances in software and hardware technologies. Since the early 1980s, researchers from a variety of disciplines, including computer science and geography, have been working on computational methods for the extraction and recognition of geographic features from archived images of maps (digital map processing). The typical result from map processing is geographic information that can be used in spatial and spatiotemporal analyses in a Geographic Information System environment, which benefits numerous research fields in the spatial, social, environmental, and health sciences. However, map processing literature is spread across a broad range of disciplines in which maps are included as a special type of image. This article presents an overview of existing map processing techniques, with the goal of bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.

674 citations


Journal ArticleDOI
01 Aug 2014
TL;DR: The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy, and common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases are discussed.
Abstract: Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, to facial image analysis, text image analysis, sign and number plates reading, and biometrics recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The current comprehensive survey provides an overview of most of these published works by grouping them in a broad taxonomy. For each of the groups in the taxonomy, the basic concepts of the algorithms are first explained and then the paths through which each of these groups have evolved are given in detail, by mentioning the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases are discussed.

602 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the challenging problem of hyperspectral image classification, which has recently gained in popularity and attracted the interest of other scientific disciplines such as machine learning, image processing, and computer vision.
Abstract: The technological evolution of optical sensors over the last few decades has provided remote sensing analysts with rich spatial, spectral, and temporal information. In particular, the increase in spectral resolution of hyperspectral images (HSIs) and infrared sounders opens the doors to new application domains and poses new methodological challenges in data analysis. HSIs allow the characterization of objects of interest (e.g., land-cover classes) with unprecedented accuracy, and keep inventories up to date. Improvements in spectral resolution have called for advances in signal processing and exploitation algorithms. This article focuses on the challenging problem of hyperspectral image classification, which has recently gained in popularity and attracted the interest of other scientific disciplines such as machine learning, image processing, and computer vision. In the remote sensing community, the term classification is used to denote the process that assigns single pixels to a set of classes, while the term segmentation is used for methods that aggregate pixels into objects, which are then assigned to a class.

599 citations


Proceedings ArticleDOI
07 Sep 2014
TL;DR: The feasibility of the design is explored through an analytical model, its viability is demonstrated through a prototype system, the challenges to a practical deployment (including usability and scalability) are discussed, and decimeter-level accuracy is demonstrated in both carefully controlled and more realistic human mobility scenarios.
Abstract: We explore the indoor positioning problem with unmodified smartphones and slightly-modified commercial LED luminaires. The luminaires-modified to allow rapid, on-off keying-transmit their identifiers and/or locations encoded in human-imperceptible optical pulses. A camera-equipped smartphone, using just a single image frame capture, can detect the presence of the luminaires in the image, decode their transmitted identifiers and/or locations, and determine the smartphone's location and orientation relative to the luminaires. Continuous image capture and processing enables continuous position updates. The key insights underlying this work are (i) the driver circuits of emerging LED lighting systems can be easily modified to transmit data through on-off keying; (ii) the rolling shutter effect of CMOS imagers can be leveraged to receive many bits of data encoded in the optical transmissions with just a single frame capture, (iii) a camera is intrinsically an angle-of-arrival sensor, so the projection of multiple nearby light sources with known positions onto a camera's image plane can be framed as an instance of a sufficiently-constrained angle-of-arrival localization problem, and (iv) this problem can be solved with optimization techniques. We explore the feasibility of the design through an analytical model, demonstrate the viability of the design through a prototype system, discuss the challenges to a practical deployment including usability and scalability, and demonstrate decimeter-level accuracy in both carefully controlled and more realistic human mobility scenarios.

577 citations


Book ChapterDOI
06 Sep 2014
TL;DR: A new framework to filter images with complete control of detail smoothing under a scale measure is proposed; it is based on a rolling guidance implemented in an iterative manner that converges quickly, achieves real-time performance, and produces artifact-free results.
Abstract: Images contain many levels of important structures and edges. Compared to masses of research to make filters edge preserving, finding scale-aware local operations was seldom addressed in a practical way, albeit similarly vital in image processing and computer vision. We propose a new framework to filter images with the complete control of detail smoothing under a scale measure. It is based on a rolling guidance implemented in an iterative manner that converges quickly. Our method is simple in implementation, easy to understand, fully extensible to accommodate various data operations, and fast to produce results. Our implementation achieves realtime performance and produces artifact-free results in separating different scale structures. This filter also introduces several inspiring properties different from previous edge-preserving ones.
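The iterative structure can be sketched as follows; this assumes OpenCV's contrib module (opencv-contrib-python) for the joint bilateral filter, and the sigma values and iteration count are illustrative rather than the authors' settings.

# Rolling-guidance sketch: small structures are removed by a Gaussian, then
# large-scale edges are iteratively recovered with a joint bilateral filter
# guided by the previous iteration's output.
import cv2
import numpy as np

def rolling_guidance(img, sigma_s=5.0, sigma_r=25.0, iters=4):
    img = img.astype(np.float32)
    guide = cv2.GaussianBlur(img, (0, 0), sigma_s)      # scale-aware smoothing
    for _ in range(iters):
        guide = cv2.ximgproc.jointBilateralFilter(guide, img, -1, sigma_r, sigma_s)
    return guide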

532 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on multiclass segmentation and detailed descriptions as to why a specific method may fail together with strategies for preventing the failure by applying suitable image enhancement prior to segmentation.
Abstract: Easier access to X-ray microtomography (μCT) facilities has provided much new insight from high-resolution imaging for various problems in porous media research. Pore space analysis with respect to functional properties usually requires segmentation of the intensity data into different classes. Image segmentation is a nontrivial problem that may have a profound impact on all subsequent image analyses. This review deals with two issues that are neglected in most of the recent studies on image segmentation: (i) focus on multiclass segmentation and (ii) detailed descriptions as to why a specific method may fail together with strategies for preventing the failure by applying suitable image enhancement prior to segmentation. In this way, the presented algorithms become very robust and are less prone to operator bias. Three different test images are examined: a synthetic image with ground-truth information, a synchrotron image of precision beads with three different fluids residing in the pore space, and a μCT image of a soil sample containing macropores, rocks, organic matter, and the soil matrix. Image blur is identified as the major cause for poor segmentation results. Other impairments of the raw data like noise, ring artifacts, and intensity variation can be removed with current image enhancement methods. Bayesian Markov random field segmentation, watershed segmentation, and converging active contours are well suited for multiclass segmentation, yet with varying success in correcting for partial volume effects while simultaneously conserving small image features.
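As a small illustration of one of the method families discussed (marker-based watershed segmentation), the sketch below uses scikit-image; the quantile-based markers and two-phase setup are illustrative and not the review's actual workflow.

# Marker-based watershed sketch for a simple two-phase (pore/solid) image.
import numpy as np
from skimage import filters, segmentation

def watershed_two_phase(gray):
    lo, hi = np.quantile(gray, [0.2, 0.8])
    markers = np.zeros(gray.shape, dtype=int)
    markers[gray < lo] = 1                    # confident pore-phase seeds
    markers[gray > hi] = 2                    # confident solid-phase seeds
    gradient = filters.sobel(gray)            # flood the gradient image
    return segmentation.watershed(gradient, markers)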

475 citations


Journal ArticleDOI
TL;DR: The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance which is insensitive to imaging conditions in general and scanner variations in particular.
Abstract: Histopathology diagnosis is based on visual examination of the morphology of histological sections under a microscope. With the increasing popularity of digital slide scanners, decision support systems based on the analysis of digital pathology images are in high demand. However, computerized decision support systems are fraught with problems that stem from color variations in tissue appearance due to variation in tissue preparation, variation in stain reactivity from different manufacturers/batches, user or protocol variation, and the use of scanners from different manufacturers. In this paper, we present a novel approach to stain normalization in histopathology images. The method is based on nonlinear mapping of a source image to a target image using a representation derived from color deconvolution. Color deconvolution is a method to obtain stain concentration values when the stain matrix, describing how the color is affected by the stain concentration, is given. Rather than relying on standard stain matrices, which may be inappropriate for a given image, we propose the use of a color-based classifier that incorporates a novel stain color descriptor to calculate image-specific stain matrix. In order to demonstrate the efficacy of the proposed stain matrix estimation and stain normalization methods, they are applied to the problem of tumor segmentation in breast histopathology images. The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance which is insensitive to imaging conditions in general and scanner variations in particular.
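The color deconvolution step referred to above can be sketched with the Beer-Lambert optical density model; the H&E stain vectors shown are commonly quoted defaults, whereas the paper estimates an image-specific stain matrix with a color-based classifier.

# Color deconvolution sketch: convert RGB to optical density and unmix stains.
import numpy as np

# Commonly quoted default H&E stain vectors (rows), used here for illustration only.
HE_MATRIX = np.array([[0.644, 0.717, 0.267],   # hematoxylin
                      [0.093, 0.954, 0.283]])  # eosin

def color_deconvolution(rgb, stain_matrix=HE_MATRIX):
    h, w, _ = rgb.shape
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / 256.0)  # optical density
    # Solve stain_matrix.T @ c = od for the per-pixel stain concentrations c.
    conc, *_ = np.linalg.lstsq(stain_matrix.T, od.T, rcond=None)
    return conc.T.reshape(h, w, -1)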

458 citations


Posted Content
TL;DR: In this article, a self-contained view of sparse modeling for visual recognition and image processing is presented, where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection - that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
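A small dictionary-learning example in the spirit of the monograph, written with scikit-learn rather than the authors' own code; the patch size, atom count, and sparsity level are illustrative.

# Learn an overcomplete patch dictionary and compute sparse codes with OMP.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(image, n_atoms=64, patch_size=(8, 8)):
    patches = extract_patches_2d(image, patch_size, max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)            # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(X).transform(X)              # sparse codes of the patches
    return dico.components_, codes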

421 citations


Journal ArticleDOI
TL;DR: A new taxonomy based on image representations is introduced for a better understanding of state-of-the-art image denoising techniques and methods based on overcomplete representations using learned dictionaries perform better than others.
Abstract: Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.
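As a quick illustration of the local-versus-nonlocal distinction drawn above, the snippet below compares a Gaussian filter with non-local means using scikit-image; the noise level and filter parameters are illustrative.

# Local (Gaussian) vs nonlocal (NL-means) denoising on a noisy sample image.
from skimage import data, filters, restoration, util

noisy = util.random_noise(data.camera(), var=0.01)          # additive Gaussian noise
local = filters.gaussian(noisy, sigma=1.0)                  # local smoothing
nonlocal_means = restoration.denoise_nl_means(noisy, h=0.08,
                                               patch_size=5, patch_distance=6)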

Proceedings ArticleDOI
08 Feb 2014
TL;DR: Various types of features and feature extraction techniques are discussed, explaining in which scenarios each feature extraction technique works better, with reference to the character recognition application.
Abstract: Features play a very important role in the area of image processing. Before extracting features, various image preprocessing techniques such as binarization, thresholding, resizing, and normalization are applied to the sampled image. After that, feature extraction techniques are applied to obtain features that are useful in classifying and recognizing images. Feature extraction techniques are helpful in various image processing applications, e.g., character recognition. As features define the behavior of an image, they determine the storage required, the efficiency of classification, and the time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique works better. We also consider these features and feature extraction methods in the context of the character recognition application.
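The preprocessing steps named above (binarization, resizing, normalization) can be sketched as follows with scikit-image; the output size and normalization scheme are illustrative choices, not the paper's.

# Typical character-image preprocessing before feature extraction.
import numpy as np
from skimage import filters, transform

def preprocess_character(gray, size=(32, 32)):
    binary = gray > filters.threshold_otsu(gray)                      # binarization
    resized = transform.resize(binary.astype(float), size,
                               anti_aliasing=True)                    # resizing
    normalized = (resized - resized.mean()) / (resized.std() + 1e-8)  # normalization
    return normalized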

Journal ArticleDOI
TL;DR: A pupil function determination algorithm, termed embedded pupil function recovery (EPRY), is developed and tested; it can be incorporated into the Fourier ptychographic microscopy (FPM) algorithm to recover both the Fourier spectrum of the sample and the pupil function of the imaging system simultaneously.
Abstract: We develop and test a pupil function determination algorithm, termed embedded pupil function recovery (EPRY), which can be incorporated into the Fourier ptychographic microscopy (FPM) algorithm and recover both the Fourier spectrum of the sample and the pupil function of the imaging system simultaneously. This EPRY-FPM algorithm eliminates the requirement of the previous FPM algorithm for a priori knowledge of the aberration in the imaging system to reconstruct a high quality image. We experimentally demonstrate the effectiveness of this algorithm by reconstructing high resolution, large field-of-view images of biological samples. We also illustrate that the pupil function we retrieve can be used to study the spatially varying aberration of a large field-of-view imaging system. We believe that this algorithm adds more flexibility to FPM and can be a powerful tool for the characterization of an imaging system's aberration.

Journal ArticleDOI
TL;DR: A freely available open-source software platform, PLUS (Public software Library for Ultrasound), is presented to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research; it aspires to become a widely used translational research prototyping platform.
Abstract: A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under BSD license and can be downloaded from http://www.plustoolkit.org.

Proceedings ArticleDOI
29 Sep 2014
TL;DR: This work proposes a novel approach to depth map computation that combines Bayesian estimation and recent development on convex optimization for image processing, and demonstrates that this method outperforms state-of-the-art techniques in terms of accuracy.
Abstract: In this paper, we solve the problem of estimating dense and accurate depth maps from a single moving camera. A probabilistic depth measurement is carried out in real time on a per-pixel basis and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress. Our contribution is a novel approach to depth map computation that combines Bayesian estimation and recent development on convex optimization for image processing. We demonstrate that our method outperforms state-of-the-art techniques in terms of accuracy, while exhibiting high efficiency in memory usage and computing power. We call our approach REMODE (REgularized MOnocular Depth Estimation) and the CUDA-based implementation runs at 30Hz on a laptop computer.

Journal ArticleDOI
19 Nov 2014
TL;DR: This work proposes an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation.
Abstract: Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

Book
19 Dec 2014
TL;DR: The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing, focusing on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection - that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.

Journal ArticleDOI
TL;DR: This paper presents an efficient technique for performing a spatially inhomogeneous edge-preserving image smoothing, called fast global smoother, focusing on sparse Laplacian matrices consisting of a data term and a prior term that approximate the solution of the memory- and computation-intensive large linear system by solving a sequence of 1D subsystems.
Abstract: This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results comparable to those of state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
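One horizontal 1D pass of the separable weighted-least-squares idea described above can be sketched as follows; this is not the authors' implementation, and the lambda and sigma values are illustrative.

# One row-wise pass: solve a tridiagonal (three-point Laplacian) system per row.
import numpy as np
from scipy.linalg import solve_banded

def smooth_rows(image, guide, lam=30.0, sigma=0.03):
    out = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        f, g = image[r].astype(float), guide[r].astype(float)
        w = np.exp(-np.abs(np.diff(g)) / sigma)   # edge-aware weights between neighbors
        n = f.size
        ab = np.zeros((3, n))                     # banded storage for solve_banded
        ab[0, 1:] = -lam * w                      # superdiagonal
        ab[2, :-1] = -lam * w                     # subdiagonal
        ab[1, :] = 1.0                            # data term on the diagonal
        ab[1, :-1] += lam * w                     # prior term contributions
        ab[1, 1:] += lam * w
        out[r] = solve_banded((1, 1), ab, f)      # linear-time tridiagonal solve
    return out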

Journal ArticleDOI
TL;DR: A novel and flexible framework for constrained image reconstruction that uses low-rank matrix modeling of local k-space neighborhoods (LORAKS) and enables calibrationless use of phase constraints, while calibration-based support and phase constraints are commonly used in existing methods.
Abstract: Recent theoretical results on low-rank matrix reconstruction have inspired significant interest in low-rank modeling of MRI images. Existing approaches have focused on higher-dimensional scenarios with data available from multiple channels, timepoints, or image contrasts. The present work demonstrates that single-channel, single-contrast, single-timepoint k-space data can also be mapped to low-rank matrices when the image has limited spatial support or slowly varying phase. Based on this, we develop a novel and flexible framework for constrained image reconstruction that uses low-rank matrix modeling of local k-space neighborhoods (LORAKS). A new regularization penalty and corresponding algorithm for promoting low-rank are also introduced. The potential of LORAKS is demonstrated with simulated and experimental data for a range of denoising and sparse-sampling applications. LORAKS is also compared against state-of-the-art methods like homodyne reconstruction, l1-norm minimization, and total variation minimization, and is demonstrated to have distinct features and advantages. In addition, while calibration-based support and phase constraints are commonly used in existing methods, the LORAKS framework enables calibrationless use of these constraints.

Journal ArticleDOI
TL;DR: This work proposes simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria for the normalization of images.

Journal ArticleDOI
TL;DR: Qualitative and quantitative evaluations on a large set of abdominal and mediastinum CT images are carried out and the results show that the proposed ASDL method can be efficiently applied in most current CT systems.
Abstract: Low-dose computed tomography (LDCT) images are often severely degraded by amplified mottle noise and streak artifacts. These artifacts are often hard to suppress without introducing tissue blurring effects. In this paper, we propose to process LDCT images using a novel image-domain algorithm called "artifact suppressed dictionary learning (ASDL)". In this ASDL method, orientation and scale information on artifacts is exploited to train artifact atoms, which are then combined with tissue feature atoms to build three discriminative dictionaries. The streak artifacts are cancelled via a discriminative sparse representation operation based on these dictionaries. Then, a general dictionary learning processing is applied to further reduce the noise and residual artifacts. Qualitative and quantitative evaluations on a large set of abdominal and mediastinum CT images are carried out and the results show that the proposed method can be efficiently applied in most current CT systems.

Journal ArticleDOI
TL;DR: The simultaneous orthogonal matching pursuit technique is used to solve the nonlocal weighted joint sparsity model (NLW-JSM) and the proposed classification algorithm performs better than the other sparsity-based algorithms and the classical support vector machine hyperspectral classifier.
Abstract: As a powerful and promising statistical signal modeling technique, sparse representation has been widely used in various image processing and analysis fields. For hyperspectral image classification, previous studies have shown the effectiveness of the sparsity-based classification methods. In this paper, we propose a nonlocal weighted joint sparse representation classification (NLW-JSRC) method to improve the hyperspectral image classification result. In the joint sparsity model (JSM), different weights are utilized for different neighboring pixels around the central test pixel. The weight of one specific neighboring pixel is determined by the structural similarity between the neighboring pixel and the central test pixel, which is referred to as a nonlocal weighting scheme. In this paper, the simultaneous orthogonal matching pursuit technique is used to solve the nonlocal weighted joint sparsity model (NLW-JSM). The proposed classification algorithm was tested on three hyperspectral images. The experimental results suggest that the proposed algorithm performs better than the other sparsity-based algorithms and the classical support vector machine hyperspectral classifier.
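The simultaneous orthogonal matching pursuit (SOMP) step mentioned above can be sketched in a few lines of NumPy; the dictionary layout and sparsity level are illustrative.

# SOMP sketch: neighboring pixel spectra share one support of k dictionary atoms.
import numpy as np

def somp(D, Y, k):
    # D: (n_bands, n_atoms) dictionary; Y: (n_bands, n_neighbors) pixel spectra.
    residual = Y.astype(float).copy()
    support, coefs = [], None
    for _ in range(k):
        corr = np.abs(D.T @ residual).sum(axis=1)      # aggregate correlation per atom
        support.append(int(np.argmax(corr)))
        A = D[:, support]
        coefs, *_ = np.linalg.lstsq(A, Y, rcond=None)  # joint least-squares fit
        residual = Y - A @ coefs
    return support, coefs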

Journal ArticleDOI
TL;DR: This protocol describes how to use several popular features of Vaa3D, including multidimensional image visualization, 3D image object generation and quantitative measurement, 3D image comparison, fusion and management, visualization of heterogeneous images and respective surface objects, and extension of Vaa3D functions using its plug-in interface.
Abstract: Open-Source 3D Visualization-Assisted Analysis (Vaa3D) is a software platform for the visualization and analysis of large-scale multidimensional images. In this protocol we describe how to use several popular features of Vaa3D, including (i) multidimensional image visualization, (ii) 3D image object generation and quantitative measurement, (iii) 3D image comparison, fusion and management, (iv) visualization of heterogeneous images and respective surface objects and (v) extension of Vaa3D functions using its plug-in interface. We also briefly demonstrate how to integrate these functions for complicated applications of microscopic image visualization and quantitative analysis using three exemplar pipelines, including an automated pipeline for image filtering, segmentation and surface generation; an automated pipeline for 3D image stitching; and an automated pipeline for neuron morphology reconstruction, quantification and comparison. Once a user is familiar with Vaa3D, visualization usually runs in real time and analysis takes less than a few minutes for a simple data set.

Journal ArticleDOI
TL;DR: It is concluded that the embedding and extraction procedures of the proposed algorithm are well optimized, robust, and show an improvement over other similar reported methods.
Abstract: This paper presents an optimized watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The singular values of a binary watermark are embedded in singular values of the LL3 sub-band coefficients of the host image by making use of multiple scaling factors (MSFs). The MSFs are optimized using a newly proposed Firefly Algorithm having an objective function which is a linear combination of imperceptibility and robustness. The PSNR values indicate that the visual quality of the signed and attacked images is good. The embedding algorithm is robust against common image processing operations. It is concluded that the embedding and extraction of the proposed algorithm is well optimized, robust and show an improvement over other similar reported methods.
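A simplified sketch of the embedding step is shown below using PyWavelets and NumPy; it uses a single scaling factor instead of the optimized multiple scaling factors, and the wavelet and alpha value are illustrative.

# DWT-SVD embedding sketch: add watermark singular values to the LL3 singular values.
# Assumes the host image dimensions are divisible by 8 (three dyadic levels).
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05, wavelet='haar'):
    coeffs = pywt.wavedec2(host.astype(float), wavelet, level=3)
    ll3 = coeffs[0]                                        # LL3 sub-band
    U, s, Vt = np.linalg.svd(ll3, full_matrices=False)
    sw = np.linalg.svd(watermark.astype(float), compute_uv=False)
    k = min(s.size, sw.size)
    s_marked = s.copy()
    s_marked[:k] += alpha * sw[:k]                         # embed watermark singular values
    coeffs[0] = (U * s_marked) @ Vt                        # rebuild LL3
    return pywt.waverec2(coeffs, wavelet)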

Journal ArticleDOI
TL;DR: The latest segmentation methods applied in medical image analysis are described; the advantages and disadvantages of each method are given, along with an examination of each algorithm and its application in Magnetic Resonance Imaging and Computed Tomography image analysis.
Abstract: Medical images have made a great impact on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. In this paper, we describe the latest segmentation methods applied in medical image analysis. The advantages and disadvantages of each method are described, along with an examination of each algorithm and its application in Magnetic Resonance Imaging and Computed Tomography image analysis. Each algorithm is explained separately, with its capabilities and features for the analysis of grey-level images. In order to evaluate the segmentation results, some popular benchmark measurements are presented in the final section.

Journal ArticleDOI
TL;DR: By compressing the size of the dictionary in the time domain, this work is able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
Abstract: Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
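The SVD-based compression and matching can be sketched as follows in NumPy; the rank and array shapes are illustrative.

# Compress the MR fingerprinting dictionary in time, then match in the subspace.
import numpy as np

def compress_and_match(dictionary, signals, rank=25):
    # dictionary: (n_timepoints, n_atoms); signals: (n_timepoints, n_voxels).
    U, _, _ = np.linalg.svd(dictionary, full_matrices=False)
    Ur = U[:, :rank]                                   # low-rank temporal subspace
    d_small = Ur.T @ dictionary                        # compressed dictionary
    s_small = Ur.T @ signals                           # project measurements once
    d_norm = d_small / np.linalg.norm(d_small, axis=0, keepdims=True)
    matches = np.argmax(np.abs(d_norm.T @ s_small), axis=0)
    return matches                                     # best-matching atom per voxel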

Journal ArticleDOI
TL;DR: A Bayesian-based derivation of multiview deconvolution is presented that drastically improves the convergence time, and a fast implementation using graphics hardware is provided.
Abstract: Light-sheet fluorescence microscopy is able to image large specimens with high resolution by capturing the samples from multiple angles. Multiview deconvolution can substantially improve the resolution and contrast of the images, but its application has been limited owing to the large size of the data sets. Here we present a Bayesian-based derivation of multiview deconvolution that drastically improves the convergence time, and we provide a fast implementation using graphics hardware.

Patent
12 Mar 2014
TL;DR: In this article, the authors describe a system for generating restricted depth of field depth maps from a reference viewpoint using a set of images captured from different viewpoints, where depth estimation precision is higher for pixels with depth estimates within the range of distances corresponding to the restricted depth of field and lower for pixels with depth estimates outside of that range.
Abstract: Systems and methods are described for generating restricted depth of field depth maps. In one embodiment, an image processing pipeline application configures a processor to: determine a desired focal plane distance and a range of distances corresponding to a restricted depth of field for an image rendered from a reference viewpoint; generate a restricted depth of field depth map from the reference viewpoint using the set of images captured from different viewpoints, where depth estimation precision is higher for pixels with depth estimates within the range of distances corresponding to the restricted depth of field and lower for pixels with depth estimates outside of the range of distances corresponding to the restricted depth of field; and render a restricted depth of field image from the reference viewpoint using the set of images captured from different viewpoints and the restricted depth of field depth map.

Journal ArticleDOI
16 Oct 2014-Sensors
TL;DR: This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring and presents a distance histogram based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects.
Abstract: Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance histogram based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
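The dark-region segmentation step described above can be sketched with a morphological black-hat transform and a global threshold in scikit-image; the structuring-element size and the cleanup threshold are illustrative, not the paper's settings.

# Candidate crack mask: emphasize thin dark structures, threshold, remove speckles.
from skimage import filters, morphology

def candidate_crack_mask(gray):
    dark = morphology.black_tophat(gray, morphology.disk(7))   # thin dark structures
    mask = dark > filters.threshold_otsu(dark)                 # global threshold
    return morphology.remove_small_objects(mask, min_size=64)  # drop small speckles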

Journal ArticleDOI
TL;DR: This paper introduces new SCEs based on finite-state machines based on FSMs for the task of digital image processing and compares the error tolerance, hardware area, and latency of stochastic implementations to those of conventional deterministic implementations using binary radix encoding.
Abstract: Maintaining the reliability of integrated circuits as transistor sizes continue to shrink to nanoscale dimensions is a significant looming challenge for the industry. Computation on stochastic bit streams, which could replace conventional deterministic computation based on a binary radix, allows similar computation to be performed more reliably and often with less hardware area. Prior work discussed a variety of specific stochastic computational elements (SCEs) for applications such as artificial neural networks and control systems. Recently, very promising new SCEs have been developed based on finite-state machines (FSMs). In this paper, we introduce new SCEs based on FSMs for the task of digital image processing. We present five digital image processing algorithms as case studies of practical applications of the technique. We compare the error tolerance, hardware area, and latency of stochastic implementations to those of conventional deterministic implementations using binary radix encoding. We also provide a rigorous analysis of a particular function, namely the stochastic linear gain function, which had only been validated experimentally in prior work.
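A toy example of the underlying encoding: two values represented as Bernoulli bit streams are multiplied by a single AND gate, the canonical stochastic-computing operation; the stream length and values are illustrative.

# Stochastic multiplication: AND of two independent unipolar bit streams.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
a, b = 0.6, 0.3                                # values encoded as probabilities
stream_a = rng.random(n) < a                   # unipolar stochastic encoding of a
stream_b = rng.random(n) < b                   # unipolar stochastic encoding of b
product = np.logical_and(stream_a, stream_b)   # AND gate multiplies the values
print(product.mean())                          # ~0.18, an estimate of a * b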