Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
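The deliverable abstract does not specify how its co-occurrence matrices are built; as a hedged sketch of the general idea (all function names and parameters below are illustrative, not taken from MUCKE), the following Python snippet counts how often pairs of quantized local-descriptor labels ("visual words") fall within a small spatial neighbourhood of each other, producing a matrix that preserves some of the spatial relationships a plain bag-of-words histogram discards.

```python
import numpy as np

def spatial_cooccurrence(labels, positions, num_words, radius=20.0):
    """Count co-occurrences of visual-word labels whose keypoints lie
    within `radius` pixels of each other (hypothetical illustration).

    labels    : (N,) int array, quantized descriptor index per keypoint
    positions : (N, 2) float array, (x, y) keypoint coordinates
    num_words : size of the visual vocabulary
    """
    cooc = np.zeros((num_words, num_words), dtype=np.int64)
    for i in range(len(labels)):
        # distances from keypoint i to all later keypoints
        d = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        for j in np.nonzero(d <= radius)[0] + i + 1:
            cooc[labels[i], labels[j]] += 1
            cooc[labels[j], labels[i]] += 1  # keep the matrix symmetric
    return cooc

# toy usage: 5 keypoints, vocabulary of 3 visual words
words = np.array([0, 1, 1, 2, 0])
pts = np.array([[10, 10], [12, 14], [100, 100], [103, 98], [105, 105]], dtype=float)
print(spatial_cooccurrence(words, pts, num_words=3, radius=10.0))
```
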
Citations
Journal ArticleDOI
TL;DR: It is demonstrated that sarcomere protein gene mutations activate proliferative and profibrotic signals in non-myocyte cells to produce pathologic remodeling in HCM, identifying non-myocyte Tgf-β signaling as a potentially important factor contributing to diastolic dysfunction and heart failure.
Abstract: Mutations in sarcomere protein genes can cause hypertrophic cardiomyopathy (HCM), a disorder characterized by myocyte enlargement, fibrosis, and impaired ventricular relaxation. Here, we demonstrate that sarcomere protein gene mutations activate proliferative and profibrotic signals in non-myocyte cells to produce pathologic remodeling in HCM. Gene expression analyses of non-myocyte cells isolated from HCM mouse hearts showed increased levels of RNAs encoding cell-cycle proteins, Tgf-β, periostin, and other profibrotic proteins. Markedly increased BrdU labeling, Ki67 antigen expression, and periostin immunohistochemistry in the fibrotic regions of HCM hearts confirmed the transcriptional profiling data. Genetic ablation of periostin in HCM mice reduced but did not extinguish non-myocyte proliferation and fibrosis. In contrast, administration of Tgf-β-neutralizing antibodies abrogated non-myocyte proliferation and fibrosis. Chronic administration of the angiotensin II type 1 receptor antagonist losartan to mutation-positive, hypertrophy-negative (prehypertrophic) mice prevented the emergence of hypertrophy, non-myocyte proliferation, and fibrosis. Losartan treatment did not reverse pathologic remodeling of established HCM but did reduce non-myocyte proliferation. These data define non-myocyte activation of Tgf-β signaling as a pivotal mechanism for increased fibrosis in HCM and a potentially important factor contributing to diastolic dysfunction and heart failure. Preemptive pharmacologic inhibition of Tgf-β signals warrants study in human patients with sarcomere gene mutations.

391 citations

Journal ArticleDOI
06 May 2020
TL;DR: A taxonomy is presented for categorizing the inverse problems and deep-learning reconstruction methods arising in computational imaging, along with a discussion of the tradeoffs, caveats and common failure modes associated with these different reconstruction approaches.
Abstract: Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging. We explore the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. Our taxonomy is organized along two central axes: (1) whether or not a forward model is known and to what extent it is used in training and testing, and (2) whether or not the learning is supervised or unsupervised, i.e., whether or not the training relies on access to matched ground truth image and measurement pairs. We also discuss the tradeoffs associated with these different reconstruction approaches, caveats and common failure modes, plus open problems and avenues for future work.
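The taxonomy is organized around the forward model A that maps the unknown image x to the measurements y. As a hedged illustration (not code from the cited paper; the blur kernel, noise level and regularization weight are assumptions), the sketch below sets up the two simplest forward models quoted from this work elsewhere on this page, identity for denoising and convolution with a known blur kernel for deconvolution, and inverts the blur with a regularized inverse filter in the Fourier domain.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((64, 64))             # unknown "ground truth" image

# Denoising: forward model is the identity, y = x + noise
y_denoise = x + 0.05 * rng.standard_normal(x.shape)

# Deconvolution: forward model is (circular) convolution with a known blur kernel h
h = np.zeros_like(x)
h[:5, :5] = 1.0 / 25.0               # 5x5 box blur, stored on the image grid
H = np.fft.fft2(h)
y_blur = np.real(np.fft.ifft2(np.fft.fft2(x) * H))

# Naive model-based reconstruction: regularized inverse (Wiener-like) filter
eps = 1e-2                           # regularization weight (assumed, not from the paper)
x_hat = np.real(np.fft.ifft2(np.fft.fft2(y_blur) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```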

390 citations


Cites background from "Image Processing"

  • ...Table 1: Examples of inverse problems in imaging —
        Application             Forward model   Notes
        Denoising [58]          A = I           I is the identity matrix
        Deconvolution [58, 59]  A(x) = h ∗ x    h is a known blur kernel and ∗ denotes convolution....

  • ...Computed tomography [58]: A = R, where R is the discrete Radon transform [66]....

Journal ArticleDOI
Weikang Qian, Xin Li, Marc D. Riedel, Kia Bazargan, David J. Lilja
TL;DR: The concept of stochastic logic is applied to a reconfigurable architecture that implements processing operations on a datapath and it is found to be much more tolerant of soft errors than conventional hardware implementations.
Abstract: Mounting concerns over variability, defects, and noise motivate a new approach for digital circuitry: stochastic logic, that is to say, logic that operates on probabilistic signals and so can cope with errors and uncertainty. Techniques for probabilistic analysis of circuits and systems are well established. We advocate a strategy for synthesis. In prior work, we described a methodology for synthesizing stochastic logic, that is to say logic that operates on probabilistic bit streams. In this paper, we apply the concept of stochastic logic to a reconfigurable architecture that implements processing operations on a datapath. We analyze cost as well as the sources of error: approximation, quantization, and random fluctuations. We study the effectiveness of the architecture on a collection of benchmarks for image processing. The stochastic architecture requires less area than conventional hardware implementations. Moreover, it is much more tolerant of soft errors (bit flips) than these deterministic implementations. This fault tolerance scales gracefully to very large numbers of errors.
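As a hedged, software-level illustration of stochastic logic (not the paper's reconfigurable hardware datapath), the sketch below encodes values in [0, 1] as Bernoulli bit streams: a bitwise AND of two independent streams multiplies their probabilities, and a multiplexer driven by a 0.5-probability select stream performs scaled addition. The stream length N is an assumed parameter; shortening it increases the random-fluctuation error the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                       # stream length: longer streams reduce random fluctuation

def encode(p, n=N):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return rng.random(n) < p

a, b = 0.8, 0.3
sa, sb = encode(a), encode(b)

# Multiplication: AND of two independent streams has P(1) = a * b
prod = sa & sb
print("a*b ~", prod.mean(), "(exact:", a * b, ")")

# Scaled addition: a MUX with a 0.5 select stream gives P(1) = (a + b) / 2
sel = encode(0.5)
scaled_sum = np.where(sel, sa, sb)
print("(a+b)/2 ~", scaled_sum.mean(), "(exact:", (a + b) / 2, ")")
```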

367 citations

Journal ArticleDOI
TL;DR: Pore network models are found to be a valuable tool for understanding and predicting meso-scale phenomena, linking single-pore processes, where other techniques are more accurate, with the homogenised continuum description of porous media used by the engineering community.

367 citations


Cites methods from "Image Processing"

  • ...Micro-CT is a non-destructive and non-invasive imaging technique used to characterise cross-sectional and three-dimensional internal structures (Hazlett, 1995; Lindquist et al., 1996; Schlüter et al., 2014)....

  • ...Three main configurations are used in systems that seek submicron resolution; a good review can be found in (Schlüter et al., 2014; Withers, 2007)....

Journal ArticleDOI
TL;DR: The techniques essential to the success of these applications, such as bioimage feature identification, segmentation and tracking, registration, annotation, mining, image data management and visualization, are summarized, along with a brief overview of the available bioimage databases, analysis tools and other resources.
Abstract: In recent years, the deluge of complicated molecular and cellular microscopic images creates compelling challenges for the image computing community. There has been an increasing focus on developing novel image processing, data mining, database and visualization techniques to extract, compare, search and manage the biological knowledge in these data-intensive problems. This emerging new area of bioinformatics can be called ‘bioimage informatics’. This article reviews the advances of this field from several aspects, including applications, key techniques, available tools and resources. Application examples such as high-throughput/high-content phenotyping and atlas building for model organisms demonstrate the importance of bioimage informatics. The essential techniques to the success of these applications, such as bioimage feature identification, segmentation and tracking, registration, annotation, mining, image data management and visualization, are further summarized, along with a brief overview of the available bioimage databases, analysis tools and other resources. Contact: pengh@janelia.hhmi.org Supplementary information: Supplementary data are available at Bioinformatics online.
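As a hedged illustration of just one of the techniques listed above, segmentation by global thresholding followed by connected-component labeling (not any specific tool from the review; the synthetic image and threshold are assumptions), the sketch below uses NumPy and SciPy.

```python
import numpy as np
from scipy import ndimage

# Synthetic "fluorescence" image: two bright blobs on a noisy background
rng = np.random.default_rng(2)
img = 0.1 * rng.random((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
img += np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 50.0)    # cell 1
img += np.exp(-((yy - 90) ** 2 + (xx - 85) ** 2) / 80.0)    # cell 2

# Segmentation: global threshold, then connected-component labeling
mask = img > 0.5                       # threshold chosen for this toy image
labels, num_cells = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=np.arange(1, num_cells + 1))

print("detected objects:", num_cells)
print("object areas (pixels):", areas)
```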

366 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
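A minimal sketch of the gray-tone spatial-dependence (co-occurrence) matrix for a single displacement, together with two of the commonly derived statistics, contrast and angular second moment (energy); this illustrates the general construction only and is not the paper's full feature set or its decision rules (the toy image and displacement are assumptions).

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-tone co-occurrence matrix for one
    displacement (dx, dy); `img` must already be quantized to `levels` tones."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            i, j = img[y, x], img[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1                 # count both directions (symmetric matrix)
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)      # gray-tone contrast

def energy(p):
    return np.sum(p ** 2)                # angular second moment

# toy 4-tone image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, dx=1, dy=0, levels=4)
print("contrast:", contrast(p), "energy:", energy(p))
```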

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries, image perception, sampling and quantization, transforms, enhancement, filtering and restoration, analysis and computer vision, reconstruction from projections, and data compression.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
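SPIHT organizes significance information with spatial-orientation trees, which is beyond a short snippet; as a hedged sketch of just the underlying "partial ordering by magnitude with ordered bit-plane transmission" idea, the code below emits coefficient magnitudes bit plane by bit plane, so a bit stream truncated at any point still reconstructs the most significant information first (the toy coefficients and plane count are assumptions, and sign bits are omitted).

```python
import numpy as np

def bitplane_encode(coeffs, num_planes=6):
    """Progressive bit-plane transmission of coefficient magnitudes.
    Returns a list of (threshold, bits) pairs, most significant plane first."""
    mags = np.abs(coeffs).ravel()
    T = 2 ** (num_planes - 1)
    stream = []
    for _ in range(num_planes):
        stream.append((T, (mags >= T).astype(int)))
        mags = np.where(mags >= T, mags - T, mags)   # remove the transmitted bit
        T //= 2
    return stream

def bitplane_decode(stream, shape):
    """Reconstruct magnitudes from however many planes were received."""
    rec = np.zeros(int(np.prod(shape)))
    for T, bits in stream:
        rec += T * bits
    return rec.reshape(shape)

coeffs = np.array([[31.0, 12.0], [3.0, 0.5]])        # toy "wavelet coefficients"
stream = bitplane_encode(coeffs)
print(bitplane_decode(stream[:2], coeffs.shape))     # coarse: first two planes only
print(bitplane_decode(stream, coeffs.shape))         # all planes
```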

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents retained the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; after reseeding, eight constructs maintained in a bioreactor could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations