Proceedings Article

Image Processing

01 Jan 1994-
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts they represent, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
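
The abstract above proposes co-occurrence matrices as a way to capture spatial relationships that plain low-level descriptors miss. As a rough illustration of that general idea (not the MUCKE implementation), the following sketch computes a normalized gray-level co-occurrence matrix for one pixel offset; the number of gray levels and the offset are arbitrary assumptions.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset.

    Counts how often a pixel with quantized gray level j occurs at the
    given offset from a pixel with gray level i, then normalizes the
    counts to a joint probability distribution.
    """
    # Quantize the 8-bit image to `levels` gray levels.
    q = np.floor(image.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)

    dr, dc = offset
    rows, cols = q.shape
    counts = np.zeros((levels, levels), dtype=np.int64)

    # Reference pixels and their neighbours at the chosen offset.
    ref = q[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    nbr = q[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    np.add.at(counts, (ref.ravel(), nbr.ravel()), 1)

    return counts / counts.sum()

# Example: a random 8-bit "image".
img = np.random.randint(0, 256, size=(64, 64))
P = glcm(img, levels=8, offset=(0, 1))
print(P.shape, P.sum())  # (8, 8), sums to 1
```

Texture and spatial-layout statistics (contrast, homogeneity, correlation) can then be derived from the normalized matrix P.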
Citations
Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
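
The abstract credits SPIHT's performance to partial ordering of wavelet coefficients by magnitude and ordered bit-plane transmission. The sketch below illustrates just that ordering idea on a 2-D wavelet decomposition, assuming the PyWavelets package is available; it deliberately omits the zerotree/set-partitioning data structures and the entropy coder that the actual EZW/SPIHT algorithms use.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def bitplane_passes(image, wavelet="bior4.4", levels=4):
    """Illustrate magnitude-ordered bit-plane coding of wavelet coefficients.

    Yields, for each bit plane from most to least significant, the indices
    of coefficients that first become significant at that threshold -- the
    ordering principle behind EZW/SPIHT, without their tree structures.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    arr, _ = pywt.coeffs_to_array(coeffs)        # all subbands in one array
    mag = np.abs(arr).ravel()

    top = int(np.floor(np.log2(mag.max())))      # most significant bit plane
    already = np.zeros(mag.size, dtype=bool)
    for plane in range(top, -1, -1):
        T = 2.0 ** plane
        newly = (mag >= T) & ~already            # significance pass
        already |= newly
        yield plane, np.flatnonzero(newly)

img = np.random.randint(0, 256, size=(128, 128))
for plane, idx in bitplane_passes(img):
    print(f"bit plane {plane}: {idx.size} coefficients become significant")
```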

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; eight reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....


  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29].... (see the sketch after these excerpts)

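The excerpts above refer to an isotropic bandpass decomposition obtained from Laplacian of Gaussian filtering and realized as a Laplacian pyramid. A minimal sketch of such a pyramid, using SciPy's Gaussian filter with an arbitrary level count and smoothing scale, is given below; it illustrates the decomposition in general terms rather than the cited system's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom  # SciPy assumed available

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Build a simple Laplacian pyramid.

    Each level stores the difference between the current image and an
    upsampled, blurred coarser version, giving an isotropic bandpass
    decomposition; the final entry is the lowpass remainder.
    """
    pyramid = []
    current = image.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        coarse = blurred[::2, ::2]                    # downsample by 2
        factors = tuple(np.array(current.shape) / np.array(coarse.shape))
        upsampled = zoom(coarse, factors, order=1)    # back to full size
        pyramid.append(current - upsampled)           # bandpass residual
        current = coarse
    pyramid.append(current)                           # lowpass remainder
    return pyramid

img = np.random.rand(128, 128)
for i, band in enumerate(laplacian_pyramid(img)):
    print(f"level {i}: shape {band.shape}")
```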

Journal ArticleDOI
TL;DR: This paper identifies promising techniques for image retrieval, classifies them according to standard principles, examines implementation procedures for each technique, and discusses their advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....


  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....


  • ...Designers of shape invariants argue that although most other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....


  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....


  • ...Assuming the shape boundary has been represented as a shape signature z(i), the rth moment m_r and central moment μ_r can be estimated as [14] (see the sketch after these excerpts)

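The last excerpt refers to the rth raw and central moments of a shape signature z(i). Under the usual definitions (sample means of z^r and of (z - mean(z))^r), a minimal sketch looks as follows; the centroid-distance signature in the example is a hypothetical illustration, not data from the cited work.

```python
import numpy as np

def signature_moments(z, r):
    """Moments of a 1-D shape signature z(i) (e.g. centroid distance).

    m_r  : rth raw moment      = mean of z**r
    mu_r : rth central moment  = mean of (z - mean(z))**r
    """
    z = np.asarray(z, dtype=float)
    m_r = np.mean(z ** r)
    mu_r = np.mean((z - z.mean()) ** r)
    return m_r, mu_r

# Example: centroid-distance signature sampled along an ellipse boundary.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)
z = np.hypot(x - x.mean(), y - y.mean())   # distance of boundary to centroid
print(signature_moments(z, 2))
```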

References
Journal ArticleDOI
TL;DR: The key idea is to represent the discrete shift maps on a smooth basis obtained by the diffusion maps algorithm; in the limit of large data, this converges to a Galerkin projection of the semigroup solution to the underlying dynamics on a basis adapted to the invariant measure.
Abstract: This paper presents a nonparametric modeling approach for forecasting stochastic dynamical systems on low-dimensional manifolds. The key idea is to represent the discrete shift maps on a smooth basis which can be obtained by the diffusion maps algorithm. In the limit of large data, this approach converges to a Galerkin projection of the semigroup solution to the underlying dynamics on a basis adapted to the invariant measure. This approach allows one to quantify uncertainties (in fact, evolve the probability distribution) for nontrivial dynamical systems with equation-free modeling. We verify our approach on various examples, ranging from an inhomogeneous anisotropic stochastic differential equation on a torus, the chaotic Lorenz three-dimensional model, and the Niño-3.4 data set which is used as a proxy of the El Niño Southern Oscillation.
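
The central step described above is obtaining a smooth basis from data with the diffusion maps algorithm. The sketch below shows a basic version of that step only (Gaussian kernel, density normalization, Markov normalization, leading eigenvectors); the bandwidth and normalization choices are generic assumptions, and the Galerkin projection of the shift map onto this basis is omitted.

```python
import numpy as np

def diffusion_basis(X, epsilon, n_basis=10):
    """Compute a smooth data-driven basis with a basic diffusion-maps step.

    X : (N, d) array of samples on (or near) a low-dimensional manifold.
    Returns the leading eigenvalues/eigenvectors of the Markov-normalized
    Gaussian kernel, which serve as basis functions on the sampled manifold.
    """
    # Pairwise squared distances and Gaussian kernel.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)

    # Density normalization (removes sampling-density bias), then
    # row-normalization to a Markov matrix.
    q = K.sum(axis=1)
    K_tilde = K / np.outer(q, q)
    P = K_tilde / K_tilde.sum(axis=1, keepdims=True)

    # Leading eigenvectors of P give the smooth basis functions.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[:n_basis]
    return vals.real[order], vecs.real[:, order]

# Example: noisy samples on a circle (a 1-D manifold embedded in R^2).
theta = np.random.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(300, 2)
vals, basis = diffusion_basis(X, epsilon=0.1)
print(vals[:5])
```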

111 citations

Journal ArticleDOI
TL;DR: The results suggest that CLAVATA was a genetic novelty enabling the morphological innovation of 3D growth in land plants and that CLAVATA-like peptides act via conserved receptor components in Physcomitrella.

110 citations

Journal ArticleDOI
01 Dec 2020
TL;DR: Mobile, automated sensor-based systems will become increasingly common in controlled conditions and, eventually, in the field for measuring plant disease severity for the purpose of research and decision making.
Abstract: The severity of plant diseases, traditionally the proportion of plant tissue exhibiting symptoms, is a key quantitative variable for many diseases and is prone to measurement error. Good quality disease severity data should be accurate (close to the true value). The earliest quantification of disease severity was by visual estimates. Sensor-based image analysis, including visible-spectrum, hyperspectral and multispectral sensors, comprises established technologies that promise to substitute for, or complement, visual ratings. Indeed, these technologies have measured disease severity accurately under controlled conditions but have yet to demonstrate their full potential for accurate measurement under field conditions. Sensor technology is advancing rapidly, and artificial intelligence may help overcome issues for automating severity measurement under hyper-variable field conditions. The adoption of appropriate scales, training, instruction and aids (standard area diagrams) has contributed to improved accuracy of visual estimates. The apogee of accuracy for visual estimation is likely being approached, and any remaining increases in accuracy are likely to be small. Due to automation and rapidity, sensor-based measurement offers potential advantages compared with visual estimates, but the latter will remain important for years to come. Mobile, automated sensor-based systems will become increasingly common in controlled conditions and, eventually, in the field for measuring plant disease severity for research and decision making.
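
Severity is defined above as the proportion of plant tissue exhibiting symptoms. As a toy illustration of how a visible-spectrum image could yield such a proportion (not a method from the review), the sketch below thresholds an excess-green index over a leaf mask; the index, threshold and synthetic image are arbitrary assumptions.

```python
import numpy as np

def severity_fraction(rgb, leaf_mask, symptom_threshold=0.15):
    """Estimate disease severity as the fraction of leaf pixels that look
    symptomatic, using a crude 'greenness' criterion.

    rgb       : (H, W, 3) float image with values in [0, 1]
    leaf_mask : (H, W) boolean mask of pixels belonging to the leaf
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Healthy tissue is assumed greener than symptomatic (chlorotic or
    # necrotic) tissue; the excess-green index and threshold are illustrative.
    excess_green = 2 * g - r - b
    symptomatic = leaf_mask & (excess_green < symptom_threshold)
    return symptomatic.sum() / max(leaf_mask.sum(), 1)

# Example with a synthetic image: left half healthy green, right half lesion.
img = np.zeros((10, 20, 3))
img[:, :10] = [0.2, 0.7, 0.2]    # healthy tissue
img[:, 10:] = [0.5, 0.35, 0.2]   # lesion
mask = np.ones((10, 20), dtype=bool)
print(severity_fraction(img, mask))  # 0.5
```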

109 citations

Journal ArticleDOI
01 Sep 2007-Stroke
TL;DR: T1AM and T0AM are potent neuroprotectants in acute stroke, and T1AM can be used as an antecedent treatment to induce neuroprotection against subsequent ischemia; the hypothermia induced by T1AM and T0AM may underlie this neuroprotection.
Abstract: Background and Purpose— Mild hypothermia confers profound neuroprotection in ischemia. We recently discovered 2 natural derivatives of thyroxine, 3-iodothyronamine (T1AM) and thyronamine (T0AM), that when administered to rodents lower body temperature for several hours without induction of a compensatory homeostatic response. We tested whether T1AM- and T0AM-induced hypothermia protects against brain injury from experimental stroke. Methods— We tested T1AM and T0AM 1 hour after and 2 days before stroke in a mouse model of focal ischemia. To determine whether T1AM and T0AM require hypothermia to protect against stroke injury, the induction of hypothermia was prevented. Results— T1AM and T0AM administration reduced body temperature from 37°C to 31°C. Mice given T1AM or T0AM after the ischemic period had significantly smaller infarcts compared with controls. Mice preconditioned with T1AM before ischemia displayed significantly smaller infarcts compared with controls. Pre- and postischemia treatments required...

109 citations

Journal ArticleDOI
TL;DR: This paper provides an overview of the various technologies developed in recent years to assist the visually impaired in recognizing generic objects in an indoor environment, with a focus on approaches based on computer vision.
Abstract: Though several electronic assistive devices have been developed for the visually impaired in the past few decades, relatively few solutions have been devised to aid them in recognizing generic objects in their environment, particularly indoors. Nevertheless, research in this area is gaining momentum. Among the various technologies being utilized for this purpose, computer vision based solutions are emerging as one of the most promising options, mainly due to their affordability and accessibility. This paper provides an overview of the various technologies that have been developed in recent years to assist the visually impaired in recognizing generic objects in an indoor environment, with a focus on approaches based on computer vision. It aims to introduce researchers to the latest trends in this area as well as to serve as a resource for developers who wish to incorporate such solutions into their own work.

108 citations