Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
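As a sketch of how a co-occurrence matrix can encode spatial relationships between detected concepts (rather than raw pixel values), consider the following. The detection format, the radius threshold, and the normalization are hypothetical choices for illustration; the abstract does not specify them.

```python
import numpy as np

def concept_cooccurrence(detections, n_concepts, radius):
    """Count how often concept i is detected within `radius` of concept j.

    detections: list of (concept_id, x, y) tuples (hypothetical format).
    Returns an (n_concepts, n_concepts) matrix normalized to sum to 1.
    """
    m = np.zeros((n_concepts, n_concepts))
    for ci, xi, yi in detections:
        for cj, xj, yj in detections:
            if (ci, xi, yi) == (cj, xj, yj):
                continue  # skip self-pairing of the same detection
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                m[ci, cj] += 1
    total = m.sum()
    return m / total if total else m
```

Unlike a bag-of-concepts histogram, such a matrix distinguishes "sky above water" style images from images where the same concepts appear far apart.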
Citations
Journal ArticleDOI
01 Jul 2016
TL;DR: This survey is the first review about the hybridization of DMs and MHs, and proposes some guidelines for choosing and designing the most appropriate combination of deformable models and metaheuristics when facing a given segmentation problem.
Abstract: Graphical abstract: display omitted.

Highlights:

  • Metaheuristics (MHs) are general-purpose stochastic optimization methods.
  • A Deformable Model (DM) tries to maximize its overlap with the object to segment.
  • This survey is the first review about the hybridization of DMs and MHs.
  • We provide guidelines to choose/design your hybrid segmentation approach.
  • This review paper studies, analyzes and contextualizes more than 120 papers.
  • MHs help in parameter selection, initial boundary location and DM contour evolution.

Deformable models are segmentation techniques that adapt a curve with the goal of maximizing its overlap with the actual contour of an object of interest within an image. Such a process requires the definition of an optimization framework whose most critical issues include: choosing an optimization method which exhibits robustness with respect to noisy and highly multimodal search spaces; selecting the optimization and segmentation algorithms' parameters; choosing the representation for encoding prior knowledge on the image domain of interest; and initializing the curve in a location which favors its convergence onto the boundary of the object of interest. All these problems are extensively discussed within this manuscript, with reference to the family of global stochastic optimization techniques generally termed metaheuristics, which are designed to solve complex optimization and machine learning problems. In particular, we present a complete study on the application of metaheuristics to image segmentation based on deformable models. This survey studies, analyzes and contextualizes the most notable and recent works on this topic, proposing an original categorization for these hybrid approaches. It aims to serve as a reference work which proposes some guidelines for choosing and designing the most appropriate combination of deformable models and metaheuristics when facing a given segmentation problem. After recalling the principles underlying deformable models and metaheuristics, we broadly review the different hybrid approaches employed to solve image segmentation problems, and conclude with a general discussion about methodological and design issues as well as future research and application trends.
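The pairing the survey reviews can be illustrated with a toy example: a parametric "deformable model" (here just a circle) whose overlap with a target object is maximized by a minimal metaheuristic. The circle parameterization, the Dice-style fitness, and the greedy random search standing in for PSO/DE/GA are all illustrative assumptions, not methods taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def overlap(mask, cx, cy, r):
    """Dice-like overlap between a circular model and a binary target mask."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    model = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    inter = np.logical_and(model, mask).sum()
    return 2.0 * inter / (model.sum() + mask.sum() + 1e-9)

def random_search(mask, iters=500):
    """Minimal metaheuristic: perturb the best parameters, keep improvements.

    A stand-in for the population-based methods (GA, PSO, DE, ...) that the
    survey actually covers; those add crossover, swarms, or difference vectors.
    """
    h, w = mask.shape
    best = np.array([w / 2.0, h / 2.0, min(h, w) / 4.0])  # cx, cy, r
    best_f = overlap(mask, *best)
    for _ in range(iters):
        cand = best + rng.normal(0.0, 2.0, size=3)  # random perturbation
        f = overlap(mask, *cand)
        if f > best_f:  # greedy acceptance
            best, best_f = cand, f
    return best, best_f
```

The fitness landscape here is smooth, which is exactly why the survey stresses robustness to noisy, highly multimodal search spaces for real images.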

81 citations

Journal ArticleDOI
TL;DR: It is suggested that little repair of freezing damage is possible in P. isabella caterpillars and repeated freeze–thaw cycles may present significant challenges to survival in this species.
Abstract: Repeated freeze–thaw cycles are common and are increasing in frequency with climate change in many temperate locations, yet understanding of their impact on freeze-tolerant insects is extremely limited. We investigated the effects of repeated freezing and thawing on the freeze-tolerant final instar caterpillars of the moth Pyrrharctia isabella (Lepidoptera: Arctiidae) by subjecting individuals to either a single sustained 35 h freeze or five 7 h freezes. Sub-lethal effects were quantified with changes in three broad groups of measures: (1) cold hardiness, (2) metabolic rate and energy reserves and (3) survival after challenge with fungal spores. Repeated freeze–thaw cycles increased mortality to almost 30% and increased tissue damage in Malpighian tubules and hemocytes. Repeated freezing increased caterpillar glycerol concentration by 0.82 mol l⁻¹. There were no changes in metabolic rate or energy reserves with repeated freezing. For the first time, we report increased survival after immune challenge in caterpillars after freezing and suggest that this may be linked to wounding during freezing. We suggest that little repair of freezing damage is possible in P. isabella caterpillars and repeated freeze–thaw cycles may present significant challenges to survival in this species.

81 citations


Cites methods from "Image Processing"

  • ...Image analysis was conducted with ImageJ software (Abramoff et al., 2004)....


Journal ArticleDOI
TL;DR: This work examines two mutations of the same motif in a vacuolar sorting receptor that have opposite effects on the protein's localization, shedding light on the full transport cycle of the receptor, and suggests that the prevacuolar compartment matures by gradual receptor depletion, leading to the formation of a late prevacuolar compartment situated between the prevacuolar compartment and the vacuole.
Abstract: Plant vacuolar sorting receptors (VSRs) display cytosolic Tyr motifs (YMPL) for clathrin-mediated anterograde transport to the prevacuolar compartment. Here, we show that the same motif is also required for VSR recycling. A Y612A point mutation in Arabidopsis thaliana VSR2 leads to a quantitative shift in VSR2 steady state levels from the prevacuolar compartment to the trans-Golgi network when expressed in Nicotiana tabacum. By contrast, the L615A mutant VSR2 leaks strongly to vacuoles and accumulates in a previously undiscovered compartment. The latter is shown to be distinct from the Golgi stacks, the trans-Golgi network, and the prevacuolar compartment but is characterized by high concentrations of soluble vacuolar cargo and the rab5 GTPase Rha1(RabF2a). The results suggest that the prevacuolar compartment matures by gradual receptor depletion, leading to the formation of a late prevacuolar compartment situated between the prevacuolar compartment and the vacuole.

81 citations

Journal ArticleDOI
TL;DR: Retinal images revealed a severely disrupted photoreceptor mosaic in the fovea and parafovea, where the size and density of the visible photoreceptors resembled that of normal rods.

81 citations

Journal ArticleDOI
TL;DR: Gaussian scale-space theory is used to derive a multiscale model for edge analysis that predicts with remarkable accuracy results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter.
Abstract: To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
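The two-stage scheme described in the abstract (an odd-symmetric Gaussian first-derivative filter, half-wave rectification, then a Gaussian second-derivative filter) can be sketched in 1-D as follows. The kernel support, the single fixed scale, and the sign convention for the final rectification are assumptions for illustration, not the paper's published parameters.

```python
import numpy as np

def gauss_deriv(x, sigma, order):
    """Sampled first- or second-derivative-of-Gaussian kernel (1-D)."""
    g = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    if order == 1:
        return -x / sigma ** 2 * g
    return (x ** 2 / sigma ** 4 - 1 / sigma ** 2) * g

def edge_response(profile, sigma):
    """Two-stage filtering with half-wave rectification between stages.

    The rectification is the crucial nonlinearity: it suppresses the
    "phantom" edges a purely linear cascade would produce. Peaks in the
    returned response mark edge locations at this scale.
    """
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1, dtype=float)
    s1 = np.convolve(profile, gauss_deriv(x, sigma, 1), mode="same")
    s1 = np.maximum(s1, 0.0)  # half-wave rectify: keep one edge polarity
    s2 = np.convolve(s1, gauss_deriv(x, sigma, 2), mode="same")
    return np.maximum(-s2, 0.0)  # rectified second stage; peak = edge
```

In the full model this is repeated over many scales, and the scale of the peak response in the resulting scale-space map estimates the edge's blur.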

81 citations


Cites background from "Image Processing"

  • ...We have discovered (or rediscovered; Kovasznay & Joseph, 1955) that a simple, physiologically plausible modification to the linear N3 scheme solves the multiple peaks problem and makes accurate predictions about perceived edge location and blur....


References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
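As a rough sketch of the gray-tone spatial-dependence idea, the following computes a single-offset co-occurrence matrix and a few of the classic features derived from it. The normalization and the choice of which features to show are simplifications; the original paper defines fourteen features over several offsets and angles.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-tone spatial-dependence (co-occurrence) matrix for one offset.

    Counts how often gray level i occurs at displacement (dx, dy) from
    gray level j, then normalizes to a joint probability. `img` must
    contain integer levels in [0, levels).
    """
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(p):
    """Three of the classic texture features computed from the GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)  # called "angular second moment" in the paper
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```

A perfectly uniform patch concentrates all mass on the matrix diagonal (zero contrast, maximal energy), while busy textures spread mass off-diagonal.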

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their application in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
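The principles named in the abstract (a wavelet transform plus partial ordering of coefficients by magnitude, with the threshold halved on each pass) can be sketched as follows. This is a didactic simplification, assuming a one-level Haar transform; it is not the SPIHT codec itself, which uses deeper transforms, spatial-orientation trees, and set-partitioning bit streams.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (averages and differences)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def significance_passes(coeffs, n_passes=4):
    """Ordered transmission idea behind EZW/SPIHT: emit coefficient indices
    in decreasing magnitude-threshold order, halving the threshold each pass.
    Truncating the output anywhere yields the best coefficients seen so far,
    which is what makes the bit stream embedded/progressive."""
    c = np.abs(coeffs).ravel()
    t = 2.0 ** np.floor(np.log2(c.max()))
    significant = np.zeros(c.size, dtype=bool)
    order = []
    for _ in range(n_passes):
        newly = np.flatnonzero((c >= t) & ~significant)
        significant[newly] = True
        order.extend(newly.tolist())
        t /= 2.0
    return order
```

Most of SPIHT's coding gain comes from predicting, via the trees the sketch omits, that whole sets of coefficients are insignificant at the current threshold.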

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations