Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress in low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts they represent, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
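The co-occurrence representation mentioned in the abstract can be made concrete with a short sketch. The following Python snippet is only an illustration, not MUCKE's actual implementation; the displacement, quantization and normalization choices are assumptions. It builds a normalized gray-level co-occurrence matrix for one pixel displacement.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy).

    `image` is a 2-D array of integer gray levels already quantized to
    [0, levels).  Returns a (levels x levels) matrix normalized to sum
    to 1, so entry (i, j) is the probability that a pixel of level i
    has a neighbour of level j at offset (dx, dy)."""
    img = np.asarray(image)
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            mat[img[y, x], img[y + dy, x + dx]] += 1
    total = mat.sum()
    return mat / total if total > 0 else mat

# Toy usage: a 4x4 image quantized to 4 gray levels.
toy = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(glcm(toy, dx=1, dy=0, levels=4))
```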
Citations
Journal ArticleDOI
TL;DR: A model for PSII remodeling during state transitions is proposed, which involves division of the megacomplex into supercomplexes, triggered by phosphorylation of LHCII type I, followed by LHCII undocking from the supercomplex, triggered by phosphorylation of minor LHCIIs and PSII core subunits.
Abstract: State transitions, or the redistribution of light-harvesting complex II (LHCII) proteins between photosystem I (PSI) and photosystem II (PSII), balance the light-harvesting capacity of the two photosystems to optimize the efficiency of photosynthesis. Studies on the migration of LHCII proteins have focused primarily on their reassociation with PSI, but the molecular details on their dissociation from PSII have not been clear. Here, we compare the polypeptide composition, supramolecular organization, and phosphorylation of PSII complexes under PSI- and PSII-favoring conditions (State 1 and State 2, respectively). Three PSII fractions, a PSII core complex, a PSII supercomplex, and a multimer of PSII supercomplex or PSII megacomplex, were obtained from a transformant of the green alga Chlamydomonas reinhardtii carrying a His-tagged CP47. Gel filtration and single particles on electron micrographs showed that the megacomplex was predominant in State 1, whereas the core complex was predominant in State 2, indicating that LHCIIs are dissociated from PSII upon state transition. Moreover, in State 2, strongly phosphorylated LHCII type I was found in the supercomplex but not in the megacomplex. Phosphorylated minor LHCIIs (CP26 and CP29) were found only in the unbound form. The PSII subunits were most phosphorylated in the core complex. Based on these observations, we propose a model for PSII remodeling during state transitions, which involves division of the megacomplex into supercomplexes, triggered by phosphorylation of LHCII type I, followed by LHCII undocking from the supercomplex, triggered by phosphorylation of minor LHCIIs and PSII core subunits.

123 citations

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A piecewise mapping function according to human visual sensitivity of contrast is used so that adaptivity can be achieved without extra bits for overhead in the embedding of multimedia data into a host image.
Abstract: We propose in this paper a novel method for embedding multimedia data (including audio, image, video, or text; compressed or non-compressed) into a host image. The classical LSB concept is adopted, but with the number of LSBs adapting to pixels of different graylevels. A piecewise mapping function based on human visual sensitivity to contrast is used so that adaptivity can be achieved without extra overhead bits. The leading information needed for data decoding is small, no more than 3 bytes. Experiments show that a large amount of bit streams (nearly 30%-45% of the host image) can be embedded without severe degradation of the image quality (33-40 dB, depending on the volume of embedded bits).
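The adaptive-LSB idea can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the piecewise capacity map is invented for the example, and it is derived only from a pixel's high-order bits so a decoder can recompute it without side information, mirroring the "no extra overhead bits" property described above.

```python
import numpy as np

def lsb_capacity(gray):
    """Hypothetical piecewise map from a pixel's high-order bits to the
    number of embeddable LSBs.  Using only the top 4 bits keeps the
    decision unchanged after embedding; the paper's actual mapping,
    based on human contrast sensitivity, is not reproduced here."""
    high = gray & 0xF0
    if high < 32:
        return 2
    elif high < 128:
        return 3
    return 4

def embed(host, bits):
    """Embed an iterable of 0/1 bits into a copy of an 8-bit host image,
    scanning pixels in raster order and using lsb_capacity(pixel) bits
    per pixel until the payload is exhausted."""
    out = host.copy().astype(np.uint8)
    it = iter(bits)
    done = False
    for idx in np.ndindex(out.shape):
        k = lsb_capacity(int(out[idx]))
        chunk = []
        for _ in range(k):
            b = next(it, None)
            if b is None:
                done = True
                break
            chunk.append(b)
        if chunk:
            value = int(''.join(str(b) for b in chunk), 2)
            mask = 0xFF ^ ((1 << len(chunk)) - 1)
            out[idx] = (int(out[idx]) & mask) | value
        if done:
            break
    return out
```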

121 citations

Journal ArticleDOI
TL;DR: A critical comparison of different state-of-the-art computer vision methods proposed by researchers for classifying fruits and vegetables is presented.

120 citations


Cites background from "Image Processing"

  • ...An adaptive threshold selection based segmentation has been presented in [94]....

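The "adaptive threshold selection based segmentation" referenced in the citation context above is not spelled out on this page. As one classic instance of adaptive threshold selection, Otsu's method can be sketched in a few lines; the survey's cited technique may differ in detail.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Select a global threshold by maximizing between-class variance
    (Otsu's method), applied to an 8-bit grayscale image."""
    hist, _ = np.histogram(image.ravel(), bins=nbins, range=(0, 256))
    prob = hist.astype(np.float64) / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to each bin
    mu = np.cumsum(prob * np.arange(nbins))  # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Usage: binary segmentation of an 8-bit grayscale image.
# mask = image > otsu_threshold(image)
```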

Journal ArticleDOI
TL;DR: Analysis of mean-standardized differences in overall trait means and reaction norm shape revealed that evolutionary divergence of curvature is common and should be considered an important aspect of plasticity, together with slope.
Abstract: Understanding the evolution of reaction norms remains a major challenge in ecology and evolution. Investigating evolutionary divergence in reaction norm shapes between populations and closely related species is one approach to providing insights. Here we use a meta-analytic approach to compare divergence in reaction norms of closely related species or populations of animals and plants across types of traits and environments. We quantified mean-standardized differences in overall trait means (Offset) and reaction norm shape (including both Slope and Curvature). These analyses revealed that differences in shape (Slope and Curvature together) were generally greater than differences in Offset. Additionally, differences in Curvature were generally greater than differences in Slope. The type of taxon contrast (species vs. population), trait, organism, and the type and novelty of environments all contributed to the best-fitting models, especially for Offset, Curvature, and the total differences (Total) ...
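The quantities compared above (Offset, Slope, Curvature) can be illustrated with a toy computation. The sketch below fits quadratic reaction norms to two populations and reports mean-standardized differences in the three terms; the meta-analysis's actual estimators and standardization are not reproduced here, so treat this purely as an illustration.

```python
import numpy as np

def reaction_norm_divergence(env, trait_a, trait_b):
    """Fit trait = c0 + c1*env + c2*env**2 to each population and report
    absolute differences in intercept (offset), linear term (slope) and
    quadratic term (curvature), standardized by the grand trait mean."""
    env = np.asarray(env, dtype=float)
    grand_mean = np.mean(np.concatenate([trait_a, trait_b]))
    # numpy returns the highest-order coefficient first: [c2, c1, c0]
    c2a, c1a, c0a = np.polyfit(env, trait_a, deg=2)
    c2b, c1b, c0b = np.polyfit(env, trait_b, deg=2)
    return {
        "offset": abs(c0a - c0b) / grand_mean,
        "slope": abs(c1a - c1b) / grand_mean,
        "curvature": abs(c2a - c2b) / grand_mean,
    }

# Example: two populations measured across five temperatures.
env = [10, 15, 20, 25, 30]
pop_a = [2.0, 3.1, 3.9, 4.2, 4.1]
pop_b = [2.5, 3.0, 3.4, 3.6, 3.7]
print(reaction_norm_divergence(env, pop_a, pop_b))
```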

119 citations


Cites methods from "Image Processing"

  • ...We employed ImageJ (Rasband 1997–2011; Abramoff et al. 2004) to determine the trait values from graphics....


Journal ArticleDOI
TL;DR: Virtual finger (VF) is developed to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, click, or zoom on the 2D projection plane of an image as visualized with a computer.
Abstract: Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
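One simple reading of how a single 2D operation can select 3D structure: for each stroke point on the projection plane, cast a ray through the volume and keep the brightest voxel. The sketch below illustrates only that reduced idea; the published Virtual Finger methods use more robust ray statistics and curve refinement.

```python
import numpy as np

def stroke_to_curve(volume, stroke_xy):
    """Map a 2-D stroke drawn on the XY projection of a volume to a 3-D
    curve by taking, for each stroke point, the depth (z) of the
    brightest voxel along the viewing ray.  `volume` is indexed as
    volume[z, y, x]."""
    curve = []
    for x, y in stroke_xy:
        ray = volume[:, y, x]      # intensities along the viewing axis
        z = int(np.argmax(ray))    # brightest voxel wins
        curve.append((x, y, z))
    return curve

# Usage with a synthetic 64^3 volume and a short diagonal stroke.
vol = np.random.rand(64, 64, 64)
stroke = [(10, 10), (11, 11), (12, 12)]
print(stroke_to_curve(vol, stroke))
```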

119 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
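Given a normalized co-occurrence (gray-tone spatial-dependence) matrix, a few of the classic textural features can be computed as below. This is a small subset of the fourteen features defined in the paper, written in their common textbook formulations; it is a sketch, not the authors' code.

```python
import numpy as np

def haralick_features(p):
    """Texture features from a normalized co-occurrence matrix p
    (entries sum to 1)."""
    p = np.asarray(p, dtype=np.float64)
    n = p.shape[0]
    i, j = np.indices((n, n))
    contrast = np.sum((i - j) ** 2 * p)           # local gray-level variation
    energy = np.sum(p ** 2)                       # angular second moment
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "correlation": correlation}
```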

20,442 citations

Book
03 Oct 1988
TL;DR: This chapter discusses two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
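The ordered bit-plane transmission principle named above can be illustrated with a toy encoder that emits significance and refinement bits plane by plane, from the most significant bit down. This sketch omits the set partitioning over spatial orientation trees that gives SPIHT its efficiency, so it only illustrates the embedded, magnitude-ordered idea.

```python
import numpy as np

def bitplane_stream(coeffs, num_planes=8):
    """Emit (kind, index, bit) symbols plane by plane: a ('sig', i, sign)
    symbol when coefficient i first exceeds the current threshold, and a
    ('ref', i, bit) refinement symbol for already-significant ones."""
    c = np.asarray(coeffs, dtype=np.int64).ravel()
    magnitude = np.abs(c)
    significant = np.zeros(c.shape, dtype=bool)
    stream = []
    for plane in range(num_planes - 1, -1, -1):
        threshold = 1 << plane
        # Sorting pass: coefficients that just crossed the threshold.
        newly = (~significant) & (magnitude >= threshold)
        for idx in np.flatnonzero(newly):
            stream.append(("sig", int(idx), int(np.sign(c[idx]))))
        significant |= newly
        # Refinement pass: one more magnitude bit per significant coefficient.
        for idx in np.flatnonzero(significant & ~newly):
            stream.append(("ref", int(idx), int((magnitude[idx] >> plane) & 1)))
    return stream

# Truncating the stream at any point yields a progressively coarser
# reconstruction, which is the embedded-coding property exploited above.
print(bitplane_stream([34, -3, 7, 0, 120, -65], num_planes=8)[:10])
```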

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; eight reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations