Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
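The abstract's proposal to capture spatial relationships with co-occurrence matrices can be illustrated with a gray-level co-occurrence matrix. Below is a minimal sketch, assuming a quantized grayscale image and a single displacement vector; the deliverable's actual descriptor may aggregate several displacements and may operate on concept labels rather than gray levels:

```python
import numpy as np

def cooccurrence_matrix(img, dy, dx, levels):
    """Count how often value pairs occur at displacement (dy, dx).

    img    -- 2D array of integer labels in [0, levels)
    dy, dx -- displacement between the two pixels of each pair
    levels -- number of distinct labels (e.g. quantized gray levels)
    """
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

# Example: 4-level image, horizontal neighbor pairs.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(cooccurrence_matrix(img, 0, 1, levels=4))
```

Unlike a bag of local descriptors, the matrix records which values occur next to which, which is exactly the spatial relationship a histogram discards.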
Citations
Book
03 Oct 1988
TL;DR: This book discusses two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
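Two of the principles named above, partial ordering by magnitude and ordered bit-plane transmission, can be seen in a stripped-down sketch. This is not SPIHT itself: it omits the hierarchical set-partitioning trees and entropy coding, and all names are illustrative. It only shows how sending significance and refinement bits plane by plane yields an embedded, progressively refinable stream:

```python
import numpy as np

def bitplane_stream(coeffs, num_planes):
    """Emit (pass, plane, bits) tuples from the most significant
    magnitude bit plane downward -- the embedded-ordering idea behind
    EZW/SPIHT, without the zerotree/set-partitioning machinery."""
    mag = np.abs(coeffs).astype(np.int64).ravel()  # assumes max(|c|) > 0
    top = int(np.floor(np.log2(mag.max())))
    significant = np.zeros(mag.size, dtype=bool)
    stream = []
    for plane in range(top, top - num_planes, -1):
        t = 1 << plane
        # sorting pass: which coefficients first become significant?
        newly = (~significant) & (mag >= t)
        stream.append(("sig", plane, newly.astype(int)))
        significant |= newly
        # refinement pass: one more magnitude bit for older coefficients
        older = significant & ~newly
        stream.append(("ref", plane, (mag[older] >> plane) & 1))
    return stream
```

Truncating the stream at any point still yields the best reconstruction available for that bit budget, which is why decoding actual file prefixes is meaningful in the reported results.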

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry, and reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....


  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29]....

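The excerpts above describe an isotropic bandpass decomposition built from Laplacian of Gaussian filtering, realized in practice as a Laplacian pyramid. A minimal sketch of both follows, assuming scipy is available; the filter sigmas and level count are illustrative choices, not values from the cited system:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_filter, zoom

def log_bandpass(img, sigma=2.0):
    """Isotropic bandpass response via a Laplacian of Gaussian filter."""
    return gaussian_laplace(img.astype(float), sigma=sigma)

def laplacian_pyramid(img, levels=4):
    """Bandpass decomposition as a Laplacian pyramid: each level is the
    difference between the current image and an upsampled blur of it."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma=1.0)
        down = low[::2, ::2]                                   # decimate
        up = zoom(down, 2.0, order=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)                                   # bandpass residual
        cur = down
    pyr.append(cur)                                            # low-pass remainder
    return pyr
```

The pyramid gives roughly the same octave-spaced bandpass channels as a bank of LoG filters, but at a fraction of the cost, which is why it is the practical realization mentioned in the second excerpt.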

Journal ArticleDOI
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles, examines implementation procedures for each technique, and discusses its advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....


  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....


  • ...Designers of shape invariants argue that although most other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....


  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....


  • ...Assuming the shape boundary has been represented as a shape signature z(i), the r-th moment m_r and the r-th central moment μ_r can be estimated as [14]...

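The last excerpt points to moment estimates computed from a shape signature; the equation itself is elided in the snippet. A sketch of the standard estimators for a signature z(i) with N samples is given below — this is the usual formulation in shape-analysis surveys, not necessarily verbatim from [14]:

```python
import numpy as np

def signature_moments(z, r):
    """r-th moment and r-th central moment of a shape signature z(i),
    e.g. the centroid distance sampled at N boundary points."""
    z = np.asarray(z, dtype=float)
    m_r = np.mean(z ** r)                 # m_r  = (1/N) * sum_i z(i)^r
    mu_r = np.mean((z - z.mean()) ** r)   # mu_r = (1/N) * sum_i (z(i) - m_1)^r
    return m_r, mu_r
```

Normalized ratios such as mu_r / m_1**r cancel the scale factor, which is how moment-based descriptors achieve the invariance discussed in the excerpts above.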

References
Journal ArticleDOI
12 Feb 2020 - Nature
TL;DR: Phenotypic selection analysis is used to estimate the type and strength of selection that acts on more than 15,000 transcripts in rice (Oryza sativa), which provides insight into the adaptive evolutionary role of selection on gene expression.
Abstract: Levels of gene expression underpin organismal phenotypes [1,2], but the nature of selection that acts on gene expression and its role in adaptive evolution remain unknown [1,2]. Here we assayed gene expression in rice (Oryza sativa) [3], and used phenotypic selection analysis to estimate the type and strength of selection on the levels of more than 15,000 transcripts [4,5]. Variation in most transcripts appears (nearly) neutral or under very weak stabilizing selection in wet paddy conditions (with median standardized selection differentials near zero), but selection is stronger under drought conditions. Overall, more transcripts are conditionally neutral (2.83%) than are antagonistically pleiotropic [6] (0.04%), and transcripts that display lower levels of expression and stochastic noise [7-9] and higher levels of plasticity [9] are under stronger selection. Selection strength was further weakly negatively associated with levels of cis-regulation and network connectivity [9]. Our multivariate analysis suggests that selection acts on the expression of photosynthesis genes [4,5], but that the efficacy of selection is genetically constrained under drought conditions [10]. Drought selected for earlier flowering [11,12] and a higher expression of OsMADS18 (Os07g0605200), which encodes a MADS-box transcription factor and is a known regulator of early flowering [13], marking this gene as a drought-escape gene [11,12]. The ability to estimate selection strengths provides insights into how selection can shape molecular traits at the core of gene action.
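Phenotypic selection analysis, as used here, estimates selection on a trait (or a transcript's expression level) from its covariance with relative fitness. A minimal sketch of the standardized selection differential in the Lande-Arnold tradition follows; the variable names and toy data are illustrative, not from the study:

```python
import numpy as np

def standardized_selection_differential(trait, fitness):
    """S = cov(relative fitness, standardized trait).

    trait   -- e.g. expression level of one transcript across plants
    fitness -- e.g. seed set of the same plants
    """
    z = (trait - trait.mean()) / trait.std(ddof=1)   # standardize trait
    w = fitness / fitness.mean()                     # relative fitness
    return np.cov(w, z, ddof=1)[0, 1]

# Toy example: weak positive selection on expression level.
rng = np.random.default_rng(0)
expr = rng.normal(10.0, 2.0, size=200)
fit = rng.poisson(50 + 0.5 * (expr - 10.0))
print(standardized_selection_differential(expr, fit))
```

A differential near zero is what the abstract describes for most transcripts under wet paddy conditions; larger absolute values indicate stronger directional selection.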

87 citations

Journal ArticleDOI
TL;DR: A tool is presented to aid researchers in the analysis of confocal images, removing subjectivity from the resulting data sets and facilitating higher-throughput, quantitative approaches to plant cell research.
Abstract: It is increasingly important in life sciences that many cell-scale and tissue-scale measurements are quantified from confocal microscope images. However, extracting and analyzing large-scale confocal image data sets represents a major bottleneck for researchers. To aid this process, CellSeT software has been developed, which utilizes tissue-scale structure to help segment individual cells. We provide examples of how the CellSeT software can be used to quantify fluorescence of hormone-responsive nuclear reporters, determine membrane protein polarity, extract cell and tissue geometry for use in later modeling, and take many additional biologically relevant measures using an extensible plug-in toolset. Application of CellSeT promises to remove subjectivity from the resulting data sets and facilitate higher-throughput, quantitative approaches to plant cell research.
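The kind of per-cell measurement described, such as mean reporter fluorescence within each segmented cell, can be sketched generically with scikit-image. This is not CellSeT's actual algorithm (which exploits tissue-scale structure and an extensible plug-in system); it is a minimal stand-in showing the segment-then-measure pattern:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def per_cell_mean_fluorescence(structure_img, reporter_img):
    """Segment cells in one channel, then quantify a reporter channel.

    structure_img -- channel outlining cells (e.g. a membrane marker)
    reporter_img  -- channel with the hormone-responsive reporter
    """
    mask = structure_img > threshold_otsu(structure_img)
    cells = label(mask)        # connected components stand in for cells
    return {r.label: r.mean_intensity
            for r in regionprops(cells, intensity_image=reporter_img)}
```

Automating this step is what removes the subjectivity of hand-drawn regions of interest that the abstract refers to.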

86 citations

Journal ArticleDOI
TL;DR: A computer-aided approach to segmenting suspicious lesions in digital mammograms, based on a novel maximum likelihood active contour model using level sets (MLACMLS), which is shown to be robust to the selection of a required single seed point.
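A region-driven level-set iteration of the Chan-Vese type can sketch the idea of evolving a contour from a seed point toward a lesion boundary. This is a deliberately stripped-down stand-in, not the authors' maximum-likelihood model (MLACMLS), and it omits the curvature regularization a practical implementation needs:

```python
import numpy as np

def evolve_level_set(img, seed_yx, iters=200, dt=0.25, radius=5):
    """Evolve phi from a small disk at seed_yx; pixels closer to the
    inside mean than the outside mean push the contour outward."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    phi = radius - np.hypot(yy - seed_yx[0], xx - seed_yx[1])  # disk init
    img = img.astype(float)
    for _ in range(iters):
        inside = phi > 0
        c1 = img[inside].mean()            # mean intensity inside
        c2 = img[~inside].mean()           # mean intensity outside
        force = (img - c2) ** 2 - (img - c1) ** 2
        phi += dt * force / (np.abs(force).max() + 1e-9)
    return phi > 0                         # final segmentation mask
```

Because the region statistics are re-estimated at every iteration, the result depends only weakly on where inside the lesion the seed is placed, which is the robustness property the TL;DR highlights.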

86 citations

Journal ArticleDOI
TL;DR: The tissue localization of AgNHA1 suggests that it plays a key role in maintaining the characteristic longitudinal pH gradient in the lumen of the alimentary canal of An. gambiae larvae.
Abstract: We have cloned a cDNA encoding a new ion transporter from the alimentary canal of the larval African malaria mosquito, Anopheles gambiae Giles sensu stricto. Phylogenetic analysis revealed that the corresponding gene is in a group that has been designated NHA, and which includes (Na+ or K+)/H+ antiporters; so the novel transporter is called AgNHA1. The annotation of current insect genomes shows that both AgNHA1 and a close relative, AgNHA2, belong to the cation proton antiporter 2 (CPA2) subfamily and cluster in an exclusive clade of genes with high identity from Aedes aegypti, Drosophila melanogaster, D. pseudoobscura, Apis mellifera and Tribolium castaneum. Although NHA genes have been identified in all phyla for which genomes are available, no NHA other than AgNHA1 has previously been cloned, nor have the encoded proteins been localized or characterized. The AgNHA1 transcript was localized in An. gambiae larvae by quantitative real-time PCR (qPCR) and in situ hybridization. AgNHA1 message was detected in gastric caeca and rectum, with much weaker transcription in other parts of the alimentary canal. Immunolabeling of whole mounts and longitudinal sections of isolated alimentary canal showed that AgNHA1 is expressed in the cardia, gastric caeca, anterior midgut, posterior midgut, proximal Malpighian tubules and rectum, as well as in the subesophageal and abdominal ganglia. A phylogenetic analysis of NHAs and KHAs indicates that they are ubiquitous. A comparative molecular analysis of these antiporters suggests that they catalyze electrophoretic alkali metal ion/hydrogen ion exchanges that are driven by the voltage from electrogenic H+ V-ATPases. The tissue localization of AgNHA1 suggests that it plays a key role in maintaining the characteristic longitudinal pH gradient in the lumen of the alimentary canal of An. gambiae larvae.

86 citations

Journal ArticleDOI
TL;DR: A kernels-alternated error diffusion (KAEDF) technique is proposed for embedding watermarks into error-diffused images; decoding rates for both techniques are high, and the watermarks are extremely robust, even after printing and scanning.
Abstract: A low computational complexity noise-balanced error diffusion (NBEDF) technique is proposed for embedding watermarks into error-diffused images. The visual decoding pattern can be perceived when two or more similar NBEDF images are overlaid, even in a high activity region. Also, with the modified and improved version of NBEDF, two halftone images can be made from two totally different gray-tone images and still provide a clear and sharp visual decoding pattern. With self-decoding techniques, we can also decode the pattern with only one NBEDF image. However, the NBEDF method is not robust to damage from printing or other distortions. Thus, a kernels-alternated error diffusion (KAEDF) technique is proposed: we find that two well-known kernels (Jarvis, J.F. et al., 1976; Stucki, P., 1981) are compatible when used alternately in the halftone process. In the decoder, because the spectral distributions of the Jarvis and Stucki kernels are different in the 2D fast Fourier transform domain, we use the cumulative squared Euclidean distance criterion to determine whether each cell in a watermarked halftone image belongs to Jarvis or Stucki, and then decode the watermark. Furthermore, because the detailed textures of Jarvis and Stucki patterns are somewhat different in the spatial domain, the lookup table (LUT) technique is also used for fast decoding. From simulation results, the correct decoding rates for both techniques are high and the watermarks are extremely robust, even after printing and scanning processes. Finally, we extend the hybrid NBEDF and KAEDF algorithms to two color EDF halftone images, where 8 independent KAEDF watermarks and 16 NBEDF watermarks can be inserted and still achieve a high-quality result.
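The kernel-alternation idea can be sketched directly: run standard error diffusion, but pick the Jarvis or Stucki kernel for each pixel according to the watermark bit of the block the pixel falls in. The cell size and bit layout below are illustrative, and the sketch omits the spectral/LUT decoder:

```python
import numpy as np

# (divisor, [(dy, dx, weight), ...]) for two classic diffusion kernels
JARVIS = (48, [(0, 1, 7), (0, 2, 5),
               (1, -2, 3), (1, -1, 5), (1, 0, 7), (1, 1, 5), (1, 2, 3),
               (2, -2, 1), (2, -1, 3), (2, 0, 5), (2, 1, 3), (2, 2, 1)])
STUCKI = (42, [(0, 1, 8), (0, 2, 4),
               (1, -2, 2), (1, -1, 4), (1, 0, 8), (1, 1, 4), (1, 2, 2),
               (2, -2, 1), (2, -1, 2), (2, 0, 4), (2, 1, 2), (2, 2, 1)])

def kaedf_halftone(gray, bits, cell=16):
    """Error-diffuse `gray` (values 0..255), choosing the kernel per
    watermark bit: 1 -> Jarvis, 0 -> Stucki, constant in each cell block."""
    img = gray.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            div, taps = JARVIS if bits[y // cell, x // cell] else STUCKI
            out[y, x] = 255.0 if img[y, x] >= 128.0 else 0.0
            err = img[y, x] - out[y, x]
            for dy, dx, wt in taps:        # push quantization error forward
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += err * wt / div
    return out
```

Because both kernels produce visually similar halftones but distinct dot textures and spectra, the watermark is invisible to the eye yet recoverable per cell, which is the property the decoder in the abstract exploits.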

86 citations