Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
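The abstract proposes co-occurrence matrices to capture spatial relationships between image concepts. A minimal sketch of the underlying gray-level co-occurrence idea (the deliverable's actual descriptors are not specified here, so this is only an illustration of the principle):

```python
# Count how often intensity pairs occur at a fixed spatial offset.
# The resulting matrix encodes spatial structure that a plain histogram loses.

def cooccurrence(image, dx=1, dy=0, levels=4):
    """Count pairs (image[y][x], image[y+dy][x+dx]) over the whole image."""
    h, w = len(image), len(image[0])
    glcm = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                glcm[image[y][x]][image[ny][nx]] += 1
    return glcm

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
glcm = cooccurrence(img)          # horizontal neighbor pairs
print(glcm[0][0], glcm[2][2])     # diagonal entries count uniform regions
```

Two images with identical histograms but different layouts produce different co-occurrence matrices, which is exactly the spatial information the text says low-level descriptors miss.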
Citations
Book
03 Oct 1988
TL;DR: This chapter discusses two Dimensional Systems and Mathematical Preliminaries and their applications in Image Analysis and Computer Vision, as well as image reconstruction from Projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
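Two of the principles named here, partial ordering by magnitude and ordered bit-plane transmission, can be illustrated without the hierarchical tree structures. The toy sketch below is not the SPIHT algorithm itself, only the embedded-coding idea that sending magnitude bits MSB-first lets a truncated stream decode to a coarser reconstruction:

```python
# Toy embedded coder: emit magnitude bits one bit plane at a time, MSB first.
# Any prefix of the stream decodes to an approximation; the full stream is lossless.

def encode_bitplanes(coeffs, n_planes):
    """Emit (index, bit) pairs, one bit plane at a time, MSB first."""
    stream = []
    for plane in range(n_planes - 1, -1, -1):
        for i, c in enumerate(coeffs):
            stream.append((i, (abs(c) >> plane) & 1))
    return stream

def decode_bitplanes(stream, n_coeffs, n_planes):
    """Rebuild magnitudes; truncating the stream gives a coarser result."""
    mags = [0] * n_coeffs
    for k, (i, bit) in enumerate(stream):
        plane = n_planes - 1 - k // n_coeffs
        mags[i] |= bit << plane
    return mags

coeffs = [13, 2, 7, 0]
stream = encode_bitplanes(coeffs, n_planes=4)
assert decode_bitplanes(stream, 4, 4) == [13, 2, 7, 0]  # full stream: lossless
coarse = decode_bitplanes(stream[:8], 4, 4)             # top two planes only
print(coarse)                                            # low bits still zero
```

SPIHT's contribution is transmitting these bits in an order driven by a set-partitioning sort and the cross-scale self-similarity of wavelet coefficients, so that significant coefficients are refined first.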

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents retained their underlying extracellular matrix, yielding an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; after reseeding, the constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....


  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29]....

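The Laplacian-pyramid idea cited above stores, at each level, the difference between a signal and a blurred copy of it, which acts as a bandpass filter approximating a Laplacian of Gaussian. A 1-D sketch with a simple 3-tap blur; unlike a true Laplacian pyramid, it skips the downsampling between levels so that exact reconstruction is a one-line telescoping sum:

```python
# Difference-of-lowpass decomposition (the core of a Laplacian pyramid,
# without decimation). Each detail level is a bandpass view of the signal.

def blur(x):
    """3-tap [1, 2, 1]/4 smoothing with edge replication."""
    n = len(x)
    return [(x[max(i - 1, 0)] + 2 * x[i] + x[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def bandpass_stack(x, depth):
    """Return `depth` bandpass detail levels plus a residual lowpass level."""
    levels, cur = [], list(x)
    for _ in range(depth):
        low = blur(cur)
        levels.append([a - b for a, b in zip(cur, low)])  # detail = cur - low
        cur = low
    levels.append(cur)                                    # residual lowpass
    return levels

signal = [0.0, 0.0, 8.0, 8.0, 8.0, 0.0, 0.0, 0.0]
levels = bandpass_stack(signal, 2)
# Summing all levels reconstructs the signal exactly (the sum telescopes).
recon = [sum(vals) for vals in zip(*levels)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recon, signal))
```

The production version in [8] additionally downsamples each lowpass level by two, which is what makes the representation a pyramid.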

Journal ArticleDOI
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles and examines implementation procedures for each technique and discusses its advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....


  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....


  • ...Designers of shape invariants argue that although most of other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....


  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....


  • ...Assuming the shape boundary has been represented as a shape signature z(i), the rth moment mr and central moment μr can be estimated as [14]...

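The formula itself is elided in the excerpt above, but under the usual definitions the rth moment of a shape signature is its average rth power and the central moment is the same after subtracting the mean. A hedged sketch (the exact normalization used in [14] may differ):

```python
# Moments of a 1-D shape signature z(i), e.g. centroid distance sampled at
# N boundary points. Standard definitions; [14] may normalize differently.

def moment(z, r):
    """rth moment: mean of z(i)**r over the N samples."""
    return sum(v ** r for v in z) / len(z)

def central_moment(z, r):
    """rth central moment: mean of (z(i) - m1)**r."""
    m1 = moment(z, 1)
    return sum((v - m1) ** r for v in z) / len(z)

z = [1.0, 2.0, 3.0, 2.0]        # toy signature
print(moment(z, 1))              # mean of the signature
print(central_moment(z, 2))      # its variance
```

Because the signature is sampled along the boundary, these moments are invariant to where the boundary traversal starts, which is one reason surveys favor them as shape descriptors.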

References
Book ChapterDOI
20 Jul 2003
TL;DR: Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images obtained from digital colour photographs, yielding an average classification success rate of 82% compared to expert classified images.
Abstract: In this paper a new system for burn diagnosis is proposed. The aim of the system is to separate burn wounds from healthy skin, and the different types of burns (burn depths) from each other, identifying each one. The system is based on colour and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. We use a perceptually uniform colour space (L*u*v*), since Euclidean distances calculated in this space correspond to perceptual colour differences. After the burn is segmented, some colour and texture descriptors are calculated and they are the inputs to a Fuzzy-ARTMAP neural network. The neural network classifies them into three types of burns: superficial dermal, deep dermal and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images obtained from digital colour photographs, yielding an average classification success rate of 82% compared to expert classified images.

18 citations

Journal ArticleDOI
29 Jul 2011-PLOS ONE
TL;DR: Two Sema3A-derived peptides homologous to the peptides isolated by phage display blocked sAPPα binding and its inhibitory action on Sema 3A function, suggesting a competitive mechanism by which sAPP α modulates the biological action of semaphorins.
Abstract: The amyloid precursor protein (APP) is well known for giving rise to the amyloid-β peptide and for its role in Alzheimer's disease. Much less is known, however, on the physiological roles of APP in the development and plasticity of the central nervous system. We have used phage display of a peptide library to identify high-affinity ligands of purified recombinant human sAPPα695 (the soluble, secreted ectodomain from the main neuronal APP isoform). Two peptides thus selected exhibited significant homologies with the conserved extracellular domain of several members of the semaphorin (Sema) family of axon guidance proteins. We show that sAPPα695 binds both purified recombinant Sema3A and Sema3A secreted by transfected HEK293 cells. Interestingly, sAPPα695 inhibited the collapse of embryonic chicken (Gallus gallus domesticus) dorsal root ganglia growth cones promoted by Sema3A (Kd ≤ 8·10⁻⁹ M). Two Sema3A-derived peptides homologous to the peptides isolated by phage display blocked sAPPα binding and its inhibitory action on Sema3A function. These two peptides are comprised within a domain previously shown to be involved in binding of Sema3A to its cellular receptor, suggesting a competitive mechanism by which sAPPα modulates the biological action of semaphorins.

18 citations

Journal ArticleDOI
TL;DR: The results indicate that the optical method has high speed due to parallel processing and the best choice lies in an analog-digital combination, while the digital method has the advantages of high processing precision and programmability, but has low processing speed.
Abstract: The effectiveness and limitations of medical image processing using analog and digital methods are studied. Several types of errors introduced during the image processing are analyzed. For the analog optical Fourier transform, errors are introduced by the vignetting effect and lens aberration. For the digital Fourier transform, errors are introduced by the aliasing effect and the band limit. To compare the results obtained by the two techniques, a set of x-ray images was processed both optically and digitally. The former was achieved by an optical system containing a large Fourier telephoto lens and the latter by a personal computer using a Fourier transform algorithm. The veracity of both the optical and digital Fourier spectra is analyzed. Our results indicate that the optical method has high speed due to parallel processing. High veracity can be achieved in high frequency regions by using an optimal optical system. In comparison, the digital method has the advantages of high processing precision and programmability, but has low processing speed. The comparison of the two different techniques presented in this article can provide a basis for selection of the processing method in different clinical settings. Even with today's fast computers, the optical method is still suitable for many clinical applications. The best choice lies in an analog-digital combination.
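The digital side of this comparison rests on the discrete Fourier transform. A direct O(N²) implementation in pure Python, with Parseval's relation used as the kind of numerical "veracity" check the abstract discusses (a real pipeline would use an FFT, which computes the same values faster):

```python
# Direct DFT of a real signal, verified against Parseval's theorem:
# sum |x[k]|^2  ==  (1/N) * sum |X[j]|^2.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
energy_time = sum(v * v for v in x)
energy_freq = sum(abs(v) ** 2 for v in X) / len(x)
assert abs(energy_time - energy_freq) < 1e-9   # transform preserves energy
print(X[0])                                    # DC term = sum of the samples
```

The aliasing and band-limit errors mentioned in the abstract enter before this step, at sampling time; the transform itself is exact up to floating-point rounding.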

18 citations

Proceedings ArticleDOI
06 Jun 2009
TL;DR: The implementation of a single MATLAB function is realized for skeleton computation, and the speed performance of the new function is considerably higher than that of the function provided in the MATLAB Image Processing Toolbox.
Abstract: Skeleton is an important shape property and has a variety of applications. The skeletonization implementation provided in the MATLAB Image Processing Toolbox is often used, but its execution speed is not satisfactory. The implementation is introduced and analyzed in the paper, and performance bottlenecks are pointed out. Optimizations are proposed for all these bottlenecks. As the result of the optimization, a single MATLAB function is realized for skeleton computation. Results of experiments on a test image set show that the optimizations are effective, and the speed performance of the new function is considerably higher than that of the function provided in the MATLAB Image Processing Toolbox.
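For background on what is being optimized: a tiny pure-Python sketch of Lantuéjoul's classical morphological skeleton, S(A) = ∪ₖ [(A ⊖ kB) − (A ⊖ kB) ∘ B]. This is a different (and much slower) formulation than the thinning-based routine in the MATLAB toolbox; it is shown only to make the operation concrete:

```python
# Morphological skeleton on a set of (x, y) pixels, structuring element =
# 3x3 cross. Each pass keeps the points that an opening at that scale
# would remove, then erodes and repeats until the set is empty.

CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def erode(A):
    return {p for p in A
            if all((p[0] + dx, p[1] + dy) in A for dx, dy in CROSS)}

def dilate(A):
    return {(x + dx, y + dy) for (x, y) in A for dx, dy in CROSS}

def skeleton(A):
    S, E = set(), set(A)
    while E:
        S |= E - dilate(erode(E))   # points lost to opening at this scale
        E = erode(E)                # shrink and continue
    return S

rect = {(x, y) for x in range(5) for y in range(3)}   # 5x3 rectangle
print(sorted(skeleton(rect)))
```

The per-pixel set lookups here are exactly the sort of cost that vectorized lookup-table implementations (like the optimized MATLAB function above) are designed to avoid.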

18 citations

Posted ContentDOI
11 Jan 2020-bioRxiv
TL;DR: The findings show that human shape perception is inherently multidimensional and optimized for comparing natural shapes, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
Abstract: Shape is a defining feature of objects. Yet, no image-computable model accurately predicts how similar or different shapes appear to human observers. To address this, we developed a model (ShapeComp), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp predicts human shape similarity judgments almost perfectly (r² > 0.99) without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that human shape perception is inherently multidimensional and optimized for comparing natural shapes. ShapeComp outperforms conventional metrics, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
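The model is built from classical shape features. As an illustration, here are two of the kind the abstract names (area and compactness) computed for a polygon; the definitions are the generic textbook ones, not necessarily ShapeComp's exact feature set:

```python
# Two classical shape features for a polygon given as a list of vertices.

import math

def polygon_area(pts):
    """Shoelace formula over consecutive vertex pairs."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2

def perimeter(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def compactness(pts):
    """4*pi*area / perimeter^2: 1.0 for a circle, smaller when elongated."""
    return 4 * math.pi * polygon_area(pts) / perimeter(pts) ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(round(compactness(square), 3))   # pi/4 for any square
```

Stacking a hundred such scalar descriptors into one vector, then learning which combinations explain variance across a silhouette database, is the basic recipe the abstract describes.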

18 citations