Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
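As an illustration of the co-occurrence idea mentioned above (a hypothetical sketch, not MUCKE's actual representation), spatial relationships between concepts can be counted over a grid of region labels at a fixed displacement:

```python
from collections import Counter

def concept_cooccurrence(label_grid, offset=(0, 1)):
    """Count pairs of concept labels that appear at a fixed spatial offset.

    label_grid: 2-D list of concept labels (e.g. per-region classifications).
    offset: (row, col) displacement defining the spatial relationship.
    Returns a Counter mapping (label_a, label_b) pairs to counts.
    """
    dr, dc = offset
    rows, cols = len(label_grid), len(label_grid[0])
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(label_grid[r][c], label_grid[r2][c2])] += 1
    return counts

# "sky" above "sea" is a spatial relation a plain bag of concepts loses.
grid = [["sky", "sky"],
        ["sea", "sea"]]
print(concept_cooccurrence(grid, offset=(1, 0)))  # vertical neighbours
```

A matrix of such counts, indexed by label pairs and offsets, captures "above/below" or "left/right" relations that bag-of-features descriptors discard.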
Citations
Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
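The magnitude-ordering and bit-plane-transmission principles described above can be sketched, with the zerotree/set-partitioning structure omitted, as a sequence of significance passes over decreasing power-of-two thresholds (an illustrative simplification, not the SPIHT algorithm itself):

```python
def bitplane_order(coeffs):
    """Emit coefficient significance information, highest bit-plane first.

    Yields (threshold, newly_significant_indices) per pass, mimicking the
    partial ordering by magnitude behind EZW/SPIHT. Assumes at least one
    nonzero coefficient.
    """
    max_mag = max(abs(c) for c in coeffs)
    top_bit = max_mag.bit_length() - 1  # start at the most significant plane
    significant = set()
    for bit in range(top_bit, -1, -1):
        T = 1 << bit
        newly = [i for i, c in enumerate(coeffs)
                 if i not in significant and abs(c) >= T]
        significant.update(newly)
        yield T, newly

# Large-magnitude coefficients are reported first, so truncating the
# stream at any point keeps the most important information (embedding).
for T, idx in bitplane_order([63, -34, 49, 10, 7, 13, -12, 7]):
    print(T, idx)
```

Because the indices are emitted in decreasing threshold order, cutting the bit stream early still yields the best approximation available at that rate, which is the embedding property the abstract refers to.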

5,890 citations

Journal ArticleDOI
TL;DR: The authors decellularized hearts by coronary perfusion with detergents, preserving the underlying extracellular matrix and producing an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....


  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29]....

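The Laplacian pyramid cited in these snippets can be sketched in one dimension (a minimal illustration, not the citing system's implementation): each level stores the bandpass detail removed by one blur-and-downsample step, and adding the levels back reconstructs the signal exactly.

```python
def downsample(signal):
    """Blur with a small binomial kernel, then keep every other sample."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    blurred = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4.0
               for i in range(1, len(padded) - 1)]
    return blurred[::2]

def upsample(signal, length):
    """Nearest-neighbour expansion back to the original length."""
    return [signal[min(i // 2, len(signal) - 1)] for i in range(length)]

def laplacian_pyramid(signal, levels):
    """Each level holds the bandpass detail lost by one blur/downsample step."""
    pyramid = []
    current = list(signal)
    for _ in range(levels):
        smaller = downsample(current)
        predicted = upsample(smaller, len(current))
        pyramid.append([a - b for a, b in zip(current, predicted)])
        current = smaller
    pyramid.append(current)  # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: add each bandpass level back, coarsest first."""
    rec = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        rec = [d + u for d, u in zip(detail, upsample(rec, len(detail)))]
    return rec
```

Each pyramid level is an isotropic bandpass of the input, which is why the citing iris-recognition system can use it interchangeably with a Laplacian-of-Gaussian decomposition.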

Journal ArticleDOI
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles, examines implementation procedures for each technique, and discusses its advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....


  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....


  • ...Designers of shape invariants argue that although most of other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....


  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....


  • ...Assuming the shape boundary has been represented as a shape signature z(i), the r-th moment m_r and the r-th central moment μ_r can be estimated as [14]

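The moment estimates from a sampled shape signature mentioned in the last snippet follow the standard textbook formulas (sketched here from general knowledge, not quoted from [14]): the r-th moment averages z(i)^r, and the r-th central moment averages (z(i) - mean)^r.

```python
def shape_moments(z, r):
    """Estimate the r-th moment and r-th central moment of a shape
    signature z (e.g. centroid distances sampled along the boundary)."""
    n = len(z)
    m_r = sum(v ** r for v in z) / n           # raw moment
    mean = sum(z) / n
    mu_r = sum((v - mean) ** r for v in z) / n  # central moment
    return m_r, mu_r

# A circle's centroid-distance signature is constant, so all central
# moments vanish; this is why moment ratios make handy shape descriptors.
print(shape_moments([5.0] * 100, 2))
```

Ratios such as the square root of mu_2 divided by m_1 are translation-, rotation- and scale-invariant, which is what makes these estimates useful for retrieval.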

References
Journal ArticleDOI
01 Dec 2016
TL;DR: A cheap framework for fast real-time image processing, including feature extraction and image transformation methods frequently used not only in robotics, is designed and tested on the Raspberry Pi 2 equipped with a native camera board.
Abstract: We have designed a small robot called Cube for simple robotics tasks and student projects. The Cube carries only a small and lightweight computational platform, a Raspberry Pi equipped with a camera. This paper serves as an introduction to using image processing methods on a Raspberry Pi platform via Simulink in Matlab. We have designed a cheap framework for fast image processing in real time, including feature extraction and image transformation methods. In the paper, several selected methods frequently used not only in robotics are implemented and tested on the Raspberry Pi 2 equipped with a native camera board. Algorithms for edge, corner and line detection have been implemented using Simulink with the Computer Vision System Toolbox and other built-in tools.
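The paper's pipeline runs in Simulink, but one of the methods it mentions, edge detection, is easy to illustrate language-neutrally. A pure-Python sketch of a 3x3 Sobel gradient magnitude (an illustration of the technique, not the paper's implementation):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    img: 2-D list of grayscale intensities. Border pixels are left at 0.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal derivative: right column minus left column.
            gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
            # Vertical derivative: bottom row minus top row.
            gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong response along the boundary.
img = [[0, 0, 10, 10]] * 4
print(sobel_magnitude(img)[1])
```

On a Raspberry Pi-class device, per-pixel loops like this are the bottleneck, which is why the paper offloads such kernels to Simulink/Computer Vision System Toolbox blocks.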

16 citations

Journal ArticleDOI
TL;DR: The visual quality and PSNR of the decoded images are much improved by the proposed adaptive thresholding technique, compared with the traditional MPEG-4 still texture image codec, in coding noisy images.
Abstract: This paper describes the performance of the MPEG-4 still texture image codec in coding noisy images. As will be shown, when using the MPEG-4 still texture image codec to compress a noisy image, increasing the compression rate does not necessarily imply reducing the peak signal-to-noise ratio (PSNR) of the decoded image. An optimal operating point having the highest PSNR can be obtained within the low bit rate region. Nevertheless, the visual quality of the decoded noisy image at this optimal operating point is greatly degraded by the so-called "cross" shape artifact. In this paper, we analyze the reason for the existence of the optimal operating point and the "cross" shape artifact when using the MPEG-4 still texture image codec to compress noisy images. We then propose an adaptive thresholding technique to remove the "cross" shape artifact from the decoded images. It requires only a slight modification to the quantization process of the traditional MPEG-4 encoder while the decoder remains unchanged. Finally, an analytical study is performed for the selection and validation of the threshold value used in the adaptive thresholding technique. It is shown that the visual quality and PSNR of the decoded images are much improved by using the proposed technique compared with the traditional MPEG-4 still texture image codec in coding noisy images.
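The general idea of folding a noise threshold into the quantization stage can be sketched minimally as follows; the coefficients, step size and threshold here are illustrative, and the paper's actual modification to the MPEG-4 encoder differs in detail:

```python
def threshold_quantize(coeffs, q_step, threshold):
    """Uniformly quantize transform coefficients, zeroing any whose
    magnitude falls below a noise-dependent threshold.

    Hypothetical sketch: small coefficients are assumed to be noise and
    suppressed before quantization, so they cost no bits and cannot
    produce reconstruction artifacts.
    """
    out = []
    for c in coeffs:
        if abs(c) < threshold:
            out.append(0)                   # treat as noise: suppress
        else:
            out.append(round(c / q_step))   # ordinary uniform quantization
    return out

print(threshold_quantize([0.4, -3.2, 7.9, -0.9, 12.5],
                         q_step=2.0, threshold=1.5))
```

Because the suppression happens on the encoder side before quantization, a standard decoder needs no changes, mirroring the deployment property the abstract highlights.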

16 citations

Journal ArticleDOI
Bruce Maxwell1
TL;DR: The survey shows that, in addition to classic survey courses in CV/IP, there are many focused and multidisciplinary courses being taught that reportedly improve both student and faculty interest in the topic.
Abstract: This paper provides a survey of the variety of computer vision [CV] and image processing [IP] courses being taught at institutions around the world. The survey shows that, in addition to classic survey courses in CV/IP, there are many focused and multidisciplinary courses being taught that reportedly improve both student and faculty interest in the topic. It also demonstrates that students can successfully undertake a variety of complex lab assignments. In addition, this paper includes a comparative review of current textbooks and supplemental texts appropriate for CV/IP courses.

16 citations

Posted Content
TL;DR: The proposed algorithm first estimates image blur and then compensates for it by combining multiple applications of the estimated blur in a principled way, leading to results superior to those of other highly complex and computationally demanding techniques.
Abstract: We present a highly efficient blind restoration method to remove mild blur in natural images. Contrary to the mainstream, we focus on removing slight blur that is often present, damaging image quality, and commonly generated by slight defocus, lens blur, or slight camera motion. The proposed algorithm first estimates image blur and then compensates for it by combining multiple applications of the estimated blur in a principled way. To estimate blur we introduce a simple yet robust algorithm based on empirical observations about the distribution of the gradient in sharp natural images. Our experiments show that, in the context of mild blur, the proposed method outperforms traditional and modern blind deblurring methods and runs in a fraction of the time. Our method can be used to blindly correct blur before applying off-the-shelf deep super-resolution methods, leading to results superior to those of other highly complex and computationally demanding techniques. The proposed method estimates and removes mild blur from a 12MP image on a modern mobile phone in a fraction of a second.
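"Combining multiple applications of the estimated blur" can be realized, for instance, as a polynomial in the blur operator: if B is close to the identity (mild blur), the truncated Neumann series gives B^(-1) ≈ 3I - 3B + B^2. A one-dimensional sketch with these illustrative coefficients, not the paper's exact combination:

```python
def blur(x, kernel):
    """Apply a 1-D blur kernel with clamped (replicated) borders."""
    k = len(kernel) // 2
    n = len(x)
    out = []
    for i in range(n):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), n - 1)  # clamp at borders
            s += w * x[idx]
        out.append(s)
    return out

def polynomial_deblur(y, kernel):
    """Approximate the inverse blur as a polynomial in the blur operator:
    x_hat = 3y - 3By + B^2y. Only forward applications of the estimated
    blur are needed, which is what makes the approach so cheap."""
    by = blur(y, kernel)
    bby = blur(by, kernel)
    return [3 * a - 3 * b + c for a, b, c in zip(y, by, bby)]

# A blurred step edge becomes sharper after the polynomial correction.
x = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
kernel = [0.25, 0.5, 0.25]
y = blur(x, kernel)
print(polynomial_deblur(y, kernel))
```

The appeal of this family of methods is that deblurring reduces to two extra blur passes, which run in linear time and map directly onto mobile GPU convolution, consistent with the 12MP-in-a-fraction-of-a-second claim.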

16 citations

Journal ArticleDOI
TL;DR: In this article, the authors present the first data on crystallization kinetics of alkali feldspar, which is the main crystal phase in peralkaline rhyolitic melts, in order to improve our understanding of the evolutionary timescales of these melts and their ability to shift between effusive and explosive activity.
Abstract: Peralkaline rhyolites, associated with extensional tectonic settings, are medium- to low-viscosity magmas that often produce eruptive styles ranging from effusive to highly explosive eruptions. The role of pre-eruptive conditions and crystallization kinetics in influencing the eruptive style of peralkaline rhyolitic magmas has been investigated and debated under the assumption of equilibrium conditions. However, experimental constraints on the effect of disequilibrium crystallization in such magmas are currently lacking in the literature. Therefore, we performed isobaric cooling experiments to investigate alkali feldspar crystallization kinetics in peralkaline rhyolitic melts. Experiments were performed under water-saturated, water-undersaturated and anhydrous conditions between 25 and 100 MPa, at 670-790 °C, with experimental durations ranging from 0.5 to 420 hours. Here we present the first data on crystallization kinetics of alkali feldspar, the main crystal phase in peralkaline rhyolitic melts, in order to improve our understanding of the evolutionary timescales of these melts and their ability to shift between effusive and explosive activity. Our experimental results indicate that the alkali feldspar nucleation delay can range from hours to several days as a function of undercooling and H2O content in the melt. Thus, a peralkaline rhyolitic magma can be stored at pre-eruptive conditions for days without significant variation in its crystal fraction. This suggests that crystallization may not necessarily play the main role in triggering fragmentation during explosive eruptions of peralkaline rhyolitic magmas.

16 citations