Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem shows up, for instance, in the inability of existing descriptors to capture spatial relationships between the concepts they represent, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
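Since the abstract's key proposal is to use co-occurrence matrices to capture spatial relationships, a minimal sketch may make the idea concrete. It assumes a 2-D array of integer labels (quantized gray levels or concept/visual-word indices); the function name and conventions are illustrative, not MUCKE's actual code:

```python
import numpy as np

def cooccurrence(labels: np.ndarray, offset=(0, 1), n_labels=None):
    """Count how often label i occurs `offset` pixels away from label j.

    `labels` is a 2-D integer array (quantized gray levels or concept
    indices). The (i, j) entry of the result counts pixel pairs
    (p, p + offset) carrying labels i and j, so unlike a plain
    histogram it preserves where labels occur relative to each other.
    """
    if n_labels is None:
        n_labels = int(labels.max()) + 1
    dy, dx = offset
    h, w = labels.shape
    a = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = labels[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    m = np.zeros((n_labels, n_labels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m
```

For an 8-bit image quantized to 8 levels, `cooccurrence(img // 32, offset=(0, 1), n_labels=8)` counts horizontally adjacent label pairs; stacking several offsets gives a direction-aware spatial signature that a bag-of-features histogram cannot express.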
Citations
Proceedings ArticleDOI
01 Sep 2000
TL;DR: A fuzzy edge detection method is used and is based on an improved generalized fuzzy operator that enhances the nuclei and effectively separates the cells from the background.
Abstract: Morphometric assessment of tumor cells is important in predicting the biological behavior of thyroid cancer. In order to automate the process, a computer-based system has to recognize the boundaries of the cells. Many boundary detection methods have appeared in the literature, and some have been applied to microscopic slide analysis. However, none is reliable here, since the gray levels in the nuclei are uneven and similar to the background. In this paper, a fuzzy edge detection method based on an improved generalized fuzzy operator is used. The method enhances the nuclei and effectively separates the cells from the background.
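The improved generalized fuzzy operator itself is not detailed in this abstract. As a rough illustration of the general recipe (fuzzify gray levels, push memberships away from 0.5 to sharpen contrast, then take a gradient), here is a minimal Pal-King-style sketch; the names are illustrative and this is not the paper's operator:

```python
import numpy as np

def fuzzy_enhance(img: np.ndarray, passes: int = 2) -> np.ndarray:
    """Pal-King-style fuzzy contrast intensification (illustrative only).

    Gray levels are mapped to memberships in [0, 1] and repeatedly
    pushed away from 0.5, which darkens the background and brightens
    the nuclei before edge detection."""
    mu = (img.astype(float) - img.min()) / (img.max() - img.min() + 1e-9)
    for _ in range(passes):
        mu = np.where(mu < 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return mu

def edge_magnitude(mu: np.ndarray) -> np.ndarray:
    """Gradient magnitude on the enhanced membership plane."""
    gy, gx = np.gradient(mu)
    return np.hypot(gx, gy)
```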

19 citations

Journal ArticleDOI
TL;DR: In this article, several methods for the preparation and quantification of charcoal particles are presented; charcoal particles are evidence of past fire events, and macro-charcoal particles have been shown to represent local fire events.
Abstract: Charcoal particles are evidence of past fire events and macro-charcoal particles have been shown to represent local fire events. There are several methods for the preparation and quantification of ...

19 citations


Cites methods from "Image Processing"

  • ...This is possible if several techniques are used such as elimination of non-charcoal material using a low-magnification binocular microscope and then careful choice of the threshold value used in the ImageJ programme combined with non-inclusion of particles <0.002 mm² (<44.7 µm length)....


  • ...Image analysis, using the free programme ImageJ (imagej.nih.gov/ij), can be used to identify and count charcoal fragments that are down to a few microns in size, although >0.002 mm² (>44.7 µm length) is a useful threshold to use to eliminate erroneous groups of pixels....


  • ...Measurements of charcoal particle counts were recorded using ImageJ (Abràmoff et al., 2004; Schneider et al., 2012)....


  • ...Figure 2 shows examples of the images used in ImageJ and for comparison, images of the charcoal used in the mass quantification method....

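The >0.002 mm² cut-off quoted above amounts to discarding connected components whose pixel count falls below the equivalent area. A minimal sketch, assuming a binary charcoal mask and a known pixel size; the function and parameter names are illustrative, and ImageJ's own Analyze Particles size filter does the same job interactively:

```python
import numpy as np
from scipy import ndimage

def count_charcoal(mask: np.ndarray, pixel_size_mm: float,
                   min_area_mm2: float = 0.002) -> int:
    """Count charcoal particles above an area threshold.

    `mask` is a boolean image (True = charcoal after thresholding);
    components smaller than `min_area_mm2` are treated as erroneous
    groups of pixels and ignored."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    areas_px = np.bincount(labels.ravel())[1:]   # label 0 is background
    min_px = min_area_mm2 / pixel_size_mm ** 2
    return int(np.sum(areas_px >= min_px))
```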

Journal ArticleDOI
TL;DR: When the most efficient LNA/DNA gapmer was covalently bound to a cell-penetrating peptide, the hybrid compound conserved the EGS activity as determined by RNase P cleavage assays and reduced the levels of resistance to amikacin when added to Acinetobacter baumannii cells in culture, an indication of cellular uptake and biological activity.
Abstract: EGSs (external guide sequences) are short antisense oligoribonucleotides that elicit RNase P-mediated cleavage of a target mRNA, which results in inhibition of gene expression. EGS technology is used to inhibit expression of a wide variety of genes, a strategy that may lead to development of novel treatments of numerous diseases, including multidrug-resistant bacterial and viral infections. Successful development of EGS technology depends on finding nucleotide analogs that resist degradation by nucleases present in biological fluids and the environment but still elicit RNase P-mediated degradation when forming a duplex with a target mRNA. Previous results suggested that locked nucleic acid (LNA)/DNA chimeric oligomers have these properties. LNA are now considered the first generation of compounds collectively known as bridged nucleic acids (BNA), modified ribonucleotides that contain a bridge at the 2',4'-position of the ribose. LNA and the second-generation BNA, known as BNANC, differ in the chemical nature of the bridge. Chimeric oligomers containing LNA or BNANC and deoxynucleotide monomers in different configurations are nuclease resistant and could be excellent EGS compounds. However, not all configurations may be equally active as EGSs. RNase P cleavage assays comparing LNA/DNA and BNANC/DNA chimeric oligonucleotides that share identical nucleotide sequence but with different configurations were carried out using as target the amikacin resistance aac(6')-Ib mRNA. LNA/DNA gapmers with 5 and 3/4 LNA residues at the 5'- and 3'-ends, respectively, were the most efficient EGSs, while all BNANC/DNA gapmers showed very poor activity. When the most efficient LNA/DNA gapmer was covalently bound to a cell-penetrating peptide (CPP), the hybrid compound conserved the EGS activity as determined by RNase P cleavage assays and reduced the levels of resistance to amikacin when added to Acinetobacter baumannii cells in culture, an indication of cellular uptake and biological activity.

19 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The results show that the Backpropagation algorithm (using 12 hidden-layer neurons) provides a 93% accuracy rate, while the K-means clustering algorithm presents a 74% accuracy rate.
Abstract: This research presents a comparison of the Backpropagation and K-means clustering algorithms for egg fertility identification. Instead of candling the eggs manually, a smartphone camera is used to capture an egg image, which is then pre-processed by image enhancement and grayscale conversion. The feature extraction method applied to the pre-processed image is the Gray Level Co-occurrence Matrix (GLCM) with six parameters (Entropy, Angular Second Moment, Contrast, Inverse Difference Moment, Correlation, and Variance). The extracted GLCM features are then processed using two learning algorithms: Backpropagation and K-means clustering. For evaluation, we use 100 data samples each for training and testing. The results show that the Backpropagation algorithm (using 12 hidden-layer neurons) provides a 93% accuracy rate, while the K-means clustering algorithm presents a 74% accuracy rate. Since the Backpropagation algorithm gives better results in detecting egg fertility, we recommend performing egg fertility identification with this algorithm.
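For concreteness, a hedged sketch of the six GLCM statistics the abstract lists, computed from a raw co-occurrence matrix with Haralick-style formulas; the authors' exact offsets and normalization are not specified, so those details are assumptions:

```python
import numpy as np

def glcm_features(glcm: np.ndarray) -> dict:
    """Six Haralick-style statistics from a gray-level co-occurrence
    matrix (raw counts are fine; the matrix is normalized here)."""
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i = (i * p).sum()
    mu_j = (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    nz = p[p > 0]
    return {
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "asm": float((p ** 2).sum()),                  # angular second moment
        "contrast": float((((i - j) ** 2) * p).sum()),
        "idm": float((p / (1 + (i - j) ** 2)).sum()),  # inverse difference moment
        "correlation": float(((i - mu_i) * (j - mu_j) * p).sum()
                             / (sd_i * sd_j + 1e-12)),
        "variance": float((((i - mu_i) ** 2) * p).sum()),
    }
```

A feature vector built this way (optionally averaged over several offsets) is what the backpropagation network or K-means clustering would consume downstream.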

19 citations


Cites background from "Image Processing"

  • ...Image quality enhancement is one of the pre-processing methods in image processing, which is carried out to improve the quality of an image [29]....

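As one concrete instance of the image-quality-enhancement pre-processing the excerpt refers to, here is a minimal global histogram equalization sketch for an 8-bit grayscale image; this is a common choice, not necessarily the method the cited paper used:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for a uint8 grayscale image:
    remap gray levels through the normalized cumulative histogram so
    the output spreads more evenly over 0-255."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round(255 * (cdf - cdf_min)
                           / (cdf[-1] - cdf_min + 1e-9)), 0, 255)
    return lut.astype(np.uint8)[img]
```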

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks on three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
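A minimal sketch of the min-max (rectangular parallelepiped) decision rule described above: each class region is the axis-aligned box spanned by its training feature vectors. The tie-break for points falling in zero or several boxes (nearest box center) is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

class MinMaxClassifier:
    """Min-max decision rule: a class's decision region is the
    rectangular parallelepiped spanned by its training vectors."""

    def fit(self, X: np.ndarray, y: np.ndarray):
        self.classes_ = np.unique(y)
        self.lo_ = np.stack([X[y == c].min(axis=0) for c in self.classes_])
        self.hi_ = np.stack([X[y == c].max(axis=0) for c in self.classes_])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        centers = (self.lo_ + self.hi_) / 2
        out = np.empty(len(X), dtype=self.classes_.dtype)
        for k, x in enumerate(X):
            inside = np.where(((x >= self.lo_) & (x <= self.hi_)).all(axis=1))[0]
            if inside.size == 1:
                out[k] = self.classes_[inside[0]]
                continue
            # Ambiguous or empty: fall back to the nearest box center
            # (illustrative tie-break, not Haralick's exact rule).
            pool = inside if inside.size > 1 else np.arange(len(self.classes_))
            d = np.linalg.norm(centers[pool] - x, axis=1)
            out[k] = self.classes_[pool[np.argmin(d)]]
        return out
```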

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries, image perception, sampling and quantization, image transforms, stochastic image models, enhancement, filtering and restoration, image analysis and computer vision, reconstruction from projections, and image data compression.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
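A toy sketch of the embedded bit-plane idea behind EZW/SPIHT: significance tests against halving thresholds plus a refinement pass, so truncating the bit stream at any point still yields the best reconstruction for that bit budget. The set partitioning of spatial-orientation trees that makes SPIHT fast is deliberately omitted here:

```python
import numpy as np

def bitplane_encode(coeffs: np.ndarray, passes: int) -> list:
    """Emit (significance, sign, refinement) bits for transform
    coefficients against thresholds 2^n, 2^(n-1), ... so the largest
    magnitudes are described first (assumes a nonzero array)."""
    c = coeffs.ravel()
    n = int(np.floor(np.log2(np.abs(c).max())))
    significant = np.zeros(c.size, dtype=bool)
    stream = []
    for T in 2.0 ** np.arange(n, n - passes, -1):
        # Sorting pass: flag still-insignificant coefficients reaching T.
        newly = (~significant) & (np.abs(c) >= T)
        for idx in np.where(~significant)[0]:
            stream.append(int(newly[idx]))        # significance bit
            if newly[idx]:
                stream.append(int(c[idx] < 0))    # sign bit
        # Refinement pass: next magnitude bit of older significant ones.
        for idx in np.where(significant)[0]:
            stream.append(int(np.abs(c[idx]) // T) % 2)
        significant |= newly
    return stream
```

A real coder would entropy-code this stream (the paper notes arithmetic coding can even be omitted with only a small loss in performance), and the decoder mirrors the passes to rebuild each coefficient's magnitude to within the final threshold.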

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations