Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
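As an illustration of the co-occurrence idea mentioned in the abstract, the sketch below builds a co-occurrence matrix over quantized local descriptors ("visual words") at neighboring grid positions. This is not the MUCKE implementation; it is a minimal Python sketch under the assumption that descriptors have already been quantized into integer word labels on a regular grid, and the function names and offset choice are purely illustrative.

```python
import numpy as np

def word_cooccurrence(word_grid, num_words, offset=(0, 1)):
    """Count co-occurrences of quantized local-descriptor labels ("visual words")
    at positions separated by a fixed spatial offset.

    word_grid : 2-D integer array of visual-word indices (one per grid cell)
    num_words : size of the visual vocabulary
    offset    : (row, col) displacement defining the spatial relationship
    """
    dr, dc = offset
    co = np.zeros((num_words, num_words), dtype=np.int64)
    rows, cols = word_grid.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            co[word_grid[r, c], word_grid[r + dr, c + dc]] += 1
    # Normalize to a joint-probability matrix; flatten it to use as an image feature.
    return co / max(co.sum(), 1)

# Toy usage: a 4x4 grid of labels drawn from a 3-word vocabulary.
grid = np.array([[0, 1, 1, 2],
                 [0, 0, 1, 2],
                 [2, 2, 1, 0],
                 [1, 0, 0, 2]])
feature = word_cooccurrence(grid, num_words=3).ravel()
print(feature)
```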
Citations
Proceedings ArticleDOI
25 Apr 2001
TL;DR: The paper collects seven methods applicable to vessel segmentation in computed tomography angiography (CTA) datasets of the human leg, aiming at segmentation that preserves vessel calcification and allows localization of vessel narrowings.
Abstract: In this paper, we describe the results of the literature review focused on peripheral vessel segmentation in 3D medical datasets acquired by computed tomography angiography (CTA) of the human leg. The fundamental aim of such a segmentation task is a robust method for the detection of the main vessels in the leg that simultaneously preserves the vessel calcification (the sediment is called plaque) and allows localization of vessel narrowings (called stenoses). This segmentation has to be free from artifacts, i.e., without false detections of stenoses and without falsely omitting any stenotic part. The paper collects seven methods applicable for vessel segmentation.

59 citations


Cites background from "Image Processing"

  • ...Figure 2: Schematic cross-sectional anatomy of a diseased coronary vessel [23]...

    [...]

Journal ArticleDOI
TL;DR: The findings demonstrate that the perception of beauty in abstract artworks is altered after exposure to beautiful or non-beautiful images and correlates with particular image properties, especially color measures and self-similarity.
Abstract: In this study, we combined behavioral and objective approaches in the field of empirical aesthetics. First, we studied the perception of beauty by investigating shifts in the evaluation of perceived beauty of abstract artworks (Experiment 1). Because the participants showed heterogeneous individual preferences for the paintings, we divided them into seven clusters for the test. The experiment revealed a clear pattern of perceptual contrast. The perceived beauty of abstract paintings increased after exposure to paintings that were rated as less beautiful, and it decreased after exposure to paintings that were rated as more beautiful. Next, we searched for correlations of beauty ratings and perceptual contrast with statistical properties of abstract artworks (Experiment 2). The participants showed significant preferences for certain image properties. These preferences differed between the clusters of participants. Strikingly, in addition to color measures such as hue, saturation, value and lightness, the recently described PHOG self-similarity value seems to be a predictor for aesthetic appreciation of abstract artworks. We speculate that the shift in evaluation in Experiment 1 was, at least in part, based on low-level adaptation to some of the statistical image properties analyzed in Experiment 2. In conclusion, our findings demonstrate that the perception of beauty in abstract artworks is altered after exposure to beautiful or non-beautiful images and correlates with particular image properties, especially color measures and self-similarity.
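To make the "statistical image properties" referred to above concrete, the sketch below computes simple global color measures of the kind mentioned (mean hue, saturation, value and a lightness proxy) for an RGB image; such per-image numbers could then be correlated with mean beauty ratings. This is only an illustrative Python sketch, not the authors' analysis pipeline, and the PHOG self-similarity measure is not reproduced here.

```python
import numpy as np
import colorsys

def color_statistics(rgb):
    """Global color measures of the kind correlated with ratings in such studies:
    mean hue, saturation, value (HSV) and a simple lightness proxy.
    rgb : H x W x 3 array with values in [0, 1].
    """
    flat = rgb.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    lightness = flat.mean(axis=1)              # per-pixel lightness proxy
    return {
        "hue": hsv[:, 0].mean(),
        "saturation": hsv[:, 1].mean(),
        "value": hsv[:, 2].mean(),
        "lightness": lightness.mean(),
    }

# Example: correlate one property with (hypothetical) beauty ratings across images.
# images: list of H x W x 3 arrays; ratings: list of mean beauty ratings per image.
# sats = [color_statistics(img)["saturation"] for img in images]
# r = np.corrcoef(sats, ratings)[0, 1]
```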

59 citations

Journal ArticleDOI
TL;DR: The overall aim is to establish formally the suitability of the procedure of edge detection in digital images, as a step prior to segmentation, by means of the Jensen-Shannon divergence.
Abstract: This work constitutes a theoretical study of the edge-detection method by means of the Jensen-Shannon divergence, as proposed by the authors. The overall aim is to establish formally the suitability of the procedure of edge detection in digital images, as a step prior to segmentation. Specifically, an analysis is made not only of the properties of the divergence used, but also of the method's sensitivity to spatial variation, as well as the detection-error risk associated with the operating conditions due to the randomness of the spatial configuration of the pixels. Although the paper deals with the procedure based on the Jensen-Shannon divergence, some problems are also related to other methods based on local detection with a sliding window, and part of the study is focused on noisy and textured images.
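A minimal sketch of the windowed divergence idea discussed above: split a sliding window into two halves, compare their gray-level histograms with the Jensen-Shannon divergence, and treat a large divergence as evidence that the window straddles an edge. The window geometry, bin count and restriction to vertical edges are simplifying assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two histograms (normalized to sum to 1)."""
    p = p / max(p.sum(), eps)
    q = q / max(q.sum(), eps)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / (b[mask] + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_edge_map(gray, half=4, bins=32):
    """Slide a (2*half) x (2*half) window over an 8-bit grayscale image; at each
    position compare the gray-level histograms of the left and right halves.
    Large divergence suggests the window straddles a (vertical) edge."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half, w - half):
            left = gray[r - half:r + half, c - half:c]
            right = gray[r - half:r + half, c:c + half]
            hl, _ = np.histogram(left, bins=bins, range=(0, 256))
            hr, _ = np.histogram(right, bins=bins, range=(0, 256))
            out[r, c] = js_divergence(hl.astype(float), hr.astype(float))
    return out
```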

58 citations

Journal ArticleDOI
TL;DR: The findings support the general conclusion that genes involved in protein metabolism and feeding regulation are key regulators of growth, and provide a set of candidate biomarkers for predicting differential growth rates during animal development.
Abstract: Growth rates in animals are governed by a wide range of biological factors, many of which remain poorly understood. To identify the genes that establish growth differences in bivalve larvae, we compared expression patterns in contrasting phenotypes (slow- and fast-growth) that were experimentally produced by genetic crosses of the Pacific oyster Crassostrea gigas. Based on transcriptomic profiling of 4.5 million cDNA sequence tags, we sequenced and annotated 181 cDNA clones identified by statistical analysis as candidates for differential growth. Significant matches were found in GenBank for 43% of clones (N=78), including 34 known genes. These sequences included genes involved in protein metabolism, energy metabolism and regulation of feeding activity. Ribosomal protein genes were predominant, comprising half of the 34 genes identified. Expression of ribosomal protein genes showed non-additive inheritance; i.e., expression in fast-growing hybrid larvae was different from average levels in inbred larvae from these parental families. The expression profiles of four ribosomal protein genes (RPL18, RPL31, RPL352 and RPS3) were validated by RNA blots using additional, independent crosses from the same families. Expression of RPL35 was monitored throughout early larval development, revealing that these expression patterns were established early in development (in 2-day-old larvae). Our findings (i) provide new insights into the mechanistic bases of growth and highlight genes not previously considered in growth regulation, (ii) support the general conclusion that genes involved in protein metabolism and feeding regulation are key regulators of growth, and (iii) provide a set of candidate biomarkers for predicting differential growth rates during animal development.

58 citations


Cites methods from "Image Processing"

  • ...The intensity of 18S rRNA bands was measured by staining with ethidium bromide and quantification of digital photographic images using ImageJ (NIH) (Abramoff et al., 2004)....

    [...]
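The excerpt above cites "Image Processing" for ImageJ-based quantification of band intensity in digital photographs. A minimal, hypothetical Python sketch of the same kind of measurement is given below: integrate pixel intensities over a rectangular region around the band and subtract a background estimate taken from an empty region of the same size. Function and parameter names are illustrative assumptions, not part of the cited workflow.

```python
import numpy as np

def band_intensity(gel_gray, roi, background_roi):
    """Integrated band intensity in a rectangular ROI, background-subtracted.
    gel_gray       : 2-D array of the gel photograph (higher = brighter band)
    roi            : (row0, row1, col0, col1) rectangle around the band
    background_roi : rectangle of the same size over an empty lane region
    """
    r0, r1, c0, c1 = roi
    b0, b1, d0, d1 = background_roi
    band = gel_gray[r0:r1, c0:c1].astype(float)
    bg = gel_gray[b0:b1, d0:d1].astype(float)
    return band.sum() - bg.mean() * band.size

# Relative expression between two samples is then a ratio of such measurements.
```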

Proceedings ArticleDOI
01 Jan 2019
TL;DR: A scalable Locality-Constrained Projective Dictionary Learning (LC-PDL) method is proposed for efficient representation and classification; it incorporates a locality constraint of atoms into the DL procedure to preserve local information and obtains the codes of samples over each class separately.
Abstract: We propose a novel structured discriminative block-diagonal dictionary learning method, referred to as scalable Locality-Constrained Projective Dictionary Learning (LC-PDL), for efficient representation and classification. To improve scalability by saving both training and testing time, our LC-PDL aims at learning a structured discriminative dictionary and a block-diagonal representation without using costly l0/l1-norm regularization. Besides, it avoids the extra, time-consuming sparse reconstruction over the trained dictionary that many existing models require for each new sample. More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary over each class. To enhance performance, we incorporate a locality constraint on atoms into the DL procedure to preserve local information and obtain the codes of samples over each class separately. A block-diagonal discriminative approximation term is also derived to learn a discriminative projection that bridges data with their codes by extracting block-diagonal features from the data, which ensures that the approximation coefficients are clearly associated with their label information. Then, a robust multiclass classifier is trained over the extracted block-diagonal codes for accurate label predictions. Experimental results verify the effectiveness of our algorithm.
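The full LC-PDL model (locality constraint, block-diagonal discriminative term, joint dictionary and projection learning) is beyond a short example, but the underlying class-wise "project, then reconstruct" idea can be sketched in a strongly simplified form: learn one orthonormal dictionary per class from that class's training samples, take the projection as its transpose, and classify a new sample by which class reconstructs it best. This reduces to a class-specific subspace classifier and is only a rough, assumed proxy for the method described above; all names and the number of atoms are illustrative.

```python
import numpy as np

def fit_class_subspaces(X_by_class, n_atoms=10):
    """For each class, learn an orthonormal 'dictionary' D_k (top principal
    directions of that class's samples) and use P_k = D_k.T as the projection
    mapping a sample to its code.  X_by_class: dict label -> (d x n_k) matrix."""
    models = {}
    for label, X in X_by_class.items():
        mean = X.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(X - mean, full_matrices=False)
        models[label] = (mean, U[:, :n_atoms])   # (class mean, dictionary D_k)
    return models

def classify(x, models):
    """Assign x to the class whose dictionary reconstructs it best:
    code = D_k.T @ (x - mean), reconstruction = D_k @ code + mean."""
    best, best_err = None, np.inf
    for label, (mean, D) in models.items():
        code = D.T @ (x[:, None] - mean)
        err = np.linalg.norm(x[:, None] - (D @ code + mean))
        if err < best_err:
            best, best_err = label, err
    return best
```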

58 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
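A small sketch of the gray-tone spatial-dependence (co-occurrence) matrix and a few features derived from it, in the spirit of the fourteen features defined in the paper. The quantization to eight levels, the single displacement and the three features shown are simplifications for illustration; the paper averages over several directions.

```python
import numpy as np

def glcm(gray, levels=8, offset=(0, 1)):
    """Gray-tone spatial-dependence matrix: P[i, j] counts how often a pixel with
    quantized level i has a neighbor (at the given offset) with level j.
    gray is assumed to be an 8-bit (0-255) grayscale image."""
    q = (gray.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[q[r, c], q[r + dr, c + dc]] += 1
    P += P.T                      # count each pair in both directions (symmetric matrix)
    return P / P.sum()

def texture_features(P):
    """Three of the classic co-occurrence features (the paper defines fourteen)."""
    i, j = np.indices(P.shape)
    return {
        "contrast": np.sum((i - j) ** 2 * P),
        "energy": np.sum(P ** 2),                  # angular second moment
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }
```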

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
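The following sketch is not SPIHT itself (there is no spatial-orientation-tree set partitioning), but it illustrates two of the principles named above, partial ordering by magnitude and ordered bit-plane transmission: coefficients become significant at the bit plane matching their magnitude and are then refined one bit per pass, so the output stream can be truncated after any pass and still decoded to a coarser approximation. The fixed number of bit planes is an assumption for brevity.

```python
import numpy as np

def bitplane_stream(coeffs, n_planes=8):
    """Progressive (embedded) encoding of wavelet-like coefficients by bit planes.
    Each pass emits: which not-yet-significant coefficients exceed the current
    threshold, with their signs (sorting pass), then one refinement bit for each
    coefficient found significant in an earlier pass (refinement pass)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    threshold = 2.0 ** (n_planes - 1)
    significant = np.zeros(c.size, dtype=bool)
    stream = []
    for _ in range(n_planes):
        newly = (~significant) & (np.abs(c) >= threshold)
        stream.append(("sorting", np.where(newly)[0].tolist(),
                       np.sign(c[newly]).tolist()))
        refinement = ((np.abs(c[significant]) // threshold) % 2).astype(int)
        stream.append(("refinement", refinement.tolist()))
        significant |= newly
        threshold /= 2.0
    return stream

# Example: encode a few coefficients; truncating `stream` early yields a coarser code.
print(bitplane_stream([63, -34, 49, 10, 7, 13, -12, 7]))
```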

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; after reseeding, eight constructs maintained in a bioreactor could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations