Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts they represent, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
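To make the co-occurrence idea concrete, the following is a minimal sketch (not the MUCKE implementation) of building a co-occurrence matrix over quantized pixel labels for a fixed spatial offset; the function name `cooccurrence_matrix`, the (dx, dy) offset convention and the 8-level quantization in the demo are illustrative assumptions.

```python
import numpy as np

def cooccurrence_matrix(labels, dx=1, dy=0, n_labels=None):
    """Count how often label pairs occur at a fixed spatial offset.

    `labels` is a 2-D integer array (e.g. quantized gray levels or
    per-pixel concept indices); the (dx, dy) offset defines the spatial
    relationship being captured.
    """
    labels = np.asarray(labels)
    if n_labels is None:
        n_labels = int(labels.max()) + 1
    h, w = labels.shape
    # Pair each pixel with its neighbour at offset (dy, dx).
    a = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = labels[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((n_labels, n_labels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m

if __name__ == "__main__":
    img = np.random.randint(0, 8, size=(64, 64))   # 8 quantized levels
    glcm = cooccurrence_matrix(img, dx=1, dy=0)
    print(glcm.shape, glcm.sum())                  # (8, 8), 64 * 63 pairs
```

Stacking such matrices for several offsets yields a representation that records which labels appear next to which, i.e. the spatial relationships that plain bag-of-features histograms discard.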
Citations
Proceedings Article
29 Apr 2001
TL;DR: A two-level compilation scheme for generating high-speed binary image morphology pipelines from a textual description of the algorithm is discussed; by reusing a generic machine it avoids long synthesis times and achieves compile times similar to software compile times, while still delivering a 10X speed-up over the software implementation.
Abstract: This paper discusses a two-level compilation scheme used for generating high-speed binary image morphology pipelines from a textual description of the algorithm. The first-level compiler generates a generic morphology machine which is customized for the specified set of instructions by the second-level compiler. Because the generic machine is reused, we are able to avoid long synthesis times and achieve compile times similar to software compile times, while still achieving a 10X speed-up over the software implementation.
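As background on the operations such a pipeline chains together (the paper's hardware compilation scheme itself is not reproduced here), below is a minimal software sketch of binary erosion and dilation with a small structuring element; the function names and the 3x3 element are illustrative assumptions.

```python
import numpy as np

def binary_erode(img, se):
    """Binary erosion: a pixel stays 1 only if every structuring-element
    neighbour (centred on it) is foreground."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k, mode="constant", constant_values=0)
    out = np.ones_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if se[dy + k, dx + k]:
                out &= padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out

def binary_dilate(img, se):
    """Binary dilation: a pixel becomes 1 if any structuring-element
    neighbour is foreground."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if se[dy + k, dx + k]:
                out |= padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out

if __name__ == "__main__":
    img = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
    se = np.ones((3, 3), dtype=np.uint8)
    opened = binary_dilate(binary_erode(img, se), se)  # morphological opening
    print(opened.shape)
```

A hardware pipeline of the kind described simply chains many such neighbourhood operations, which is why avoiding re-synthesis for each new chain pays off.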

16 citations


Cites background from "Image Processing"

  • ...For a more detailed discussion of binary morphology, please refer to [3] or [4]....


Proceedings Article
22 May 2003
TL;DR: In this paper, a methodology for automatic high-resolution satellite image georeferencing is proposed, taking advantage of existing digital orthophoto-maps and photo-planes derived from aerial photogrammetry.
Abstract: A large variety of satellite images is available for almost every part of the world. Thanks to high-resolution imagery, urban areas can also take advantage of this abundance of data. Studying and monitoring phenomena in these areas requires a high level of detail, which such imagery can provide. However, any application of this kind requires accurate georeferencing of the images to a given geodetic reference system. In this paper, a methodology for automatic high-resolution satellite image georeferencing is proposed, taking advantage of existing digital orthophoto-maps and photo-planes derived from aerial photogrammetry. A case study applied to the historical urban area of Venezia is described.
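The core georeferencing step can be sketched under the assumption that matched control points between the satellite image and the reference orthophoto-map are already available: a 2-D affine transform from pixel to ground coordinates is estimated by least squares. The point coordinates below are hypothetical, and the sketch omits the automatic matching that the paper contributes.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform mapping image pixel coordinates
    (src) to map/ground coordinates (dst). Both arrays are (N, 2), N >= 3.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                        # (N, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return coeffs.T                                   # (2, 3) affine matrix

def apply_affine(A, pts):
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ A.T

if __name__ == "__main__":
    # Hypothetical matched control points: pixel -> ground coordinates.
    pixels = np.array([[10, 12], [500, 40], [480, 620], [30, 600]])
    ground = np.array([[2309410.0, 5035120.0], [2309900.0, 5035090.0],
                       [2309880.0, 5034510.0], [2309430.0, 5034530.0]])
    A = fit_affine(pixels, ground)
    residuals = apply_affine(A, pixels) - ground
    print("RMS residual (m):", np.sqrt((residuals ** 2).mean()))
```

With more control points the same least-squares fit also exposes outliers through the residuals, which is useful when the matches come from an automatic procedure.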

16 citations

01 Jan 2011
TL;DR: The ultrasound screening of placenta in the initial stages of gestation helps to identify the complication induced by GDM on the placental development which accounts for the fetal growth.
Abstract: Medical diagnosis is a major challenge faced by medical experts, and highly specialized tools are necessary to assist them in diagnosing disease. Gestational Diabetes Mellitus (GDM) is a condition in pregnant women that increases blood sugar levels and complicates pregnancy by affecting placental growth. Ultrasound screening of the placenta in the initial stages of gestation helps to identify the complications induced by GDM on placental development, which in turn accounts for fetal growth. This work focuses on the classification of ultrasound placenta images into normal and abnormal images based on statistical measurements. Ultrasound images are usually low in resolution, which may lead to loss of characteristic features. The placenta images obtained in an ultrasound examination are stereo mapped to reconstruct the placenta structure, and dimensionality reduction is performed on the stereo-mapped images using wavelet decomposition. The ultrasound placenta image is segmented using a watershed approach to obtain statistical measurements of the stereo-mapped images. Using these statistical measurements, the ultrasound placenta images are then classified as normal or abnormal using a back-propagation neural network.
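A minimal sketch of two of the stages described above (wavelet-based dimensionality reduction and statistical measurements) follows; it is not the authors' pipeline, the watershed segmentation and back-propagation classifier are only indicated in comments, and the single-level Haar transform and feature set are illustrative choices.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet decomposition: approximation (cA)
    and horizontal/vertical/diagonal detail sub-bands, each half-size."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    cA = (a + b + c + d) / 2.0
    cH = (a + b - c - d) / 2.0
    cV = (a - b + c - d) / 2.0
    cD = (a - b - c + d) / 2.0
    return cA, (cH, cV, cD)

def statistical_features(region):
    """Simple first-order statistics of a (segmented) region."""
    region = region.ravel().astype(float)
    mu = region.mean()
    return np.array([mu, region.std(),
                     ((region - mu) ** 3).mean(),   # third central moment
                     ((region - mu) ** 4).mean()])  # fourth central moment

if __name__ == "__main__":
    ultrasound = np.random.rand(256, 256)   # stand-in for a placenta scan
    cA, _ = haar_level(ultrasound)          # dimensionality reduction step
    feats = statistical_features(cA)        # watershed segmentation omitted here
    print(feats)   # such features would feed a back-propagation (MLP) classifier
```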

16 citations


Cites methods from "Image Processing"

  • ...The binary image is obtained with useful details represented as 1 and others represented as 0 [17]....


Journal Article
TL;DR: The data reveal that epithelial cells can disseminate while retaining competence to adhere and proliferate, and Twist1+ epithelium retains intercellular junctions and proliferative capacity.
Abstract: Dissemination is the process by which cells detach and migrate away from a multicellular tissue. The epithelial-to-mesenchymal transition (EMT) conceptualizes dissemination in a stepwise fashion, with downregulation of E-cadherin leading to loss of intercellular junctions, induction of motility, and then escape from the epithelium. This gain of migratory activity is proposed to be mutually exclusive with proliferation. We previously developed a dissemination assay based on inducible expression of the transcription factor Twist1 and here utilize it to characterize the timing and dynamics of intercellular adhesion, proliferation and migration during dissemination. Surprisingly, Twist1(+) epithelium displayed extensive intercellular junctions, and Twist1(-) luminal epithelial cells could still adhere to disseminating Twist1(+) cells. Although proteolysis and proliferation were both observed throughout dissemination, neither was absolutely required. Finally, Twist1(+) cells exhibited a hybrid migration mode; their morphology and nuclear deformation were characteristic of amoeboid cells, whereas their dynamic protrusive activity, pericellular proteolysis and migration speeds were more typical of mesenchymal cells. Our data reveal that epithelial cells can disseminate while retaining competence to adhere and proliferate.

16 citations


Cites methods from "Image Processing"

  • ...ImageJ software (Abramoff et al., 2004) and Adobe Photoshop were used to crop images, place scale bars, and adjust brightness and contrast across entire images, as needed....


Journal Article
TL;DR: A comparison of different methods commonly used in literature for evaluation of cartilage change under weight-bearing conditions showed that the different methods agree in their thickness distribution.
Abstract: Osteoarthritis is a degenerative disease affecting bones and cartilage especially in the human knee. In this context, cartilage thickness is an indicator for knee cartilage health. Thickness measurements are performed on medical images acquired in-vivo. Currently, there is no standard method agreed upon that defines a distance measure in articular cartilage. In this work, we present a comparison of different methods commonly used in literature. These methods are based on nearest neighbors, surface normal vectors, local thickness and potential field lines. All approaches were applied to manual segmentations of tibia and lateral and medial tibial cartilage performed by experienced raters. The underlying data were contrast agent-enhanced cone-beam C-arm CT reconstructions of one healthy subject's knee. The subject was scanned three times, once in supine position and two times in a standing weight-bearing position. A comparison of the resulting thickness maps shows similar distributions and high correlation coefficients between the approaches above 0.90. The nearest neighbor method results on average in the lowest cartilage thickness values, while the local thickness approach assigns the highest values. We showed that the different methods agree in their thickness distribution. The results will be used for a future evaluation of cartilage change under weight-bearing conditions.
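The nearest-neighbour definition is the simplest of the compared methods and can be sketched directly, assuming the two cartilage boundary surfaces are available as 3-D point clouds; the point data below are synthetic and the function name is an illustrative assumption.

```python
import numpy as np

def nearest_neighbor_thickness(inner_pts, outer_pts):
    """For each point on the inner surface, the distance to the closest
    point on the outer surface. This is a lower bound on local thickness,
    which is consistent with this method reporting the smallest values."""
    inner = np.asarray(inner_pts, dtype=float)    # (N, 3)
    outer = np.asarray(outer_pts, dtype=float)    # (M, 3)
    # Pairwise distances, N x M (fine for moderate point counts).
    d = np.linalg.norm(inner[:, None, :] - outer[None, :, :], axis=2)
    return d.min(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inner = rng.random((500, 3)) * [40.0, 40.0, 0.0]              # flat patch, mm
    outer = inner + [0.0, 0.0, 2.0] + rng.normal(0, 0.1, (500, 3))
    t = nearest_neighbor_thickness(inner, outer)
    print("mean thickness (mm):", t.mean())
```

The surface-normal, local-thickness and potential-field definitions differ mainly in which direction the distance is measured along, which explains why their maps agree in distribution while shifting the absolute values.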

16 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks on three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
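The textural features in question are statistics of the gray-tone co-occurrence matrix; a minimal sketch of a few commonly used ones (contrast, angular second moment/energy, a homogeneity variant, and correlation) is shown below, assuming a normalized co-occurrence matrix is already available. The exact feature set and normalization differ across implementations.

```python
import numpy as np

def haralick_subset(p):
    """A few texture statistics computed from a normalized gray-tone
    co-occurrence matrix p (p.sum() == 1)."""
    p = np.asarray(p, dtype=float)
    n = p.shape[0]
    i, j = np.indices((n, n))
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()                        # angular second moment
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    mu_i = (i * p).sum()
    mu_j = (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "correlation": correlation}

if __name__ == "__main__":
    # Toy normalized co-occurrence matrix for 4 gray levels.
    glcm = np.random.rand(4, 4)
    glcm /= glcm.sum()
    print(haralick_subset(glcm))
```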

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries, image perception, sampling and quantization, transforms, stochastic models, enhancement, filtering and restoration, analysis and computer vision, reconstruction from projections, and data compression.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal Article
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
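The ordering principles described above (magnitude ordering and bit-plane transmission) can be illustrated with a minimal sketch that emits significance and refinement passes from the most significant bit-plane down; the zerotree and set-partitioning structures that make EZW and SPIHT efficient are deliberately omitted, so this is only the progressive-transmission skeleton, with illustrative function and variable names.

```python
import numpy as np

def bitplane_stream(coeffs, n_planes=None):
    """Emit (threshold, significance_map, refinement_bits) passes from the
    most significant bit-plane down: the embedded, progressive ordering
    that EZW/SPIHT exploit (their compact encoding of the maps is omitted)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    mag = np.abs(c)
    T = 2 ** int(np.floor(np.log2(mag.max())))     # initial threshold
    significant = np.zeros(c.size, dtype=bool)
    planes = n_planes if n_planes is not None else int(np.log2(T)) + 1
    passes = []
    for _ in range(planes):
        newly = (~significant) & (mag >= T)                       # sorting pass
        refinement = ((mag[significant] // T) % 2).astype(int)    # refinement pass
        passes.append((T, newly.copy(), refinement))
        significant |= newly
        T //= 2
        if T < 1:
            break
    return passes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coeffs = rng.normal(0, 50, size=64).round()    # stand-in wavelet coefficients
    for T, newly, refine in bitplane_stream(coeffs):
        print(f"T={T}  newly significant={int(newly.sum())}  refinement bits={len(refine)}")
```

Truncating the emitted stream at any pass still yields a decodable, lower-quality approximation, which is the embedded property the abstract refers to.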

5,890 citations

Journal Article
TL;DR: Decellularizing hearts by coronary perfusion with detergents preserved the underlying extracellular matrix and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs maintained by coronary perfusion in a bioreactor could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal Article
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.
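One standard way such systems compare two irises, following Daugman's iris-code approach rather than anything specific to the systems surveyed here, is to reduce each iris to a binary code and measure the fraction of disagreeing, unoccluded bits (the normalized Hamming distance). The sketch below assumes the binary codes and occlusion masks have already been extracted; the code length and noise levels are illustrative.

```python
import numpy as np

def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Normalized Hamming distance between two binary iris codes, counting
    only bits that are valid (unoccluded by eyelids/lashes) in both."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
    noise = rng.random(2048) < 0.1                  # ~10% of bits flipped
    b = a ^ noise                                   # "same eye", slightly noisy
    mask = rng.random(2048) < 0.9                   # ~90% of bits unoccluded
    print("same eye HD:", iris_hamming_distance(a, b, mask, mask))
    c = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
    print("different eye HD:", iris_hamming_distance(a, c, mask, mask))
```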

2,046 citations