Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts they represent, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
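The co-occurrence representation proposed above can be made concrete with a short sketch. The snippet below is only an illustration under the assumption that local descriptors have already been quantized into "visual words" at keypoint locations; it counts how often pairs of words occur near each other, which is one simple way of encoding spatial relationships between detected concepts. The function name and parameters are hypothetical and are not taken from the MUCKE deliverable.

```python
import numpy as np

def concept_cooccurrence(keypoints, words, n_words, radius=20.0):
    """Spatial co-occurrence of quantized local descriptors ("visual words").

    keypoints: N x 2 array of (x, y) positions of detected local features.
    words:     length-N array of visual-word indices assigned to those features.
    Two words co-occur when their keypoints lie within `radius` pixels of each
    other. Returns an n_words x n_words matrix normalized to sum to one.
    """
    keypoints = np.asarray(keypoints, dtype=np.float64)
    words = np.asarray(words, dtype=int)
    cooc = np.zeros((n_words, n_words), dtype=np.float64)
    for i in range(len(keypoints)):
        # Distances from keypoint i to every other keypoint.
        d = np.linalg.norm(keypoints - keypoints[i], axis=1)
        for j in np.nonzero((d > 0) & (d <= radius))[0]:
            cooc[words[i], words[j]] += 1.0
    total = cooc.sum()
    return cooc / total if total > 0 else cooc
```

The resulting matrix can be flattened and appended to a bag-of-words histogram, so a classifier sees not only which concepts are present but also which ones tend to appear together.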
Citations
Journal ArticleDOI
TL;DR: A complexity analysis of the proposed block Hough transform algorithm sets constraints on the complexity of algorithms used for block decomposition, so that the total decomposition and Hough transform application time is much less than the time consumed by the usual point Hough transform.

36 citations


Cites background from "Image Processing"

  • ...(d) Set I(x, y) = 0 ∀ x ∈ [X_F[1], X_L[1]], y ∈ [Y_F[1]....

    [...]

  • ...The description of a digital image in terms of simple geometrical shapes is a well established methodology that often proves useful for effective image segmentation [1]....

    [...]

  • ...Sobel, Prewitt) to the edge image, this span is reduced to a total of n/4 [1]....

    [...]

  • ...(c) Set X_F[1] = x_0, X_L[1] = x_op, Y_F[1] = y_0, Y_L[1] = y_op....

    [...]
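For context, the per-pixel voting that the block-based method above aims to avoid can be sketched in a few lines: in the classical point Hough transform every edge pixel votes for all (rho, theta) line parameters passing through it, so the accumulator cost grows with the number of edge pixels. This is a generic NumPy illustration of that baseline, not the cited block algorithm; names and parameters are illustrative.

```python
import numpy as np

def point_hough_lines(edge_img, n_theta=180):
    """Classical point Hough transform for straight lines.

    edge_img: 2-D binary array where nonzero pixels are edge points.
    Returns (accumulator, rhos, thetas); peaks in the accumulator
    correspond to prominent lines x*cos(theta) + y*sin(theta) = rho.
    """
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(n_theta))            # angles 0..179 degrees
    diag = int(np.ceil(np.hypot(h, w)))                # largest possible |rho|
    rhos = np.arange(-diag, diag + 1)
    accumulator = np.zeros((len(rhos), n_theta), dtype=np.int64)

    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(edge_img)                      # coordinates of edge pixels
    for x, y in zip(xs, ys):
        rho_idx = np.round(x * cos_t + y * sin_t).astype(int) + diag
        accumulator[rho_idx, np.arange(n_theta)] += 1  # one vote per angle
    return accumulator, rhos, thetas
```

A block-based variant first describes the edge image in terms of simple geometrical shapes and lets each block vote, which is why the cited complexity analysis ties the decomposition cost to the overall speed-up.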

Journal ArticleDOI
TL;DR: In this paper, the authors used ultrasound imaging to explore in vivo the arms of the cephalopod mollusc Octopus vulgaris and measured the dimensions of the arm and its internal structures such as muscle bundles and neural components.
Abstract: Octopus arms are extremely dexterous structures. The special arrangements of the muscle fibers and nerve cord allow a rich variety of complex and fine movements under neural control. Historically, the arm structure has been investigated using traditional comparative morphological ex vivo analysis. Here, we employed ultrasound imaging, for the first time, to explore in vivo the arms of the cephalopod mollusc Octopus vulgaris. Sonographic examination (linear transducer, 18 MHz) was carried out in anesthetized animals along the three anatomical planes: transverse, sagittal and horizontal. Images of the arm were comparable to the corresponding histological sections. We were able, in a non-invasive way, to measure the dimensions of the arm and its internal structures such as muscle bundles and neural components. In addition, we evaluated echo intensity signals as an expression of the difference in the muscular organization of the tissues examined (i.e. transverse versus longitudinal muscles), finding different reflectivity based on different arrangements of fibers and their intimate relationship with other tissues. In contrast to classical preparative procedures, ultrasound imaging can provide rapid, destruction-free access to morphological data from numerous specimens, thus extending the range of techniques available for comparative studies of invertebrate morphology.

36 citations


Cites methods from "Image Processing"

  • ...Image processing with ImageJ....

    [...]

  • ...Measurement of echo intensity The mean echo intensity was determined following the method of Scholten and colleagues (Scholten et al., 2003) (see also Pillen et al., 2009) in selected areas of interest, using ImageJ (Abramoff et al., 2004)....

    [...]

  • ...The results of mean muscle echo intensity together with area and minimum and maximum values for each region are shown using ImageJ analysis tool....

    [...]
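The echo-intensity measurement cited above is essentially the mean gray level inside a region of interest, together with the ROI area and minimum/maximum values that ImageJ reports. The snippet below is a minimal NumPy equivalent for illustration, not the exact ImageJ workflow used in the study.

```python
import numpy as np

def roi_echo_statistics(ultrasound_img, roi_mask):
    """Mean echo intensity (mean gray level) plus area and min/max in a ROI.

    ultrasound_img: 2-D grayscale array (e.g. uint8 values 0-255).
    roi_mask:       boolean array of the same shape marking the region of interest.
    """
    roi_values = ultrasound_img[roi_mask]
    return {
        "mean_echo_intensity": float(roi_values.mean()),
        "area_px": int(roi_mask.sum()),
        "min": float(roi_values.min()),
        "max": float(roi_values.max()),
    }
```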

Proceedings ArticleDOI
04 Apr 2018
TL;DR: In this paper, a weakly supervised learning method was proposed for lung CT image segmentation, where segmentation generated in previous steps (first by unsupervised segmentation then by neural networks) is used as ground truth for the next level of network learning.
Abstract: Image segmentation is a fundamental problem in medical image analysis. In recent years, deep neural networks have achieved impressive performance on many medical image segmentation tasks through supervised learning on large manually annotated datasets. However, expert annotations on big medical datasets are tedious, expensive or sometimes unavailable. Weakly supervised learning can reduce the annotation effort but still requires a certain amount of expertise. Recently, deep learning has shown the potential to produce more accurate predictions than the original erroneous labels. Inspired by this, we introduce a very weakly supervised learning method for cystic lesion detection and segmentation in lung CT images, without any manual annotation. Our method works in a self-learning manner, where the segmentation generated in previous steps (first by unsupervised segmentation, then by neural networks) is used as ground truth for the next level of network learning. Experiments on a cystic lung lesion dataset show that the network can perform better than the initial unsupervised annotation, and progressively improves itself through self-learning.

36 citations
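The self-learning scheme described in the abstract above, where the labels produced at one stage become the training targets for the next, can be outlined roughly as follows. The segmentation and training routines are passed in as placeholder callables; this is a sketch of the loop structure only, not the authors' implementation.

```python
def self_learning_segmentation(ct_volumes, unsupervised_segment, train_network,
                               predict_masks, n_rounds=3):
    """Self-learning loop: pseudo-labels from one stage train the next network.

    ct_volumes:           iterable of CT volumes (arrays).
    unsupervised_segment: callable(volume) -> initial mask (no manual annotation).
    train_network:        callable(volumes, masks) -> trained segmentation model.
    predict_masks:        callable(model, volume) -> refined mask.
    """
    # Round 0: crude masks from unsupervised segmentation seed the process.
    pseudo_labels = [unsupervised_segment(vol) for vol in ct_volumes]

    model = None
    for _ in range(n_rounds):
        # Train on the current pseudo-labels as if they were ground truth.
        model = train_network(ct_volumes, pseudo_labels)
        # The network's own predictions become the labels for the next round.
        pseudo_labels = [predict_masks(model, vol) for vol in ct_volumes]
    return model
```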

Proceedings ArticleDOI
21 Apr 1997
TL;DR: Two novel feature extraction techniques for rotation invariant texture classification are presented, using the wavelet transform and Gaussian Markov random field modelling, to give a consistently high performance for rotated textures in the presence of noise.
Abstract: The importance of texture analysis and classification in image processing is well known. However, many existing texture classification schemes suffer from a number of drawbacks. A large number of features are commonly used to represent each texture and an excessively large image area is often required for the texture analysis, both leading to high computational complexity. Furthermore, most existing schemes are highly orientation dependent and thus cannot correctly classify textures after rotation. In this paper, two novel feature extraction techniques for rotation invariant texture classification are presented. These schemes, using the wavelet transform and Gaussian Markov random field modelling, are shown to give a consistently high performance for rotated textures in the presence of noise. Moreover, they use just four features to represent each texture and require only a 16×16 image area for their analysis, leading to a significantly lower computational complexity than most existing schemes.

36 citations
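As a rough illustration of wavelet-based texture features computed from a small window, the sketch below sums the horizontal and vertical detail energies at each decomposition level, so that a 90-degree rotation of the texture leaves the feature unchanged; two levels on a 16×16 patch yield four features. It assumes the PyWavelets package and is a generic sketch, not the exact feature set of the cited paper, which also relies on Gaussian Markov random field modelling.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_texture_features(patch, wavelet="db1", levels=2):
    """Four rotation-insensitive energy features from a 2-D texture patch.

    At each level the 2-D DWT yields horizontal (cH), vertical (cV) and
    diagonal (cD) detail subbands; cH and cV swap under a 90-degree rotation,
    so their combined energy is insensitive to that rotation.
    """
    features = []
    current = np.asarray(patch, dtype=np.float64)
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(current, wavelet)
        features.append(np.mean(cH ** 2) + np.mean(cV ** 2))  # combined H/V detail energy
        features.append(np.mean(cD ** 2))                     # diagonal detail energy
        current = cA                                          # recurse on the approximation
    return np.asarray(features)  # 2 features per level -> 4 features for 2 levels
```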

Journal ArticleDOI
19 Jan 2016 - PLOS ONE
TL;DR: Caudate nuclei volume and prefrontal fractional anisotropy, not frontal gray matter thickness, showed unique and combined significance for processing speed in Parkinson’s disease.
Abstract: Objective This prospective investigation examined: 1) processing speed and working memory relative to other cognitive domains in non-demented, medically managed idiopathic Parkinson’s disease, and 2) the predictive role of cortical/subcortical gray thickness/volume and white matter fractional anisotropy on processing speed and working memory. Methods Participants completed a neuropsychological protocol, Unified Parkinson’s Disease Rating Scale, brain MRI, and fasting blood draw to rule out vascular contributors. Within-group a priori anatomical contributors included bilateral frontal thickness, caudate nuclei volume, and prefrontal white matter fractional anisotropy. Results Idiopathic Parkinson’s disease (n = 40; Hoehn & Yahr stages 1–3) and non-Parkinson’s disease ‘control’ peers (n = 40) were matched on demographics, general cognition, comorbidity, and imaging/blood vascular metrics. Cognitively, individuals with Parkinson’s disease were significantly more impaired than controls on tests of processing speed, with secondary deficits in working memory and subtle impairments in memory, abstract reasoning, and visuoperceptual/spatial abilities. Anatomically, Parkinson’s disease individuals were not statistically different in cortical gray thickness or subcortical gray volumes with the exception of the putamen. Tract-Based Spatial Statistics showed reduced prefrontal fractional anisotropy for Parkinson’s disease relative to controls. Within Parkinson’s disease, prefrontal fractional anisotropy and caudate nucleus volume partially explained processing speed. For controls, only prefrontal white matter was a significant contributor to processing speed. There were no significant anatomical predictors of working memory for either group. Conclusions Caudate nuclei volume and prefrontal fractional anisotropy, not frontal gray matter thickness, showed unique and combined significance for processing speed in Parkinson’s disease. Findings underscore the relevance of examining gray-white matter interactions and also highlight clinical processing speed metrics as potential indicators of early cognitive impairment in PD.

36 citations
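The kind of within-group model referred to above, processing speed regressed on prefrontal fractional anisotropy and caudate volume, can be illustrated with a minimal ordinary-least-squares fit over hypothetical participant arrays. This is a generic sketch, not the authors' statistical pipeline.

```python
import numpy as np

def fit_processing_speed_model(speed, prefrontal_fa, caudate_volume):
    """OLS fit of processing speed on two anatomical predictors.

    speed, prefrontal_fa, caudate_volume: 1-D arrays, one value per participant.
    Returns the coefficients (intercept, FA slope, volume slope) and R^2.
    """
    speed = np.asarray(speed, dtype=np.float64)
    X = np.column_stack([np.ones_like(speed), prefrontal_fa, caudate_volume])
    coeffs, _, _, _ = np.linalg.lstsq(X, speed, rcond=None)
    predictions = X @ coeffs
    ss_res = np.sum((speed - predictions) ** 2)
    ss_tot = np.sum((speed - speed.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot
```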

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.

20,442 citations
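The gray-tone spatial-dependence (co-occurrence) matrix behind these features can be sketched briefly: intensities are quantized to a small number of gray tones, pairs of pixels separated by a fixed offset are counted, and simple statistics of the normalized matrix serve as texture features. The snippet below computes only two of the classic features, contrast and angular second moment (energy), and is an illustration rather than the paper's full feature set.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix plus two Haralick-style features.

    img:    2-D grayscale array.
    levels: number of gray tones after quantization.
    offset: (dy, dx) displacement between the paired pixels.
    """
    img = np.asarray(img, dtype=np.float64)
    q = np.floor(img / (img.max() + 1.0) * levels).astype(int)  # quantize to gray tones
    dy, dx = offset
    h, w = q.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1.0)  # count co-occurring gray-tone pairs
    glcm = glcm + glcm.T                               # symmetric, as in Haralick's definition
    glcm /= glcm.sum()                                 # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float(np.sum(glcm * (i - j) ** 2)),  # local gray-tone variation
        "energy": float(np.sum(glcm ** 2)),              # angular second moment
    }
```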

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,890 citations
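The magnitude-ordered, bit-plane style of transmission that EZW and SPIHT share can be illustrated with a deliberately simplified pass structure: the threshold halves at each pass, and coefficients are flagged when their magnitude first exceeds it. The sketch below shows only this ordering principle and omits the zerotree/set-partitioning machinery that gives the real coders their efficiency.

```python
import numpy as np

def bitplane_significance_passes(coeffs, n_passes=4):
    """Simplified significance passes over wavelet coefficients.

    coeffs: array of transform coefficients.
    Each pass halves the threshold; in a real coder, the positions and signs of
    the newly significant coefficients, plus one refinement bit per already
    significant coefficient, would be transmitted for that bit plane.
    """
    coeffs = np.asarray(coeffs, dtype=np.float64).ravel()
    threshold = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
    significant = np.zeros(coeffs.shape, dtype=bool)
    passes = []
    for _ in range(n_passes):
        newly = (~significant) & (np.abs(coeffs) >= threshold)
        significant |= newly
        passes.append({"threshold": threshold, "newly_significant": int(newly.sum())})
        threshold /= 2.0
    return passes
```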

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; eight reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations