Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
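The co-occurrence idea the abstract proposes can be sketched as follows. This is an illustrative gray-level co-occurrence matrix for a single pixel offset, not MUCKE's actual representation; the `cooccurrence` helper and the toy image are assumptions for demonstration:

```python
import numpy as np

def cooccurrence(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix: counts how often gray level i
    occurs at offset (dx, dy) from gray level j."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    return glcm

img = np.array([[0, 0, 1],
                [0, 1, 2],
                [2, 2, 2]])
g = cooccurrence(img, dx=1, dy=0, levels=3)
# Horizontal pairs: (0,0) once, (0,1) twice, (1,2) once, (2,2) twice.
```

The matrix summarizes which gray levels co-occur at a given spatial offset, which is exactly the spatial information plain histograms discard.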
Citations
Journal ArticleDOI
TL;DR: In this article, the authors compare three different ways to incorporate prior information for electrical resistivity tomography (ERT): using a simple reference model, adding structural constraints to Occam's inversion and using geostatistical constraints.
Abstract: Many geophysical inverse problems are ill-posed and their solutions non-unique. It is thus important to reduce the set of mathematical solutions to more geologically plausible models by regularizing the inverse problem and incorporating all available prior information in the inversion process. We compare three different ways to incorporate prior information for electrical resistivity tomography (ERT): using a simple reference model, adding structural constraints to Occam’s inversion and using geostatistical constraints. We made the comparison on four real cases representing different field applications in terms of scales of investigation and level of heterogeneity. In those cases, when electromagnetic logging data are available in boreholes to control the solution, incorporating prior information clearly improves the correspondence with logging data compared to the standard smoothness constraint. However, the way it is incorporated may have a major impact on the solution. A reference model can often be used to constrain the inversion; however, it can lead to misinterpretation if its weight is too strong or its resistivity values inappropriate. When the computation of the vertical and/or horizontal correlation length is possible, geostatistical inversion gives reliable results everywhere in the section. However, adding geostatistical constraints can be difficult when there are not enough data to compute correlation lengths. When a known limit between two layers exists, structural constraints seem preferable, particularly when the limit is located in zones of low sensitivity for ERT. This work should help interpreters include their prior information directly in the inversion process in an appropriate way.
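The reference-model strategy can be sketched as damped least squares that penalizes departures from a prior model. Everything here (the linear forward operator `G`, the synthetic data, `m_ref`, the weight `lam`) is a made-up toy, not the paper's ERT setup, which is nonlinear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward problem: d = G m + noise.
G = rng.standard_normal((20, 10))
m_true = np.linspace(1.0, 2.0, 10)
d = G @ m_true + 0.01 * rng.standard_normal(20)

m_ref = np.full(10, 1.5)   # simple reference model (prior guess)
lam = 1.0                  # regularization weight
W = np.eye(10)             # identity: damp directly toward m_ref

# Minimize ||G m - d||^2 + lam^2 ||W (m - m_ref)||^2, closed form:
# m = m_ref + (G^T G + lam^2 W^T W)^{-1} G^T (d - G m_ref)
A = G.T @ G + lam**2 * W.T @ W
m_est = m_ref + np.linalg.solve(A, G.T @ (d - G @ m_ref))
```

As the abstract warns, too large a `lam` (or a poorly chosen `m_ref`) pulls the solution toward the reference model regardless of the data.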

49 citations

Journal ArticleDOI
TL;DR: An automated system for planning and optimization of lumber production using Machine Vision and Computed Tomography and a prototype implementation shows significant gains in value yield recovery when compared with lumber processing strategies that use only the information derived from the external log structure.
Abstract: An automated system for planning and optimization of lumber production using Machine Vision and Computed Tomography (CT) is proposed. Cross-sectional CT images of hardwood logs are analyzed using machine vision algorithms, and internal defects in the logs are identified and localized. A virtual 3-D reconstruction of the hardwood log and its internal defects is generated using Kalman filter-based tracking algorithms. Various sawing operations are simulated on the virtual 3-D reconstruction of the log, and the resulting virtual lumber products are automatically graded using rules stipulated by the National Hardwood Lumber Association (NHLA). Knowledge of the internal log defects is exploited to formulate sawing strategies that optimize the value yield recovery of the resulting lumber products. A prototype implementation shows significant gains in value yield recovery when compared with lumber processing strategies that use only the information derived from the external log structure.

49 citations


Cites background or methods from "Image Processing"

  • ...Consequently, a morphological dilation operation [30] is used to recover the knot boundaries as shown in Figure 3(c)....


  • ...in shape, false holes, typically caused by small cracks or grayscale valleys between successive rings, are removed by using a combination of morphological erosion and dilation operations on the thresholded result [30]....

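The erosion/dilation cleanup quoted above can be sketched with hand-rolled binary morphology. The `dilate`/`erode` helpers and the toy mask are illustrative assumptions (production code would use a library such as scipy.ndimage), but the opening operation shown is the standard way small false holes and specks are removed:

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1] = True
    return out

def erode(mask, k=1):
    """Erosion is dilation of the complement, complemented back."""
    return ~dilate(~mask, k)

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True   # a 3x3 "knot" region from thresholding
mask[0, 0] = True       # a one-pixel speck (false detection)

# Opening (erosion then dilation) removes the speck but restores the
# knot region; dilation alone would recover shrunken knot boundaries.
opened = dilate(erode(mask))
```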

Journal ArticleDOI
TL;DR: The streamlined and easy-to-use retrospective image-based movement correction method presented in this work significantly improves the image quality and the measured tracer kinetics of 18F-FDDNP PET images.
Abstract: Head movement during a PET scan (especially a dynamic scan) can affect both the qualitative and the quantitative aspects of an image, making it difficult to accurately interpret the results. The primary objective of this study was to develop a retrospective image-based movement correction (MC) method and evaluate its implementation on dynamic 2-(1-{6-[(2-18F-fluoroethyl)(methyl)amino]-2-naphthyl}ethylidene)malononitrile (18F-FDDNP) PET images of cognitively intact controls and patients with Alzheimer's disease (AD). Methods: Dynamic 18F-FDDNP PET images, used for in vivo imaging of β-amyloid plaques and neurofibrillary tangles, were obtained from 12 AD patients and 9 age-matched controls. For each study, a transmission scan was first acquired for attenuation correction. An accurate retrospective MC method that corrected for transmission–emission and emission–emission misalignments was applied to all studies. No restriction was assumed for zero movement between the transmission scan and the first emission scan. Logan analysis, with the cerebellum as the reference region, was used to estimate various regional distribution volume ratio (DVR) values in the brain before and after MC. Discriminant analysis was used to build a predictive model for group membership, using data with and without MC. Results: MC improved the image quality and quantitative values in 18F-FDDNP PET images. In this subject population, no significant difference in DVR value was observed in the medial temporal (MTL) region of controls and patients with AD before MC. However, after MC, significant differences in DVR values in the frontal, parietal, posterior cingulate, MTL, lateral temporal (LTL), and global regions were seen between the 2 groups (P
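The Logan reference-region analysis used above fits a line to transformed time–activity curves; the slope estimates the DVR. A minimal sketch on synthetic curves (the time grid, the exponential TACs, and the omission of the reference-region efflux term are all simplifying assumptions, not the study's actual kinetics):

```python
import numpy as np

t = np.linspace(0.5, 60.0, 40)   # minutes (hypothetical frame midpoints)
c_ref = np.exp(-t / 30.0)        # reference-region TAC (e.g. cerebellum)
c_roi = 2.0 * c_ref              # target-region TAC with true DVR = 2

def cumtrapz(c, t):
    """Running integral of a time-activity curve (trapezoidal rule)."""
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * (c[1:] + c[:-1]) / 2)))

# Simplified Logan plot: y = int(C_roi)/C_roi vs x = int(C_ref)/C_roi.
x = cumtrapz(c_ref, t) / c_roi
y = cumtrapz(c_roi, t) / c_roi

# Fit the late, linear portion; the slope estimates DVR.
slope, intercept = np.polyfit(x[20:], y[20:], 1)
```

Head movement mixes counts across regions, distorting both TACs, which is why the DVR estimates change after movement correction.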

49 citations

01 Jan 2007
TL;DR: This work reviews particle filtering techniques for tracking single and multiple moving objects in video sequences, by using different features such as colour, shape, motion, edge and sound, along with pros and cons.
Abstract: Object tracking in video sequences is a challenging task and has various applications. We review particle filtering techniques for tracking single and multiple moving objects in video sequences, by using different features such as colour, shape, motion, edge and sound. Pros and cons of these algorithms are discussed along with difficulties that have to be overcome. Results of a particular particle filter with colour and texture cues are reported. Conclusions and open research issues are formulated.
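The reviewed trackers share one skeleton, the bootstrap (SIR) particle filter: propagate particles through a motion model, weight them by an observation likelihood, estimate, resample. A minimal scalar sketch, where a plain Gaussian likelihood stands in for the colour/texture cues of real trackers (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(obs, n=500, q=0.5, r=1.0):
    """Bootstrap particle filter for a 1-D random-walk state."""
    particles = rng.normal(obs[0], 1.0, n)   # initialize near first measurement
    estimates = []
    for z in obs:
        particles = particles + rng.normal(0.0, q, n)   # motion model
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)   # observation likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))         # posterior mean
        particles = particles[rng.choice(n, n, p=w)]    # resample
    return np.array(estimates)

true = np.cumsum(rng.normal(0.0, 0.3, 50))   # hidden trajectory
obs = true + rng.normal(0.0, 1.0, 50)        # noisy measurements
est = particle_filter(obs)
```

In a video tracker the state would be a bounding-box position/scale and the likelihood would compare colour histograms or texture statistics instead of a scalar residual.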

49 citations


Cites background or methods from "Image Processing"

  • ...HSI (hue, saturation, intensity) representation [11] can also be used [12]....


  • ...An edge [11] is a property attached to an individual pixel and is calculated from the image function behaviour in a neighbourhood of that pixel....

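The second snippet's definition of an edge, a per-pixel property computed from the image function's behaviour in a neighbourhood, is usually realized with gradient kernels. A small sketch using Sobel kernels on a synthetic step edge (the naive loop is for clarity; real code would use vectorized convolution):

```python
import numpy as np

# Sobel kernels estimate the horizontal and vertical gradient at a pixel
# from its 3x3 neighbourhood.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            mag[y, x] = np.hypot(np.sum(KX * patch), np.sum(KY * patch))
    return mag

img = np.zeros((5, 8))
img[:, 4:] = 1.0                 # vertical step edge between columns 3 and 4
edges = sobel_magnitude(img)     # responds only along the step
```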

Journal ArticleDOI
TL;DR: The data suggest that KDP-1 is a novel KASH protein that functions to ensure the timely progression of the cell cycle between the end of S phase and the entry into mitosis.
Abstract: Klarsicht, ANC-1 and Syne homology (KASH) proteins localize to the outer nuclear membrane where they connect the nucleus to the cytoskeleton. KASH proteins interact with Sad1-UNC-84 (SUN) proteins to transfer forces across the nuclear envelope to position nuclei or move chromosomes. A new KASH protein, KDP-1, was identified in a membrane yeast two-hybrid screen of a Caenorhabditis elegans library using the SUN protein UNC-84 as bait. KDP-1 also interacted with SUN-1. KDP-1 was enriched at the nuclear envelope in a variety of tissues and required SUN-1 for nuclear envelope localization in the germline. Genetic analyses showed that kdp-1 was essential for embryonic viability, larval growth and germline development. kdp-1(RNAi) delayed the entry into mitosis in embryos, led to a small mitotic zone in the germline, and caused an endomitotic phenotype. Aspects of these phenotypes were similar to those seen in sun-1(RNAi), suggesting that KDP-1 functions with SUN-1 in the germline and early embryo. The data suggest that KDP-1 is a novel KASH protein that functions to ensure the timely progression of the cell cycle between the end of S phase and the entry into mitosis.

49 citations


Cites methods from "Image Processing"

  • ...Deconvolution was performed in ImageJ using the Iterative Deconvolve 3D plugin (Optinav)....


  • ...Movies were processed in Final Cut Express (Apple Computer) and ImageJ....


  • ...Kymographs were made using the ImageJ Kymograph or reslice plugin....


  • ...Images were uniformly manipulated using the remove background and levels controls in ImageJ (Abramoff et al., 2004)....

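The kymograph step mentioned above has a simple core: replot one fixed line of pixels from every movie frame as the rows of a single 2-D image (position vs. time), so a moving object traces a sloped streak. A toy sketch (the ImageJ plugins add interpolation and arbitrary line orientations; this minimal version and its toy movie are illustrative):

```python
import numpy as np

def kymograph(frames, row):
    """frames: (t, h, w) image stack; returns a (t, w) position-vs-time image."""
    return np.stack([f[row] for f in frames])

# Toy movie: a bright particle moving one pixel per frame along row 2.
frames = np.zeros((5, 4, 6))
for t in range(5):
    frames[t, 2, t] = 1.0

kymo = kymograph(frames, row=2)   # the particle traces a diagonal streak
```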

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
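The min-max decision rule the abstract describes classifies a feature vector by testing whether it falls inside each class's axis-aligned box (rectangular parallelepiped) spanned by the training minima and maxima. A minimal sketch with made-up 2-D features:

```python
import numpy as np

def fit_boxes(X, y):
    """Per-class (min, max) feature bounds: the rectangular parallelepipeds."""
    return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
            for c in np.unique(y)}

def classify(x, boxes):
    """Assign the first class whose box contains x; -1 if none matches."""
    for c, (lo, hi) in boxes.items():
        if np.all(x >= lo) and np.all(x <= hi):
            return c
    return -1

X = np.array([[1.0, 1.0], [2.0, 2.0],    # class 0 training features
              [8.0, 8.0], [9.0, 9.0]])   # class 1 training features
y = np.array([0, 0, 1, 1])
boxes = fit_boxes(X, y)
```

The appeal is speed (a handful of comparisons per class); the cost is that boxes can overlap or leave gaps, unlike the piecewise linear rule also used in the paper.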

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
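Two of the principles named in the abstract, partial ordering by magnitude and ordered bit-plane transmission with refinement, can be sketched without the zerotree/set-partitioning coding of the significance maps (which is where EZW and SPIHT get their real compression). The coefficient values and pass count below are illustrative:

```python
import numpy as np

def bitplane_approx(coeffs, passes):
    """Progressive reconstruction by descending bit planes: coefficients
    become significant at threshold T, then are refined by +/- T/2."""
    coeffs = np.asarray(coeffs, dtype=float)
    T = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
    recon = np.zeros_like(coeffs)
    for _ in range(passes):
        newly = (np.abs(coeffs) >= T) & (recon == 0)
        recon[newly] = np.sign(coeffs[newly]) * 1.5 * T   # centre of [T, 2T)
        refine = (recon != 0) & ~newly                     # refinement pass
        recon[refine] += np.sign(coeffs[refine] - recon[refine]) * T / 2
        T /= 2
    return recon

c = np.array([63.0, -34.0, 10.0, 3.0])
approx = bitplane_approx(c, passes=3)
# Largest-magnitude coefficients are sent first; after 3 passes every
# reconstructed value is within the final threshold (4) of the original.
```

Truncating the stream after any pass yields a valid coarser reconstruction, which is what makes the bit stream embedded.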

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations