Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
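The deliverable's own descriptors are not public, but the co-occurrence idea it names can be sketched: given a per-pixel map of concept labels, count how often each pair of concepts appears at a fixed spatial offset. The function name and the choice of offset below are illustrative assumptions, not MUCKE's implementation.

```python
import numpy as np

def concept_cooccurrence(label_map, dy=0, dx=1, n_labels=None):
    """Count how often concept i occurs with concept j at spatial offset
    (dy, dx) in a per-pixel concept-label map (non-negative offsets only).
    Illustrative sketch; not the MUCKE descriptors themselves."""
    labels = np.asarray(label_map)
    if n_labels is None:
        n_labels = int(labels.max()) + 1
    h, w = labels.shape
    a = labels[0:h - dy, 0:w - dx].ravel()   # reference pixels
    b = labels[dy:h, dx:w].ravel()           # neighbours at the offset
    m = np.zeros((n_labels, n_labels), dtype=np.int64)
    np.add.at(m, (a, b), 1)                  # accumulate pair counts
    return m
```

Each entry m[i, j] then records how often concept j lies at the given offset from concept i, which is exactly the kind of spatial relationship a bag-of-features representation discards.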
Citations
Journal ArticleDOI
Paul L. Rosin
01 Jul 2003
TL;DR: Several algorithms for calculating ellipticity, rectangularity, and triangularity shape descriptors are described and evaluated by testing on both synthetic and real data.
Abstract: Object classification often operates by making decisions based on the values of several shape properties measured from an image of the object. This paper describes several algorithms (both old and new) for calculating ellipticity, rectangularity, and triangularity shape descriptors. The methods are evaluated by testing on both synthetic and real data.
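Rosin's exact algorithms are given in the cited paper; as a rough illustration of region-based shape descriptors of this kind, here is a simplified rectangularity (using an axis-aligned rather than a minimum-area bounding rectangle) alongside the classic circularity/compactness measure:

```python
import numpy as np

def rectangularity(mask):
    """Shape area divided by the area of its bounding box. Rosin's measure
    uses the minimum-area rotated rectangle; the axis-aligned box here is a
    simplification for illustration."""
    ys, xs = np.nonzero(mask)
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box_area

def circularity(mask, perimeter):
    """Classic compactness: 4*pi*area / perimeter**2 (1.0 for a perfect disc)."""
    return 4 * np.pi * mask.sum() / perimeter ** 2
```

A full rectangle scores 1.0 on rectangularity; the further a shape departs from its bounding box, the lower the score.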

210 citations


Cites background or methods from "Image Processing"

  • ...In addition to the ellipticity, rectangularity, and triangularity measures two sets of moment invariants were considered (those invariant to similarity transforms as well as affine invariants), and the standard shape descriptors of eccentricity, circularity, compactness, and convexity [10]....


  • ...Given these difficulties, a popular approach is to design shape descriptors sensitive to specific aspects of shape such as eccentricity, Euler number, compactness, and convexity [10]....


Journal ArticleDOI
TL;DR: Results indicate that vigorous wrist motion is a useful indicator for identifying the boundaries of eating activities, and that the method should prove useful in the continued development of body-worn sensor tools for monitoring energy intake.
Abstract: This paper is motivated by the growing prevalence of obesity, a health problem affecting over 500 million people. Measurements of energy intake are commonly used for the study and treatment of obesity. However, the most widely used tools rely upon self-report and require a considerable manual effort, leading to underreporting of consumption, noncompliance, and discontinued use over the long term. The purpose of this paper is to describe a new method that uses a watch-like configuration of sensors to continuously track wrist motion throughout the day and automatically detect periods of eating. Our method uses the novel idea that meals tend to be preceded and succeeded by periods of vigorous wrist motion. We describe an algorithm that segments and classifies such periods as eating or noneating activities. We also evaluate our method on a large dataset (43 subjects, 449 total hours of data, containing 116 periods of eating) collected during free-living. Our results show an accuracy of 81% for detecting eating at 1-s resolution in comparison to manually marked event logs of periods of eating. These results indicate that vigorous wrist motion is a useful indicator for identifying the boundaries of eating activities, and that our method should prove useful in the continued development of body-worn sensor tools for monitoring energy intake.
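The paper's trained segmentation-and-classification algorithm is more involved than this, but the core segment-then-classify idea can be sketched as follows. The motion-energy signal and both thresholds are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def segment_by_vigorous_motion(energy, peak_thresh, low_thresh):
    """Toy sketch: split the day at samples of vigorous wrist motion
    (energy > peak_thresh), then flag the in-between segments whose mean
    energy is low as candidate eating periods. Thresholds are illustrative,
    not the paper's trained parameters."""
    peaks = np.flatnonzero(energy > peak_thresh)
    starts = [0, *(peaks + 1).tolist()]      # segments begin after each peak
    ends = [*peaks.tolist(), len(energy)]    # and end at the next peak
    candidates = []
    for s, e in zip(starts, ends):
        if e > s and energy[s:e].mean() < low_thresh:
            candidates.append((s, e))
    return candidates
```

In the real system each candidate segment would then be classified as eating or noneating from richer features, rather than by a single mean-energy rule.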

207 citations


Cites methods from "Image Processing"

  • ...detector using the concept of a hysteresis threshold [32]....

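The hysteresis threshold referred to in the snippet above is the classic Canny-style two-threshold rule: keep everything above a high threshold, plus anything above a low threshold that connects to it. A minimal sketch, with illustrative threshold values:

```python
import numpy as np
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep pixels >= high, plus any pixels >= low that are 8-connected
    (directly or transitively) to a kept pixel."""
    img = np.asarray(img, dtype=float)
    strong = img >= high
    weak = img >= low
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))      # seed the flood fill from strong pixels
    h, w = img.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True       # promote a connected weak pixel
                    q.append((ny, nx))
    return out
```

The point of the two thresholds is that weak responses survive only when they extend a strong response, which suppresses isolated noise without breaking up genuine edges.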

Journal ArticleDOI
TL;DR: CT perfusion is a functional imaging technique that provides important information about capillary-level hemodynamics of the brain parenchyma and is a natural complement to the strengths of unenhanced CT and CT angiography in the evaluation of acute stroke, vasospasm, and other neurovascular disorders.
Abstract: CT perfusion (CTP) is a functional imaging technique that provides important information about capillary-level hemodynamics of the brain parenchyma and is a natural complement to the strengths of unenhanced CT and CT angiography in the evaluation of acute stroke, vasospasm, and other neurovascular disorders. CTP is critical in determining the extent of irreversibly infarcted brain tissue (infarct "core") and the severely ischemic but potentially salvageable tissue ("penumbra"). This is achieved by generating parametric maps of cerebral blood flow, cerebral blood volume, and mean transit time.

203 citations


Cites background from "Image Processing"

  • ...Peripheral blood vessels and perforating arteries should be excluded from CTP maps because they may mimic areas of falsely high perfusion within brain tissue [22]. Kudo et al (2003) [24] evaluated the efficacy of vascular pixel elimination in CTP imaging in comparison with positron-emission tomography (PET)....


  • ...Gray matter typically measures 30-40 HU and white matter 20-30 HU; removal of pixels below 0 HU or above 60-80 HU effectively eliminates bone, fat, and air from unenhanced CT images [22]. Alternatively, thresholds can be based on the actual parametric map values....

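A minimal sketch of the HU-based pixel removal described in the snippet above. The cutoffs follow the quoted 0 and 60-80 HU figures; exact values vary by protocol:

```python
import numpy as np

def brain_parenchyma_mask(hu_slice, lo=0, hi=80):
    """Crude tissue mask for CT perfusion preprocessing: drop pixels below
    `lo` HU (air, fat) and above `hi` HU (bone, vessels with contrast).
    Cutoffs are illustrative and protocol-dependent."""
    hu = np.asarray(hu_slice)
    return (hu > lo) & (hu < hi)
```

Pixels passing the mask fall in the gray/white-matter range, so the parametric maps are computed only over plausible parenchyma.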

01 Jan 2012
TL;DR: In this article, the authors discuss the regulation of mammalian autophagy by microRNAs acting through RNA-induced silencing complexes (RISC), with hsa-miR-376b, BECN1 (Beclin 1), and ATG4C among the key molecules discussed.
Abstract: Keywords: macroautophagy, mammalian autophagy regulation, microRNA, hsa-miR-376b, BECN1, Beclin 1, ATG4C, drug researchAbbreviations: miRNA, microRNA; PtdIns(3)P, phosphatidylinositol 3-phosphate; MAP1LC3, LC3, microtubule-associated protein 1light chain 3; RISC, RNA-induced silencing complexes; miR-376b, hsa-miR-376b; MRE, miRNA responsive element

200 citations

Journal ArticleDOI
TL;DR: It is found that the high thermocapillary force, induced by the high temperature gradient in the laser interaction region, can rapidly eliminate pores from the melt pool during the LPBF process.
Abstract: Laser powder bed fusion (LPBF) is a 3D printing technology that can print metal parts with complex geometries without the design constraints of traditional manufacturing routes. However, the parts printed by LPBF normally contain many more pores than those made by conventional methods, which severely deteriorates their properties. Here, by combining in-situ high-speed high-resolution synchrotron x-ray imaging experiments and multi-physics modeling, we unveil the dynamics and mechanisms of pore motion and elimination in the LPBF process. We find that the high thermocapillary force, induced by the high temperature gradient in the laser interaction region, can rapidly eliminate pores from the melt pool during the LPBF process. The thermocapillary force driven pore elimination mechanism revealed here may guide the development of 3D printing approaches to achieve pore-free 3D printing of metals.

200 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
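The gray-tone spatial-dependence (co-occurrence) features can be sketched as follows, for one offset and one Haralick-style feature (contrast). The input is assumed to be pre-quantized to integer gray tones in [0, levels); Haralick's paper defines many more features over the same matrix.

```python
import numpy as np

def glcm(gray, dy=0, dx=1, levels=8):
    """Normalized gray-tone co-occurrence matrix for one (dy, dx) offset;
    the image is assumed pre-quantized to integers in [0, levels)."""
    g = np.asarray(gray)
    h, w = g.shape
    a = g[0:h - dy, 0:w - dx].ravel()    # reference gray tones
    b = g[dy:h, dx:w].ravel()            # neighbour gray tones
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)
    m = m + m.T                          # symmetrize, as in Haralick's definition
    return m / m.sum()                   # joint probability table

def contrast(p):
    """One Haralick-style feature: expected squared gray-tone difference."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()
```

Smooth textures concentrate probability near the matrix diagonal (low contrast); busy textures spread it away from the diagonal (high contrast).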

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
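Full SPIHT set partitioning is beyond a short sketch, but the bit-plane scheduling it shares with EZW (halving a significance threshold each pass and reporting newly significant coefficients in magnitude order) can be illustrated. The function name and pass count below are illustrative; real SPIHT additionally exploits zerotree self-similarity across wavelet scales:

```python
import numpy as np

def bitplane_passes(coeffs, n_passes=4):
    """Progressive significance passes over wavelet coefficients: start at
    the largest power-of-two threshold, halve it each pass, and record which
    coefficients first become significant at each threshold."""
    c = np.asarray(coeffs, dtype=float).ravel()
    T = 2 ** int(np.floor(np.log2(np.abs(c).max())))
    significant = np.zeros(c.size, dtype=bool)
    schedule = []
    for _ in range(n_passes):
        newly = (~significant) & (np.abs(c) >= T)
        schedule.append((T, np.flatnonzero(newly).tolist()))
        significant |= newly
        T /= 2
    return schedule
```

Because large-magnitude coefficients are emitted first, truncating the stream at any point still yields the best reconstruction available at that bit budget, which is what makes the coding embedded.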

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves, and intact chamber geometry; after reseeding, eight constructs maintained in a bioreactor could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations