Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem manifests itself, for instance, in the inability of existing descriptors to capture spatial relationships between the concepts represented, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
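The deliverable's exact co-occurrence formulation is not given in this abstract, but the basic gray-level co-occurrence matrix underlying such spatial-relationship descriptors is easy to sketch. A minimal version (the function name and the single horizontal offset are illustrative choices, not MUCKE's method):

```python
import numpy as np

def cooccurrence(img, levels, dx=1, dy=0):
    """Count how often gray level j appears at offset (dy, dx)
    from gray level i: a basic gray-level co-occurrence matrix."""
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
M = cooccurrence(img, levels=3)  # counts horizontal neighbor pairs
```

Entry M[i, j] records how often level j sits immediately to the right of level i, so the matrix encodes spatial structure that a plain histogram discards.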
Citations
Book
03 Oct 1988
TL;DR: This chapter discusses two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
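The principles named in this abstract (partial ordering by magnitude, ordered bit-plane transmission) can be illustrated without the full SPIHT tree machinery. A minimal sketch, with zerotree set partitioning and entropy coding deliberately omitted; truncating the embedded stream simply yields a coarser reconstruction:

```python
def embedded_encode(coeffs, n_planes):
    """Emit magnitudes one bit plane at a time, most significant
    plane first: the ordered bit-plane transmission at the heart
    of EZW/SPIHT (set-partitioning trees omitted for brevity)."""
    mags = [abs(c) for c in coeffs]
    return [[(m >> p) & 1 for m in mags]
            for p in range(n_planes - 1, -1, -1)]

def embedded_decode(planes, n_planes):
    """Rebuild magnitudes from however many planes arrived; cutting
    the stream short just quantizes more coarsely."""
    mags = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        p = n_planes - 1 - i
        for j, bit in enumerate(plane):
            mags[j] |= bit << p
    return mags

coeffs = [13, 6, 2, 0]
planes = embedded_encode(coeffs, n_planes=4)
exact = embedded_decode(planes, 4)       # all 4 planes: lossless
coarse = embedded_decode(planes[:2], 4)  # 2 planes: quantized
```

The embedded property is what makes the real codec's rate control trivial: the encoder can stop at any bit and the decoder still produces the best reconstruction available at that rate.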

5,890 citations

Journal ArticleDOI
TL;DR: Decellularizing hearts by coronary perfusion with detergents preserved the underlying extracellular matrix and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....


  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29]....

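The Laplacian pyramid cited in these excerpts builds a bandpass decomposition by subtracting successively blurred copies of the image. A minimal sketch of that idea, assuming a simple 1-2-1 kernel and omitting the subsampling step of the full pyramid:

```python
import numpy as np

def blur(img):
    """Separable 1-2-1 low-pass filter with edge replication."""
    k0, k1 = 0.25, 0.5
    p = np.pad(img, 1, mode='edge')
    tmp = p[:, :-2] * k0 + p[:, 1:-1] * k1 + p[:, 2:] * k0
    return tmp[:-2] * k0 + tmp[1:-1] * k1 + tmp[2:] * k0

def laplacian_levels(img, n_levels):
    """Each level is the image minus its blurred copy: an isotropic
    bandpass, as in a Laplacian pyramid (subsampling omitted)."""
    levels, current = [], img.astype(float)
    for _ in range(n_levels):
        low = blur(current)
        levels.append(current - low)
        current = low
    levels.append(current)  # final low-pass residual
    return levels

img = np.arange(16, dtype=float).reshape(4, 4)
levels = laplacian_levels(img, 3)
```

Because the levels telescope, summing them recovers the original image exactly, which is why the decomposition loses no information.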

Journal ArticleDOI
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles and examines implementation procedures for each technique and discusses its advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....


  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....


  • ...Designers of shape invariants argue that although most other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....


  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....


  • ...Assuming the shape boundary has been represented as a shape signature z(i), the r-th moment m_r and central moment μ_r can be estimated as [14]...

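The cross-ratio listed among the geometric invariants above can be checked numerically: for four collinear points it survives any 1D projective map. A small sketch (the map coefficients are arbitrary, chosen only so the determinant is nonzero):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar
    coordinates; invariant under x -> (p*x + q) / (r*x + s)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x, p, q, r, s):
    """A 1D projective (Moebius) transformation."""
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x, 2.0, 1.0, 1.0, 3.0) for x in pts))
```

Viewpoint changes of a planar scene act projectively on lines in the image, which is exactly why such ratios make usable shape invariants.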
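The signature moments in the last excerpt have a direct implementation: for a signature z(i) sampled at N boundary points, the r-th raw moment is m_r = (1/N) sum z(i)^r and the central moment is mu_r = (1/N) sum (z(i) - m_1)^r. A sketch, using the centroid-distance signature of a circle as the test case:

```python
def moments(z, r):
    """r-th raw and central moments of a shape signature z(i),
    e.g. the centroid-distance function along the boundary."""
    n = len(z)
    m1 = sum(z) / n
    mr = sum(v ** r for v in z) / n
    mur = sum((v - m1) ** r for v in z) / n
    return mr, mur

# the centroid-distance signature of a unit circle is constant,
# so every central moment of it vanishes
z = [1.0] * 8
m2, mu2 = moments(z, 2)
```

Central moments of the signature are translation-invariant by construction, and normalizing by powers of mu_2 additionally removes scale, which is what makes them popular shape descriptors.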

References
Journal ArticleDOI
TL;DR: In this article, the authors investigated cerebellar volumes derived from volumetric magnetic resonance imaging of 37 first-episode patients with schizophrenia, schizophreniform or schizoaffective disorder and 18 healthy controls matched for age, gender and handedness.
Abstract: Recent studies indicate that morphological and functional abnormalities of the cerebellum are associated with schizophrenia. Since the cerebellum is crucial for motor coordination, one may ask whether the respective changes are associated with motor dysfunction in the disease. To test these hypotheses in a clinical study, we investigated cerebellar volumes derived from volumetric magnetic resonance imaging of 37 first-episode patients with schizophrenia, schizophreniform or schizoaffective disorder and 18 healthy controls matched for age, gender and handedness. To control for potential interindividual differences in head size, intracranial volume was entered as a covariate. Neurological soft signs (NSS) were examined after remission of acute symptoms. Compared with the controls, patients had significantly smaller cerebellar volumes for both hemispheres. Furthermore, NSS in patients were inversely correlated with tissue volume of the right cerebellar hemisphere partialling for intracranial volume. No associations were detected between cerebellar volumes and psychopathological measures obtained at hospital admission when patients were in the acute psychotic state or after remission, treatment duration until remission, treatment response or prognostic factors, respectively. These findings support the hypothesis of cerebellar involvement in schizophrenia and indicate that the respective changes are associated with NSS.

159 citations

Journal ArticleDOI
TL;DR: The data suggest that pharmacological chaperoning of nAChRs by nicotine can alter the physiology of ER processes and generate a β2enhanced-ER-export mutant subunit that mimics two regions of the β4 subunit sequence: the presence of an ER export motif and the absence of anER retention/retrieval motif.
Abstract: The up-regulation of α4β2* nicotinic acetylcholine receptors (nAChRs) by chronic nicotine is a cell-delimited process and may be necessary and sufficient for the initial events of nicotine dependence. Clinical literature documents an inverse relationship between a person’s history of tobacco use and his or her susceptibility to Parkinson’s disease; this may also result from up-regulation. This study visualizes and quantifies the subcellular mechanisms involved in nicotine-induced nAChR up-regulation by using transfected fluorescent protein (FP)-tagged α4 nAChR subunits and an FP-tagged Sec24D endoplasmic reticulum (ER) exit site marker. Total internal reflection fluorescence microscopy shows that nicotine (0.1 µM for 48 h) up-regulates α4β2 nAChRs at the plasma membrane (PM), despite increasing the fraction of α4β2 nAChRs that remain in near-PM ER. Pixel-resolved normalized Förster resonance energy transfer microscopy between α4-FP subunits shows that nicotine stabilizes the (α4)2(β2)3 stoichiometry before the nAChRs reach the trans-Golgi apparatus. Nicotine also induces the formation of additional ER exit sites (ERES). To aid in the mechanistic analysis of these phenomena, we generated a β2enhanced-ER-export mutant subunit that mimics two regions of the β4 subunit sequence: the presence of an ER export motif and the absence of an ER retention/retrieval motif. The α4β2enhanced-ER-export nAChR resembles nicotine-exposed nAChRs with regard to stoichiometry, intracellular mobility, ERES enhancement, and PM localization. Nicotine produces only small additional PM up-regulation of α4β2enhanced-ER-export receptors.
The experimental data are simulated with a model incorporating two mechanisms: (1) nicotine acts as a stabilizing pharmacological chaperone for nascent α4β2 nAChRs in the ER, eventually increasing PM receptors despite a bottleneck(s) in ER export; and (2) removal of the bottleneck (e.g., by expression of the β2enhanced-ER-export subunit) is sufficient to increase PM nAChR numbers, even without nicotine. The data also suggest that pharmacological chaperoning of nAChRs by nicotine can alter the physiology of ER processes.

158 citations

Journal ArticleDOI
TL;DR: Fractal analysis provides a quantitative measure of trabeculation and has high reproducibility and accuracy for LVNC diagnosis when compared to current CMR criteria.
Abstract: Left ventricular noncompaction (LVNC) is a myocardial disorder characterized by excessive left ventricular (LV) trabeculae. Current methods for quantification of LV trabeculae have limitations. The aim of this study is to describe a novel technique for quantifying LV trabeculation using cardiovascular magnetic resonance (CMR) and fractal geometry. Observing that trabeculae appear complex and irregular, we hypothesize that measuring the fractal dimension (FD) of the endocardial border provides a quantitative parameter that can be used to distinguish normal from abnormal trabecular patterns. Fractal analysis is a method of quantifying complex geometric patterns in biological structures. The resulting FD is a unitless index of how completely the object fills space. FD increases with increased structural complexity. LV FD was measured using a box-counting method on CMR short-axis cine stacks. Three groups were studied: LVNC (defined by Jenni criteria), n=30 (age, 41±13; men, 16); healthy whites, n=75 (age, 46±16; men, 36); healthy blacks, n=30 (age, 40±11; men, 15). In healthy volunteers, FD varied in a characteristic pattern from base to apex along the LV. This pattern was altered in LVNC, where apical FD was abnormally elevated. In healthy volunteers, blacks had higher FD than whites in the apical third of the LV (maximal apical FD: 1.253±0.005 vs. 1.235±0.004, p<0.01) (mean±s.e.m.). Comparing LVNC with healthy volunteers, maximal apical FD was higher in LVNC (1.392±0.010, p<0.00001). The fractal method was more accurate and reproducible (ICC, 0.97 and 0.96 for intra- and inter-observer readings) than two other CMR criteria for LVNC (Petersen and Jacquier). FD is higher in LVNC patients compared to healthy volunteers and is higher in healthy blacks than in whites. Fractal analysis provides a quantitative measure of trabeculation and has high reproducibility and accuracy for LVNC diagnosis when compared to current CMR criteria.
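The box-counting method this abstract relies on admits a compact sketch: cover a binary image with boxes of increasing size, count the occupied boxes at each scale, and read the fractal dimension off the slope of the log-log fit. The grid sizes and test masks below are illustrative, not the study's protocol:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image by fitting
    log(count) ~ -FD * log(box size) over several box sizes."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # partition the (trimmed) image into s-by-s boxes and
        # count boxes containing at least one set pixel
        sub = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(sub.any(axis=(1, 3))))
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

square = np.ones((16, 16), dtype=bool)   # fills the plane
line = np.zeros((16, 16), dtype=bool)
line[8] = True                           # fills a single line
fd_square = box_count_dimension(square)  # expect ~2
fd_line = box_count_dimension(line)      # expect ~1
```

A smooth endocardial border would score near 1, while a heavily trabeculated one pushes the dimension upward, which is the contrast the study exploits.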

157 citations

Journal ArticleDOI
TL;DR: A new convexity measure for planar regions bounded by polygons is defined and evaluated, and is found to be more sensitive to measured boundary defects than the so-called "area-based" convexity measures.
Abstract: Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and in accordance with this it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
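The paper's own boundary-based measure is not reproduced in this abstract, but the classical perimeter-ratio measure it is compared against is easy to state: the Euclidean perimeter of the convex hull divided by the perimeter of the shape, which equals 1 exactly when the shape is convex. A sketch with a hand-picked notched square and its (known) hull; computing the hull itself is left out:

```python
import math

def perimeter(poly):
    """Euclidean perimeter of a closed polygon of (x, y) vertices."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def perimeter_convexity(poly, hull):
    """Classical perimeter-ratio convexity: hull perimeter over
    shape perimeter; 1 iff the shape is convex, below 1 otherwise."""
    return perimeter(hull) / perimeter(poly)

# a 4x4 square with a triangular notch cut into its top edge
notched = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
hull = [(0, 0), (4, 0), (4, 4), (0, 4)]
c = perimeter_convexity(notched, hull)  # strictly between 0 and 1
```

The notch lengthens the shape's boundary without changing the hull, so the ratio drops below 1, illustrating the boundary sensitivity the abstract emphasizes.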

156 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed operator for color image compression can have output picture quality acceptable to human eyes, and the proposed edge operator can detect the color edge at the subpixel level.
Abstract: This paper presents a new moment-preserving thresholding technique, called the binary quaternion-moment-preserving (BQMP) thresholding, for color image data. Based on representing color data by the quaternions, the statistical parameters of color data can be expressed through the definition of quaternion moments. Analytical formulas of the BQMP thresholding can thus be determined by using the algebra of the quaternions. The computation time for the BQMP thresholding is of order of the data size. By using the BQMP thresholding, quaternion-moment-based operators are designed for the application of color image processing, such as color image compression, multiclass clustering of color data, and subpixel color edge detection. The experimental results show that the proposed operator for color image compression can have output picture quality acceptable to human eyes. In addition, the proposed edge operator can detect the color edge at the subpixel level. Therefore, the proposed BQMP thresholding can be used as a tool for color image processing.
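The BQMP quaternion algebra is beyond a short sketch, but the moment-preserving principle it builds on can be shown in its scalar (grayscale) form: choose two representative levels z0 < z1 and a split fraction p0 so that the thresholded data keeps the first three moments of the input. The quantile-based threshold selection below is an illustrative assumption, not the paper's color algorithm:

```python
import numpy as np

def moment_preserving_threshold(pixels):
    """Scalar moment-preserving thresholding: pick z0 < z1 and a
    fraction p0 preserving the first three raw moments, then send
    the darkest fraction p0 of pixels to z0."""
    x = np.asarray(pixels, dtype=float).ravel()
    m1, m2, m3 = (np.mean(x ** r) for r in (1, 2, 3))
    # z0, z1 are the roots of z^2 + c1*z + c0 = 0, where c0, c1
    # solve the 2x2 linear system built from the moments
    c0, c1 = np.linalg.solve([[1.0, m1], [m1, m2]], [-m2, -m3])
    disc = np.sqrt(c1 ** 2 - 4 * c0)
    z0, z1 = (-c1 - disc) / 2, (-c1 + disc) / 2
    p0 = (z1 - m1) / (z1 - z0)
    threshold = np.quantile(x, p0)
    return threshold, z0, z1

t, z0, z1 = moment_preserving_threshold([0] * 5 + [10] * 5)
```

On cleanly bimodal data the recovered levels coincide with the two modes, and the threshold falls between them; the BQMP paper carries the same construction into quaternion moments of color data.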

155 citations