Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
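The co-occurrence idea mentioned in the abstract can be sketched in a few lines. This is a hedged illustration of the general technique, not the deliverable's actual construction: it counts horizontally adjacent pairs of discrete labels (e.g. quantized visual words) on a 2-D grid, capturing spatial structure that a plain bag-of-words histogram discards.

```python
# Hedged sketch: co-occurrence matrix over a grid of discrete labels
# (e.g. quantized visual words). Illustrative only -- the deliverable's
# actual construction may use other neighbourhoods or normalizations.

def cooccurrence(grid, n_labels):
    """Count horizontally adjacent (left, right) label pairs in a 2-D grid."""
    M = [[0] * n_labels for _ in range(n_labels)]
    for row in grid:
        for a, b in zip(row, row[1:]):
            M[a][b] += 1
    return M
```

Two images with identical label histograms but different layouts produce different matrices, which is exactly the spatial information the abstract says plain low-level descriptors miss.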
Citations
Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
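The ordering principles the abstract describes (partial ordering by magnitude, ordered bit-plane transmission) can be illustrated with a minimal sketch. This is not SPIHT itself: it omits the zerotree and set-partitioning structures that make the real coders efficient, and simply transmits significance and refinement bits from the most significant magnitude plane down.

```python
import math

# Hedged sketch of progressive bit-plane coding, the ordering principle
# behind EZW/SPIHT. Assumes at least one nonzero integer coefficient;
# exact when run down to plane 1.

def encode_bitplanes(coeffs):
    """Emit significance and refinement symbols from the MSB plane down."""
    T = 1 << int(math.log2(max(abs(c) for c in coeffs)))
    stream, significant = [], set()
    while T >= 1:
        newly = []
        for i, c in enumerate(coeffs):        # significance (sorting) pass
            if i not in significant and abs(c) >= T:
                newly.append(i)
                stream.append(("sig", i, T, c >= 0))
        for i in sorted(significant):         # refinement pass
            stream.append(("ref", i, T, (abs(coeffs[i]) // T) % 2 == 1))
        significant.update(newly)             # refine these from the next plane
        T >>= 1
    return stream

def decode_bitplanes(stream, n):
    """Rebuild coefficients; truncating the stream gives a coarser result."""
    mag, sign = [0] * n, [1] * n
    for kind, i, T, bit in stream:
        if kind == "sig":
            mag[i], sign[i] = T, (1 if bit else -1)
        else:                                  # add this plane's magnitude bit
            mag[i] += T if bit else 0
    return [s * m for s, m in zip(sign, mag)]
```

Truncating `stream` at any point still decodes to a valid coarser approximation, which is the embedded property the abstract refers to.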

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations


Cites methods from "Image Processing"

  • ...system makes use of an isotropic bandpass decomposition derived from application of Laplacian of Gaussian filters [25], [29] to the image data....

  • ...In practice, the filtered image is realized as a Laplacian pyramid [8], [29]....

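The Laplacian pyramid and Laplacian-of-Gaussian decomposition referenced in the snippets above can be sketched in one dimension. This is a hedged illustration: the binomial kernel, linear-interpolation expand step, and border clamping are simplified assumptions, not the cited implementation.

```python
# Hedged 1-D sketch of one Laplacian pyramid level: low-pass with a binomial
# kernel (approximating a Gaussian), downsample, re-expand, subtract.

KERNEL = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]  # binomial ~ Gaussian

def blur(sig):
    n = len(sig)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(KERNEL):
            j = min(max(i + k - 2, 0), n - 1)   # clamp indices at the borders
            acc += w * sig[j]
        out.append(acc)
    return out

def reduce_(sig):
    """Blur, then keep every other sample (one pyramid level down)."""
    return blur(sig)[::2]

def expand(sig, n):
    """Upsample back to length n by linear interpolation."""
    out = []
    for i in range(n):
        t = i / 2.0
        lo = min(int(t), len(sig) - 1)
        hi = min(lo + 1, len(sig) - 1)
        out.append(sig[lo] + (t - lo) * (sig[hi] - sig[lo]))
    return out

def laplacian_level(sig):
    """Return (bandpass detail, coarse level); sig == detail + expand(coarse)."""
    g = reduce_(sig)
    return [s - e for s, e in zip(sig, expand(g, len(sig)))], g
```

The bandpass detail signal plays the role of the isotropic Laplacian-of-Gaussian response in the cited iris system, and the construction is losslessly invertible by adding back the expanded coarse level.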
Journal ArticleDOI
TL;DR: This paper identifies some promising techniques for image retrieval according to standard principles, examines implementation procedures for each technique, and discusses its advantages and disadvantages.

1,910 citations


Cites background or methods from "Image Processing"

  • ...Structural description of chromosome shape (reprinted from [14])....

  • ...Common invariants include (i) geometric invariants such as cross-ratio, length ratio, distance ratio, angle, area [69], triangle [70], invariants from coplanar points [14]; (ii) algebraic invariants such as determinant, eigenvalues [71], trace [14]; (iii) differential invariants such as curvature, torsion and Gaussian curvature....

  • ...Designers of shape invariants argue that although most other shape representation techniques are invariant under similarity transformations (rotation, translation and scaling), they depend on viewpoint [14]....

  • ...The extraction of the convex hull can use both the boundary tracing method [14] and morphological methods [11,15]....

  • ...Assuming the shape boundary has been represented as a shape signature z(i), the rth moment mr and central moment μr can be estimated as [14]...

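The moment estimates the last snippet refers to are simple averages over the shape signature. A hedged sketch (the exact normalization used in the survey's reference [14] may differ): given a signature z(i), e.g. centroid distance sampled at N boundary points, the rth raw and central moments are:

```python
# Hedged sketch of boundary-signature moments: m_r is the rth raw moment,
# mu_r the rth central moment, both averaged over the N signature samples.

def moments(z, r):
    n = len(z)
    m1 = sum(z) / n                             # mean of the signature
    mr = sum(v ** r for v in z) / n             # rth raw moment m_r
    mur = sum((v - m1) ** r for v in z) / n     # rth central moment mu_r
    return mr, mur
```

Ratios of such moments are commonly used as shape descriptors because they are insensitive to where the boundary traversal starts.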
References
Journal ArticleDOI
TL;DR: With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
Abstract: Objectives: To evaluate the role of high resolution quantitative PET/CT with MRI for early monitoring of treatment response to TNF-α inhibitor in human patients with rheumatoid arthritis (RA). Methods: A PET/CT extremity scanner has been built at our institution. High resolution fused and co-registered PET/CT images of the wrist are obtained from this device. The patient lies prone on top of the PET/CT gantry, with the most symptomatic hand (locked by an immobilizer) suspended through a hole on the table-top. The hand immobilizer is compatible with MRI, allowing the acquisition of clinical MR images at the same time points as extremity PET/CT. To date, three RA patients who were candidates for TNF-α blocker etanercept have been scanned with FDG-PET/CT and MRI at baseline and two at one month after initiation of treatment. This is a 10 patient study and recruitment is ongoing. Results: For patient 1 (57f) at 1 month, marked reduction compared to baseline of 20-40% in maximum PET signal intensity (Imax) in the synovium and at sites of erosions, and of 82% in PET metabolic synovial volume was measured. MRI contrast enhancement correlated with this finding. This patient was hence classified as a responder to the drug. Rheumatologist examination at the end of three months confirmed this finding. For patient 2 (63f), an increase of 20-30% in Imax was measured in the synovium and at erosion sites at 1 month compared to baseline. We predict that this patient is a non-responder to the drug. Clinical examination for this patient is pending. Conclusions: From initial studies, high resolution FDG-PET/CT with MRI shows significant promise for quantitative monitoring of early response to anti-TNF-α therapy in RA. Research Support: This work was funded by the NIH grants UL1-RR024146, R01CA129561, R01EB002138, and the UC Davis Imaging Research Center

20 citations

Journal ArticleDOI
01 Dec 2008
TL;DR: This paper performs both tracking of multiple objects in a meeting scenario and online learning to incrementally update the models of the tracked objects, accounting for appearance changes during tracking.
Abstract: Recently, much work has been done on multiple object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario, and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average of the standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
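The propagate / weight / resample loop that the abstract relates to genetic algorithms can be shown with a minimal particle filter. This hedged sketch tracks a static 1-D state, which is far simpler than the paper's face tracker; all constants (noise levels, particle count, state range) are illustrative assumptions.

```python
import math
import random

# Hedged, minimal particle filter for a static 1-D state. The three steps per
# observation mirror the generic particle-filter loop: diffuse (propagate),
# score against the measurement (weight), then draw a new generation
# proportionally to the scores (resample).

def track(observations, n_particles=500, obs_std=0.5, motion_std=0.2, seed=0):
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    for z in observations:
        # propagate: diffuse each particle with motion noise
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # weight: Gaussian observation likelihood around the measurement z
        weights = [math.exp(-((z - p) ** 2) / (2.0 * obs_std ** 2))
                   for p in particles]
        # resample: draw a new particle set proportionally to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles      # posterior mean estimate
```

The resampling step is what resembles selection in a genetic algorithm: high-likelihood particles are duplicated while low-likelihood ones die out.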

20 citations

Journal ArticleDOI
TL;DR: Quantitative measurements and computational modelling suggest that dilution of the cytokine Unpaired is a plausible mechanism to explain growth control in the Drosophila eye disc.
Abstract: A fundamental question in developmental biology is how organ size is controlled. We have previously shown that the area growth rate in the Drosophila eye primordium declines inversely proportionally to the increase in its area. How the observed reduction in the growth rate is achieved is unknown. Here, we explore the dilution of the cytokine Unpaired (Upd) as a possible candidate mechanism. In the developing eye, upd expression is transient, ceasing at the time when the morphogenetic furrow first emerges. We confirm experimentally that the diffusion and stability of the JAK/STAT ligand Upd are sufficient to control eye disc growth via a dilution mechanism. We further show that sequestration of Upd by ectopic expression of an inactive form of the receptor Domeless (Dome) results in a substantially lower growth rate, but the area growth rate still declines inversely proportionally to the area increase. This growth rate-to-area relationship is no longer observed when Upd dilution is prevented by the continuous, ectopic expression of Upd. We conclude that a mechanism based on the dilution of the growth modulator Upd can explain how growth termination is controlled in the eye disc.

20 citations

Proceedings Article
27 Apr 2018
TL;DR: A novel deep hashing method, called supervised hierarchical deep hashing (SHDH), is proposed to perform hash code learning for hierarchical labeled data by weighting each level, with a deep neural network designed to obtain a hash code for each data point.
Abstract: Recently, hashing methods have been widely used in large-scale image retrieval. However, most existing supervised hashing methods do not consider the hierarchical relation of labels, which means that they ignore the rich semantic information stored in the hierarchy. Moreover, most previous works treat each bit in a hash code equally, which does not suit the scenario of hierarchical labeled data. To tackle the aforementioned problems, in this paper, we propose a novel deep hashing method, called supervised hierarchical deep hashing (SHDH), to perform hash code learning for hierarchical labeled data. Specifically, we define a novel similarity formula for hierarchical labeled data by weighting each level, and design a deep neural network to obtain a hash code for each data point. Extensive experiments on two real-world public datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.
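The level-weighted similarity idea can be illustrated with a hedged sketch; the paper's actual SHDH formula is not reproduced here. Labels are treated as paths from the root of the hierarchy, and each level contributes its weight for as long as the two paths agree:

```python
# Hedged sketch of a level-weighted similarity for hierarchically labeled
# data. weights[l] is the contribution of level l; deeper agreement
# (e.g. same species, not just same family) yields higher similarity.

def hierarchical_similarity(path_a, path_b, weights):
    sim = 0.0
    for w, (a, b) in zip(weights, zip(path_a, path_b)):
        if a != b:
            break       # below the first disagreement nothing matches
        sim += w
    return sim
```

With decreasing weights per level, two items sharing only a coarse category score lower than two items agreeing all the way down, which is the semantic information a flat label similarity ignores.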

20 citations

Book ChapterDOI
11 Jul 2017
TL;DR: This paper investigates enhancement of monochromatic medical modalities into colorized images, where improving the contrast of anatomical structures facilitates precise segmentation, and proposes a framework with pre-processing to remove noise and improve edge information.
Abstract: Medical images contain precious anatomical information for clinical procedures. Improved understanding of medical modalities may contribute significantly to the arena of medical image analysis. This paper investigates enhancement of monochromatic medical modalities into colorized images. Improving the contrast of anatomical structures facilitates precise segmentation. The proposed framework starts with pre-processing to remove noise and improve edge information. Then colour information is embedded into each pixel of a subject image. The resulting image has the potential to portray better anatomical information than a conventional monochromatic image. To evaluate the performance of the colorized medical modality, the structural similarity index and the peak signal-to-noise ratio are computed. The superiority of the proposed colorization is validated by segmentation experiments and compared with greyscale monochromatic images.
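Of the two evaluation metrics named in the abstract, PSNR is the simpler; a hedged sketch follows (images flattened to equal-length lists, `max_val` assuming 8-bit data; SSIM is more involved and omitted here):

```python
import math

# Hedged sketch of the peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE),
# where MSE is the mean squared error between reference and test images.

def psnr(ref, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0.0:
        return float("inf")                  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values indicate the colorized result stays closer to the reference; identical images give infinite PSNR.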

20 citations