Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
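The co-occurrence idea the abstract appeals to can be sketched in a few lines. This is a generic gray-level co-occurrence count, not MUCKE's actual descriptor (which the abstract does not specify); the quantization level and offset are illustrative choices:

```python
import numpy as np

def cooccurrence_matrix(img, levels=8, dx=1, dy=0):
    """Count how often quantized gray level i occurs at offset (dy, dx)
    from gray level j. A minimal sketch of the co-occurrence idea."""
    q = (img.astype(int) * levels) // 256  # quantize 0..255 into `levels` bins
    h, w = q.shape
    C = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C

# A smooth horizontal gradient: neighbouring pixels almost always share a bin,
# so the co-occurrence mass concentrates on the diagonal.
img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))
C = cooccurrence_matrix(img)
```

The off-diagonal entries encode how often *different* concepts (here, gray levels) sit next to each other, which is exactly the spatial relationship a plain histogram discards.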
Citations
Proceedings Article
20 Feb 2017
TL;DR: Label distribution learning forests (LDLFs) as discussed by the authors is a label distribution learning algorithm based on differentiable decision trees, which have the potential to model any general form of label distributions by a mixture of leaf node predictions.
Abstract: Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
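The core LDLF prediction step, a label distribution formed as a mixture of leaf-node predictions weighted by soft routing probabilities, can be sketched with numpy. All parameter values below are random placeholders; in the paper they are learned jointly with deep features, which this sketch omits:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# A depth-2 soft tree: 3 split nodes, 4 leaves, 6 labels (all sizes illustrative).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))            # split-node weights on a 5-d feature
leaf_logits = rng.normal(size=(4, 6))  # one unnormalised distribution per leaf

def predict_distribution(f):
    d = 1.0 / (1.0 + np.exp(-W @ f))   # soft routing decision at each split node
    # Probability of reaching each leaf: product of decisions along its path
    # (left child taken with probability d, right with 1 - d).
    pi = np.array([d[0] * d[1], d[0] * (1 - d[1]),
                   (1 - d[0]) * d[2], (1 - d[0]) * (1 - d[2])])
    q = softmax(leaf_logits, axis=1)   # per-leaf label distributions
    return pi @ q                      # mixture of leaf predictions

p = predict_distribution(rng.normal(size=5))
```

Because the routing probabilities sum to 1 and each leaf holds a valid distribution, the mixture is itself a valid label distribution, which is what lets the tree model "any general form of label distributions" as the TL;DR states.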

62 citations

Journal ArticleDOI
TL;DR: It is demonstrated, for the first time, that ultrasound can noninvasively and nondestructively monitor and evaluate the phase inversion process of in situ forming drug delivery implants, and that this formation process can be directly related to the initial phase of drug release.

62 citations


Cites methods from "Image Processing"

  • ...For in vivo analysis, the implants were manually segmented (five images for each implant), and a threshold value was selected using the method of mixed Gaussians after the ROI was isolated in order to remove low intensity noise....


  • ...First, for in vitro analysis the region of interest (ROI) was isolated by using a parametric intensity based segmentation method of mixed Gaussians [30]....

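The "method of mixed Gaussians" thresholding both excerpts cite can be illustrated with a small two-component EM fit on synthetic intensities. The details below (two components, crude initialisation, fixed iteration count, midpoint threshold) are assumptions for illustration, not the cited implementation:

```python
import numpy as np

# Synthetic ROI intensities: a low-intensity noise mode and a signal mode.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(30, 5, 500),     # noise mode
                         rng.normal(150, 20, 500)])  # signal mode

# Minimal 1-D two-component Gaussian-mixture EM.
mu = np.array([pixels.min(), pixels.max()])          # crude initialisation
sigma = np.array([pixels.std(), pixels.std()])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each pixel
    pdf = w * np.exp(-0.5 * ((pixels[:, None] - mu) / sigma) ** 2) / sigma
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    w = r.mean(axis=0)
    mu = (r * pixels[:, None]).sum(axis=0) / r.sum(axis=0)
    sigma = np.sqrt((r * (pixels[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

threshold = mu.mean()        # midpoint between the two fitted means
mask = pixels > threshold    # drop low-intensity noise below the threshold
```

Fitting the mixture and thresholding between the two means separates the noise mode from the signal mode without hand-picking a cutoff.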

Journal ArticleDOI
TL;DR: It is concluded that Lef1 function is not required for initial primordium organization or migration, but is necessary for proto-NM renewal during later stages of pLL formation; these findings reveal a novel role for the Wnt signaling pathway during mechanosensory organ formation in zebrafish.
Abstract: The zebrafish posterior lateral line (pLL) is a sensory system that comprises clusters of mechanosensory organs called neuromasts (NMs) that are stereotypically positioned along the surface of the trunk. The NMs are deposited by a migrating pLL primordium, which is organized into polarized rosettes (proto-NMs). During migration, mature proto-NMs are deposited from the trailing part of the primordium, while progenitor cells in the leading part give rise to new proto-NMs. Wnt signaling is active in the leading zone of the primordium and global Wnt inactivation leads to dramatic disorganization of the primordium and a loss of proto-NM formation. However, the exact cellular events that are regulated by the Wnt pathway are not known. We identified a mutant strain, lef1nl2, that contains a lesion in the Wnt effector gene lef1. lef1nl2 mutants lack posterior NMs and live imaging reveals that rosette renewal fails during later stages of migration. Surprisingly, the overall primordium patterning, as assayed by the expression of various markers, appears unaltered in lef1nl2 mutants. Lineage tracing and mosaic analyses revealed that the leading cells (presumptive progenitors) move out of the primordium and are incorporated into NMs; this results in a decrease in the number of proliferating progenitor cells and eventual primordium disorganization. We concluded that Lef1 function is not required for initial primordium organization or migration, but is necessary for proto-NM renewal during later stages of pLL formation. These findings revealed a novel role for the Wnt signaling pathway during mechanosensory organ formation in zebrafish.

61 citations


Additional excerpts

  • ...Images were processed using ImageJ software (Abramoff et al., 2004)....


Journal ArticleDOI
TL;DR: The formation and division of MPs and multiple roles for Notch signaling in midline cell development are described, providing a foundation for comprehensive molecular analyses.
Abstract: The study of how transcriptional control and cell signaling influence neurons and glia to acquire their differentiated properties is fundamental to understanding CNS development and function. The Drosophila CNS midline cells are an excellent system for studying these issues because they consist of a small population of diverse cells with well-defined gene expression profiles. In this paper, the origins and differentiation of midline neurons and glia were analyzed. Midline precursor (MP) cells each divide once giving rise to two neurons; here, we use a combination of single-cell gene expression mapping and time-lapse imaging to identify individual MPs, their locations, movements and stereotyped patterns of division. The role of Notch signaling was investigated by analyzing 37 midline-expressed genes in Notch pathway mutant and misexpression embryos. Notch signaling had opposing functions: it inhibited neurogenesis in MP1,3,4 and promoted neurogenesis in MP5,6. Notch signaling also promoted midline glial and median neuroblast cell fate. This latter result suggests that the median neuroblast resembles brain neuroblasts that require Notch signaling, rather than nerve cord neuroblasts, the formation of which is inhibited by Notch signaling. Asymmetric MP daughter cell fates also depend on Notch signaling. One member of each pair of MP3-6 daughter cells was responsive to Notch signaling. By contrast, the other daughter cell asymmetrically acquired Numb, which inhibited Notch signaling, leading to a different fate choice. In summary, this paper describes the formation and division of MPs and multiple roles for Notch signaling in midline cell development, providing a foundation for comprehensive molecular analyses.

61 citations


Cites methods from "Image Processing"

  • ...Image processing with ImageJ....


  • ...Cas staining intensity was measured using the Mean Gray Value (MGV) function of ImageJ (Abramoff et al., 2004)....


Proceedings ArticleDOI
23 Jan 2004
TL;DR: The issues encountered and problems addressed in the MEMORIAL project are presented, whose goal is the establishment of a digital document workbench enabling the creation of distributed virtual archives based on documents existing in libraries, archives, museums, memorials, and public record offices.
Abstract: Complete collections of invaluable documents of unique historical and political significance are decaying and at the same time they are virtually inaccessible, necessitating the invention of robust and efficient methods for their conversion into a searchable electronic form. We present the issues encountered and problems addressed in the MEMORIAL project, whose goal is the establishment of a digital document workbench enabling the creation of distributed virtual archives based on documents existing in libraries, archives, museums, memorials, and public record offices. Successful approaches are described in the context of the chosen data class: a variety of typewritten documents containing personal information relating to the presence of individuals in World War II Nazi concentration camps.

61 citations


Cites methods from "Image Processing"

  • ...This initial decision was made after consideration of a number of alternatives (including variants of histogram equalisation techniques [3] and Weszka and Rosenfeld’s [4] approach)....

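Histogram equalisation, one of the preprocessing alternatives the excerpt mentions, in its textbook form. The image below is a synthetic low-contrast example, not data from the MEMORIAL project:

```python
import numpy as np

def equalize(img):
    """Classic histogram equalisation: map each gray level through the
    normalised cumulative histogram so intensities spread over 0..255."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # smallest nonzero cumulative count
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast image: all values crowded into 100..119.
img = (np.arange(10000).reshape(100, 100) % 20 + 100).astype(np.uint8)
out = equalize(img)
```

For faded typewritten documents this stretches the crowded intensity range so that faint type separates from the paper background, which is why it was a natural candidate to consider.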

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
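Two of the "easily computable" textural features derived from a normalised gray-tone co-occurrence matrix, energy (angular second moment) and contrast, sketched on toy matrices; the feature names follow the paper, while the input matrices here are synthetic:

```python
import numpy as np

def haralick_features(C):
    """Energy and contrast of a gray-tone co-occurrence count matrix C."""
    P = C / C.sum()                      # normalise counts to probabilities
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()              # high when mass is concentrated
    contrast = ((i - j) ** 2 * P).sum()  # weight by squared gray-level distance
    return energy, contrast

flat = np.eye(4) * 10   # co-occurrences only between equal levels: no contrast
noisy = np.ones((4, 4)) # co-occurrences spread evenly: high contrast
e1, c1 = haralick_features(flat)
e2, c2 = haralick_features(noisy)
```

A smooth texture concentrates co-occurrences on the diagonal (high energy, zero contrast), while a busy texture spreads them out, which is what makes these scalars usable inputs to the decision rules described in the abstract.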

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
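The ordered bit-plane transmission principle can be shown in miniature: sending the most significant bit-planes of the coefficient magnitudes first yields a reconstruction that is progressively refined as more planes arrive. This sketches only that principle on toy values, not SPIHT's set partitioning of hierarchical trees or its entropy coding:

```python
import numpy as np

coeffs = np.array([63, -34, 49, 10, 7, 13, -12, 7])  # toy transform coefficients
signs = np.sign(coeffs)
mags = np.abs(coeffs)

n = int(np.floor(np.log2(mags.max())))  # index of the top bit-plane
recon = np.zeros_like(mags, dtype=float)
for plane in range(n, n - 3, -1):       # "transmit" only the top three planes
    bit = (mags >> plane) & 1           # this plane's bit for every coefficient
    recon += bit * (1 << plane)         # decoder accumulates plane by plane
recon = signs * recon
```

Truncating after any plane still leaves a usable approximation; here the residual error is bounded by the untransmitted planes (at most 2^3 - 1 = 7), which is the embedded property that lets the bit stream be cut at any rate.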

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations