Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
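
The co-occurrence idea can be made concrete with a small sketch (not the MUCKE pipeline itself): local descriptors are quantized into visual words, and the representation records how often pairs of words appear within a spatial neighbourhood, so that coarse spatial relationships survive quantization. The descriptor choice (ORB), the per-image codebook, and the 40-pixel radius are illustrative assumptions.

```python
# Illustrative sketch (not the MUCKE method): quantize local descriptors into
# "visual words", then count how often pairs of words co-occur within a spatial
# radius, giving a co-occurrence matrix that keeps coarse spatial layout.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def cooccurrence_descriptor(gray, n_words=32, radius=40):
    orb = cv2.ORB_create(nfeatures=500)            # any local descriptor would do
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None or len(keypoints) < n_words:
        return np.zeros((n_words, n_words))
    # Quantize descriptors into visual words (per-image codebook, for brevity).
    words = KMeans(n_clusters=n_words, n_init=4).fit_predict(
        descriptors.astype(np.float32))
    pts = np.array([kp.pt for kp in keypoints])
    cooc = np.zeros((n_words, n_words))
    # Count pairs of words whose keypoints lie within `radius` pixels of each other.
    for i in range(len(pts)):
        dist = np.linalg.norm(pts - pts[i], axis=1)
        for j in np.where((dist > 0) & (dist < radius))[0]:
            cooc[words[i], words[j]] += 1
    total = cooc.sum()
    return cooc / total if total else cooc

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
if img is not None:
    print(cooccurrence_descriptor(img).shape)            # (32, 32) feature matrix
```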
Citations
Book ChapterDOI
Nicholas Molton, Stephen Se, David Lee, Penny Probert, Michael Brady
01 Jan 1998
TL;DR: This paper describes ongoing work into a portable mobility aid, worn by the visually impaired, that uses stereo vision and sonar sensors for obstacle avoidance and recognition of kerbs.
Abstract: This paper describes ongoing work into a portable mobility aid, worn by the visually impaired. The system uses stereo vision and sonar sensors for obstacle avoidance and recognition of kerbs. Because the device is carried, the user is given freedom of movement over kerbs, stairs and rough ground, not traversable with a wheeled aid. Motion of the sensor due to the walking action is measured using a digital compass and inclinometer. This motion has been modelled and is tracked to allow compensation of sensor measurements. The vision obstacle detection method uses comparison of image feature disparity with a ground feature disparity function. The disparity function is continually updated by the walk-motion model and by scene ground-plane fitting. Kerb detection is achieved by identifying clusters of parallel lines using the Hough transform. Experimental results are presented from the vision and sonar parts of the system.
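
The kerb cue lends itself to a short illustration. The sketch below, which assumes OpenCV and illustrative parameter values rather than the authors' implementation, runs the standard Hough transform on an edge map and checks whether many detected lines share nearly the same angle, i.e. form the cluster of parallel lines treated as evidence for a kerb.

```python
# Minimal sketch of the kerb-evidence cue: detect lines with the Hough transform
# and look for a dominant cluster of near-parallel angles. Parameters and the
# image file name are illustrative assumptions.
import numpy as np
import cv2

img = cv2.imread("street.png", cv2.IMREAD_GRAYSCALE)     # placeholder file name
if img is not None:
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)     # (rho, theta) per line
    if lines is not None:
        thetas = lines[:, 0, 1]                           # line angles in radians
        # Histogram the angles; a dominant bin means many near-parallel lines,
        # which the paper treats as evidence for a kerb.
        hist, bin_edges = np.histogram(thetas, bins=36, range=(0, np.pi))
        peak = hist.argmax()
        print(f"{hist[peak]} of {len(thetas)} lines near "
              f"{np.degrees(bin_edges[peak]):.0f} degrees")
```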

14 citations


Cites methods from "Image Processing"

  • ...As a result, parallel lines in the world project to parallel lines in the image, so we can use the Hough Transform [Sonka et al., 1993] to detect clusters of parallel lines in the image as evidence for a kerb....


Dissertation
01 Dec 2003
TL;DR: In this paper, the authors examine the technology of THz pulsed imaging, together with the imaging modalities that are employed and the type of data that are acquired, and show that clustering algorithms in time, frequency, and time-frequency feature spaces have potential application in the segmentation of THz images into their constituent regions.
Abstract: Terahertz (THz) radiation is abundant in the natural world yet very hard to harness in the laboratory. Forming the boundary between 'radio' and 'light', the so-called "terahertz gap" results from the failure of optical techniques to operate below a few hundred terahertz, and likewise the failure of electronic methods to operate above a few hundred gigahertz. However, recent advances in opto-electronic and semiconductor technology have enabled bright THz radiation to be coherently generated and detected, and THz imaging systems are now commercially available, if still very expensive. Terahertz pulsed imaging data are unusual in that an entire time series lies 'behind' every pixel of the image. While resulting in rich data sets, this high dimensionality necessitates some form of distillation or extraction of pertinent features before images can be formed. Within this thesis the technology of THz pulsed imaging is examined, together with the imaging modalities that are employed and the type of data that are acquired. The sources of noise are categorised, and it is demonstrated that this noise can be modelled by the family of stable distributions, but that it is neither normally distributed nor distributed according to a simple mixture of Gaussians. Joint time-frequency techniques such as those used in RADAR or ultrasound - windowed Fourier transforms and wavelet transforms - are applied to THz data, and are shown to be appropriate tools for analysing and processing THz pulses, particularly in signal compression. Finally, clustering algorithms in time, frequency, and time-frequency based feature spaces demonstrate that such tools have potential application in the segmentation of THz images into their constituent regions. The analyses herein improve our understanding of the nature of THz data, and the techniques developed are steps along the road to moving THz imaging into real-world applications, such as dental and medical imaging and diagnosis.
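
As a rough illustration of the joint time-frequency tools discussed, the sketch below applies a windowed Fourier transform and a wavelet decomposition to a synthetic noisy pulse; the sampling rate, carrier frequency, and wavelet choice are assumptions, not the thesis's actual acquisition parameters.

```python
# Windowed Fourier (STFT) and wavelet views of a synthetic THz-like pulse.
import numpy as np
from scipy.signal import stft
import pywt

fs = 1e13                                    # assumed 10 THz sampling rate
t = np.arange(2048) / fs
pulse = (np.exp(-((t - 1e-10) ** 2) / (2 * (5e-13) ** 2))
         * np.cos(2 * np.pi * 1e12 * t))     # 1 THz carrier under a short envelope
pulse += 0.05 * np.random.randn(t.size)      # additive noise stand-in

# Windowed Fourier transform: a time-frequency energy map of the pulse.
f, seg_t, Z = stft(pulse, fs=fs, nperseg=256)
print("STFT grid:", Z.shape)

# Wavelet decomposition: most energy concentrates in a few coefficients,
# which is what makes wavelet-domain compression of such pulses attractive.
coeffs = pywt.wavedec(pulse, "db4", level=5)
flat = np.concatenate(coeffs)
kept = np.sum(np.abs(flat) > 0.1 * np.abs(flat).max())
print(f"{kept} of {flat.size} coefficients above 10% of the peak magnitude")
```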

14 citations


Cites background from "Image Processing"

  • ...The theory of Fourier series and transforms is described in more depth elsewhere (for example [29]), and their application to image and signal processing is also described elsewhere (for example [59])....


Journal ArticleDOI
TL;DR: In this paper, the authors used optical flow and particle tracking algorithms to measure mitochondrial movement in primary cultured cortical and hippocampal neurons, generating complete descriptions of the movement profiles of hundreds of thousands of mitochondria in an automated fashion with a processing time of approximately one hour.
Abstract: There is growing recognition that fast mitochondrial transport in neurons is disrupted in multiple neurological diseases and psychiatric disorders. However, a major constraint in identifying novel therapeutics based on mitochondrial transport is that the large-scale analysis of fast transport is time consuming. Here we describe methodologies for the automated analysis of fast mitochondrial transport from data acquired using a robotic microscope. We focused on addressing questions of measurement precision, speed, reliability, workflow ease, statistical processing, and presentation. We used optical flow and particle tracking algorithms, implemented in ImageJ, to measure mitochondrial movement in primary cultured cortical and hippocampal neurons. With them, we are able to generate complete descriptions of the movement profiles of hundreds of thousands of mitochondria in an automated fashion, with a processing time of approximately one hour. We describe the calibration of the parameters of the tracking algorithms and demonstrate that they are capable of measuring the fast transport of a single mitochondrion. We then show that the methods are capable of reliably measuring the inhibition of fast mitochondrial transport induced by the disruption of microtubules with the drug nocodazole in both hippocampal and cortical neurons. This work lays the foundation for future large-scale screens designed to identify compounds that modulate mitochondrial motility.
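
For readers who want a concrete starting point, the following is a minimal optical-flow sketch written with OpenCV rather than the authors' ImageJ plugins; it estimates a dense displacement field between consecutive frames and reduces it to a per-frame speed statistic. The calibration constants and the Farneback parameters are placeholders.

```python
# Dense optical flow between consecutive time-lapse frames, summarized as a
# median speed per frame pair. Not the authors' pipeline; an OpenCV stand-in.
import numpy as np
import cv2

def frame_speeds(frames_gray, px_per_um=1.0, s_per_frame=1.0):
    """Median speed (um/s) for each consecutive pair of 8-bit grayscale frames."""
    speeds = []
    for prev, curr in zip(frames_gray[:-1], frames_gray[1:]):
        # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)          # pixels per frame
        speeds.append(np.median(magnitude) / px_per_um / s_per_frame)
    return speeds

# Usage: frames_gray would be a list of 8-bit grayscale images from the time lapse;
# px_per_um and s_per_frame come from the microscope calibration.
```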

14 citations

Journal ArticleDOI
24 Sep 2020 - Energies
TL;DR: This study conducts saturation modeling in a gas hydrate (GH) sand sample with X-ray CT images using the following machine learning algorithms: random forest (RF), convolutional neural network (CNN), and support vector machine (SVM).
Abstract: This study conducts saturation modeling in a gas hydrate (GH) sand sample with X-ray CT images using the following machine learning algorithms: random forest (RF), convolutional neural network (CNN), and support vector machine (SVM). The RF yields the best prediction performance for water, gas, and GH saturation in the samples among the three methods. The CNN and SVM also exhibit sufficient performances under the restricted conditions, but require improvements to their reliability and overall prediction performance. Furthermore, the RF yields the lowest mean square error and highest correlation coefficient between the original and predicted datasets. Although the GH CT images aid in approximately understanding how fluids act in a GH sample, difficulties were encountered in accurately understanding the behavior of GH in a GH sample during the experiments owing to limited physical conditions. Therefore, the proposed saturation modeling method can aid in understanding the behavior of GH in a GH sample in real-time with the use of an appropriate machine learning method. Furthermore, highly accurate descriptions of each saturation, obtained from the proposed method, lead to an accurate resource evaluation and well-guided optimal depressurization for a target GH field production.
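
A minimal sketch of such saturation modeling with a random forest is given below, assuming voxel-wise CT features and laboratory saturation values are already available as arrays; the file names, feature set, and train/test split are illustrative, not the study's actual workflow.

```python
# Random-forest regression of saturation from CT-derived voxel features (sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# X: per-voxel features from the CT volume, e.g. intensity plus local mean/std.
# y: corresponding water, gas, or hydrate saturation (one model per phase here).
X = np.load("ct_voxel_features.npy")         # shape (n_voxels, n_features), assumed
y = np.load("water_saturation.npy")          # shape (n_voxels,), assumed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
```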

14 citations

Journal ArticleDOI
TL;DR: The present study confirms the effectiveness of using excised leaves on agar and suggests that this method could be applied to the rearing of other aphids, phytophagous mites, leaf miners and leaf‐gall formers.
Abstract: The present study evaluated the effectiveness of an aphid‐rearing method devised by Milner in 1981 using Acyrthosiphon pisum and its host plant Vicia faba. In the “agar‐leaf method,” excised leaves of V. faba were attached to the surface of 1% agar gel containing nutrient solution, and test aphids were transferred onto the leaves. Excised leaves grew in size and weight on the agar medium. Fecundity, longevity, body size and developmental time to adulthood were compared between aphids reared using the agar‐leaf method vs. those reared on V. faba seedlings under the same conditions. No significant difference was detected between the two treatments for any of the four parameters, suggesting that the aphids grew and reproduced on excised leaves as successfully as on V. faba seedlings. This method was also useful for inducing males and oviparous females at lower temperature and in short days. Therefore, the present study confirms the effectiveness of using excised leaves on agar and suggests that this method could be applied to the rearing of other aphids, phytophagous mites, leaf miners and leaf‐gall formers.

14 citations


Cites methods from "Image Processing"

  • ...Seven days after the start of the experiment, leaf area was measured using ImageJ version 1.50i (Abràmoff et al. 2004) after leaf images were captured to a computer, and fresh and dry leaf weight was measured for both treatments....


References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
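
The gray-tone spatial-dependence features described here are the classic co-occurrence (GLCM) texture features; a minimal sketch using scikit-image follows (graycomatrix/graycoprops, spelled greycomatrix in older releases, implement the co-occurrence matrix and a subset of these statistics), with the distance and angle choices as assumptions.

```python
# GLCM texture features in the spirit of the gray-tone spatial-dependence method.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8):
    """Contrast, homogeneity, energy and correlation from a 4-direction GLCM."""
    glcm = graycomatrix(gray_u8,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return {p: graycoprops(glcm, p).mean() for p in props}

# Example on a small synthetic ramp texture:
texture = (np.indices((64, 64)).sum(axis=0) % 8 * 32).astype(np.uint8)
print(glcm_features(texture))
```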

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
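
The principles named here can be illustrated without reproducing SPIHT itself: the toy sketch below wavelet-transforms an image and visits bit planes from the most significant downward, marking coefficients as significant once their magnitude exceeds the current threshold, which captures partial ordering by magnitude and ordered bit-plane transmission in miniature. The wavelet choice and number of passes are arbitrary assumptions.

```python
# Toy bit-plane significance passes over wavelet coefficients (not SPIHT itself).
import numpy as np
import pywt

def bitplane_significance(image, wavelet="bior4.4", levels=3, passes=6):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    flat, _ = pywt.coeffs_to_array(coeffs)              # all subbands in one array
    magnitude = np.abs(flat)
    threshold = 2.0 ** np.floor(np.log2(magnitude.max()))
    significant = np.zeros(flat.shape, dtype=bool)
    for _ in range(passes):
        newly = (~significant) & (magnitude >= threshold)   # "sorting pass"
        significant |= newly
        print(f"T={threshold:10.1f}: {newly.sum():6d} newly significant, "
              f"{significant.sum():6d} total")
        threshold /= 2                                      # next bit plane
    return significant

image = np.random.rand(128, 128) * 255        # stand-in for a real image
bitplane_significance(image)
```

Because coefficient magnitudes are heavily skewed, only a few coefficients become significant in the early passes, which is why truncating the bit stream early still reconstructs the largest (most important) coefficients first.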

5,890 citations

Journal ArticleDOI
TL;DR: Hearts were decellularized by coronary perfusion with detergents, preserving the underlying extracellular matrix and producing an acellular, perfusable vascular architecture, competent acellular valves, and intact chamber geometry; eight reseeded constructs maintained by coronary perfusion in a bioreactor could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations