Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
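As a concrete illustration of the co-occurrence idea mentioned in the abstract, the sketch below builds a normalized gray-level co-occurrence matrix in Python/NumPy. It is a generic textbook construction, not the MUCKE implementation; the quantization to 8 levels and the single (0, 1) offset are assumptions made for brevity.

```python
import numpy as np

def cooccurrence_matrix(gray, levels=8, offset=(0, 1)):
    """Count how often gray level i occurs next to gray level j at the given
    (row, col) offset -- the basic structure behind co-occurrence descriptors
    of spatial relationships."""
    # Quantize the 8-bit image to a small number of gray levels.
    q = (gray.astype(np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)

    dr, dc = offset
    rows, cols = q.shape
    glcm = np.zeros((levels, levels), dtype=np.int64)
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    # Normalize to a joint probability distribution.
    return glcm / glcm.sum()

# Example on a toy 8-bit image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
P = cooccurrence_matrix(img, levels=8, offset=(0, 1))
print(P.shape, P.sum())  # (8, 8) 1.0
```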
Citations
Journal ArticleDOI
TL;DR: With the extensive use of video and image-based communications over the World Wide Web, there is a strong demand for educating students in computer imaging and vision; this paper reviews the status of computer vision education today.
Abstract: Computer vision is becoming a mainstream subject of study in computer science and engineering. With the rapid explosion of multimedia and the extensive use of video and image-based communications over the World Wide Web, there is a strong demand for educating students to become knowledgeable in computer imaging and vision. The purpose of this paper is to review the status of computer vision education today.

67 citations


Cites background from "Image Processing"

  • ...Fortunately, in the last few years several new textbooks have been published including [96] and [97]....


Journal ArticleDOI
TL;DR: The cationic pathway, which provides the strong Na+ or K+ cations that alkalinize the lumen in the anterior midgut and then removes them, restoring a lower pH in the posterior midgut, is considered.
Abstract: Anopheles gambiae larvae (Diptera: Culicidae) live in freshwater with low Na+ concentrations, yet they use Na+ for alkalinization of the alimentary canal, for electrophoretic amino acid uptake and for nerve function. The metabolic pathway by which larvae accomplish these functions has anionic and cationic components that interact and allow the larva to conserve Na+ while excreting H+ and HCO3–. The anionic pathway consists of a metabolic CO2 diffusion process, carbonic anhydrase and Cl–/HCO3– exchangers; it provides weak HCO3– and weaker CO32– anions to the lumen. The cationic pathway consists of H+ V-ATPases and Na+/H+ antiporters (NHAs), Na+/K+ P-ATPases and Na+/H+ exchangers (NHEs) along with several (Na+ or K+):amino acid+/– symporters, a.k.a. nutrient amino acid transporters (NATs). This paper considers the cationic pathway, which provides the strong Na+ or K+ cations that alkalinize the lumen in the anterior midgut and then removes them, restoring a lower pH in the posterior midgut. A key member of the cationic pathway is a Na+/H+ antiporter, which was cloned recently from Anopheles gambiae larvae, localized strategically in plasma membranes of the alimentary canal and named AgNHA1 based upon its phylogeny. A phylogenetic comparison of all cloned NHAs and NHEs revealed that AgNHA1 is the first metazoan NHA to be cloned and localized and that it is in the same clade as electrophoretic prokaryotic NHAs that are driven by the electrogenic H+ F-ATPase. Like prokaryotic NHAs, AgNHA1 is thought to be electrophoretic and to be driven by the electrogenic H+ V-ATPase. Both AgNHA1 and alkalophilic bacterial NHAs face highly alkaline environments; to alkalinize the larval mosquito midgut lumen, AgNHA1, like the bacterial NHAs, would have to move nH+ inwardly and Na+ outwardly. Perhaps the alkaline environment that led to the evolution of electrophoretic prokaryotic NHAs also led to the evolution of an electrophoretic AgNHA1 in mosquito larvae. In support of this hypothesis, antibodies to both AgNHA1 and H+ V-ATPase label the same membranes in An. gambiae larvae. The localization of H+ V-ATPase together with the (Na+ or K+):amino acid+/– symporter, AgNAT8, on the same apical membrane in posterior midgut cells constitutes the functional equivalent of an NHE that lowers the pH in the posterior midgut lumen. All NATs characterized to date are Na+ or K+ symporters, so the deduction is likely to have wide application. The deduced colocalization of H+ V-ATPase, AgNHA1 and AgNAT8 on this membrane forms a pathway for local cycling of H+ and Na+ in the posterior midgut. The local H+ cycle would prevent unchecked acidification of the lumen while the local Na+ cycle would regulate pH and support Na+:amino acid+/– symport. Meanwhile, a long-range Na+ cycle first transfers Na+ from the blood to the gastric caeca and anterior midgut lumen, where it initiates alkalinization, and then returns Na+ from the rectal lumen to the blood, where it prevents loss of Na+ during H+ and HCO3– excretion. The localization of H+ V-ATPase and Na+/K+-ATPase in An. gambiae larvae parallels that reported for Aedes aegypti larvae. The deduced colocalization of the two ATPases along with NHA and NAT in the alimentary canal constitutes a cationic pathway for Na+-conserving midgut alkalinization and de-alkalinization, which has never been reported before.

66 citations


Cites methods from "Image Processing"

  • ...The images have been converted to 8 bit gray-scale using ImageJ software (Abramoff et al., 2004) to correct for the background differences due to the different antibodies used (Fig....

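The conversion quoted above was performed in ImageJ; purely as an illustration, an equivalent 8-bit gray-scale conversion can be sketched in Python with Pillow/NumPy. The luminance weights below follow the common ITU-R BT.601 convention, which may differ from the ImageJ settings used by the citing authors, and the file name is hypothetical.

```python
import numpy as np
from PIL import Image

def to_8bit_grayscale(path):
    """Convert an RGB image to an 8-bit gray-scale array, roughly analogous
    to ImageJ's 8-bit conversion (exact weighting depends on ImageJ settings)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    # ITU-R BT.601 luminance weights (an assumption for this sketch).
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)

# e.g. gray = to_8bit_grayscale("stained_section.tif")  # hypothetical file
```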

Journal ArticleDOI
TL;DR: In this article, the Monte Aquila Fault was found to have small cumulative deformation and subtle geomorphic expression, marked at the surface only by small scarps/warps.
Abstract: The Southern Apennines range of Italy presents significant challenges for active fault detection due to the complex structural setting inherited from previous contractional tectonics, coupled to the very recent (Middle Pleistocene) onset and slow slip rates of active normal faults. As shown by the Irpinia Fault, source of a M 6.9 earthquake in 1980, major faults might have small cumulative deformation and subtle geomorphic expression. A multidisciplinary study including morphological-tectonic, paleoseismological, and geophysical investigations has been carried out across the extensional Monte Aquila Fault, a poorly known structure that, similarly to the Irpinia Fault, runs across a ridge and is weakly expressed at the surface by small scarps/warps. The joint application of shallow reflection profiling, seismic and electrical resistivity tomography, and physical logging of cored sediments has proved crucial for proper fault detection because the performance of each technique was markedly different and very dependent on local geologic conditions. Geophysical data clearly (1) image a fault zone beneath suspected warps, (2) constrain the cumulative vertical slip to only 25–30 m, and (3) delineate colluvial packages suggesting coseismic surface faulting episodes. Paleoseismological investigations document at least three deformation events during the very Late Pleistocene (<20 ka) and Holocene. The clue to surface-rupturing episodes, together with the fault dimension inferred from geological mapping and microseismicity distribution, suggests a seismogenic potential of M 6.3. Our study provides the second documentation of a major active fault in southern Italy that, like the Irpinia Fault, does not bound a large intermontane basin but is nested within the mountain range, weakly modifying the landscape. This demonstrates that standard geomorphological approaches are insufficient to define a proper framework of active faults in this region. More generally, our applications have wide methodological implications for shallow imaging in complex terrains because they clearly illustrate the benefits of combining electrical resistivity and seismic techniques. The proposed multidisciplinary methodology can be effective in regions characterized by young and/or slow-slipping active faults.

66 citations


Cites background from "Image Processing"

  • ...4.1 Electrical Resistivity Tomography: Shallow faults have become a frequent target of ERT in the last decade (e.g. Suzuki et al., 2000; Caputo et al., 2003; Wise et al., 2003; Nguyen et al., 2005)....


Journal ArticleDOI
TL;DR: The data suggest that EphA2 may be a promising target for treating and preventing NSCLC, with potential mechanisms involving AKT, Src, focal adhesion kinase, Rho guanosine triphosphatases (GTPases), and extracellular signal-regulated kinase (ERK)-1/2.
Abstract: Overexpression of the receptor tyrosine kinase EphA2 occurs in non-small cell lung cancer (NSCLC) and a number of other human cancers. This overexpression correlates with a poor prognosis, smoking, and the presence of Kirsten rat sarcoma (K-Ras) mutations in NSCLC. In other cancers, EphA2 has been implicated in migration and metastasis. To determine if EphA2 can promote NSCLC progression, we examined the relationship of EphA2 with proliferation and migration in cell lines and with metastases in patient tumors. We also examined potential mechanisms involving AKT, Src, focal adhesion kinase, Rho guanosine triphosphatases (GTPase), and extracellular signal-regulated kinase (ERK)-1/2. Knockdown of EphA2 in NSCLC cell lines decreased proliferation (colony size) by 20% to 70% in four of five cell lines (P < 0.04) and cell migration by 7% to 75% in five of six cell lines (P < 0.03). ERK1/2 activation correlated with effects on proliferation, and inhibition of ERK1/2 activation also suppressed proliferation. In accordance with the in vitro data, high tumor expression of EphA2 was an independent prognostic factor in time to recurrence (P = 0.057) and time to metastases (P = 0.046) of NSCLC patients. We also examined EphA2 expression in the putative premalignant lung lesion, atypical adenomatous hyperplasia, and the noninvasive bronchioloalveolar component of adenocarcinoma because K-Ras mutations occur in atypical adenomatous hyperplasia and are common in lung adenocarcinomas. Both preinvasive lesion types expressed EphA2, showing its expression in the early pathogenesis of lung adenocarcinoma. Our data suggest that EphA2 may be a promising target for treating and preventing NSCLC.

66 citations

Journal ArticleDOI
TL;DR: The approach presented here serves as a model for a more quantitative analysis of SD-OCT images, allowing for more meaningful comparisons between subjects, clinics and SD-OCT systems.
Abstract: Aims: To examine the practical improvement in image quality afforded by a broadband light source in a clinical setting and to define image quality metrics for future use in evaluating spectral domain optical coherence tomography (SD-OCT) images. Methods: A commercially available SD-OCT system, configured with a standard source as well as an external broadband light source, was used to acquire 4 mm horizontal line scans of the right eye of 10 normal subjects. Scans were averaged to reduce speckling and multiple retinal layers were analysed in the resulting images. Results: For all layers there was a significant improvement in the mean local contrast (average improvement by a factor of 1.66) when using the broadband light source. Intersession variability was shown not to be a major contributing factor to the observed improvement in image quality obtained with the broadband light source. We report the first observation of sublamination within the inner plexiform layer visible with SD-OCT. Conclusion: The practical improvement with the broadband light source was significant, although it remains to be seen what the utility will be for diagnostic pathology. The approach presented here serves as a model for a more quantitative analysis of SD-OCT images, allowing for more meaningful comparisons between subjects, clinics and SD-OCT systems.
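As a rough sketch of the kind of measurement described here, the Python fragment below averages repeated B-scans and computes a simple Michelson-style contrast between two retinal layers. The contrast definition, layer segmentation and variable names are assumptions for illustration only, not necessarily the metric used in the study.

```python
import numpy as np

def average_scans(scans):
    """Average repeated B-scans of the same location to suppress speckle noise."""
    return np.mean(np.stack(scans, axis=0), axis=0)

def mean_local_contrast(bscan, layer_a_rows, layer_b_rows):
    """Michelson-style contrast between two retinal layers, computed per A-scan
    (column) and averaged.  One plausible definition of 'local contrast'."""
    a = bscan[layer_a_rows, :].mean(axis=0)  # mean intensity of layer A per column
    b = bscan[layer_b_rows, :].mean(axis=0)  # mean intensity of layer B per column
    contrast = np.abs(a - b) / (a + b + 1e-9)
    return float(contrast.mean())

# Usage sketch (hypothetical variables): compare standard vs. broadband source.
# ratio = mean_local_contrast(broadband_avg, rows_rnfl, rows_gcl) / \
#         mean_local_contrast(standard_avg, rows_rnfl, rows_gcl)
```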

66 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
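For reference, a few of the textural features defined in this paper can be computed directly from a normalized gray-tone co-occurrence matrix; the sketch below uses their standard textbook forms (contrast, angular second moment/energy, inverse difference moment) and does not reproduce the piecewise-linear or min-max decision rules used in the classification experiments.

```python
import numpy as np

def haralick_contrast(P):
    """Contrast: sum over (i - j)^2 * P[i, j] for a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

def haralick_energy(P):
    """Angular second moment (energy): sum of squared co-occurrence probabilities."""
    return float((P ** 2).sum())

def haralick_homogeneity(P):
    """Inverse difference moment: high when probability mass lies near the diagonal."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + (i - j) ** 2)).sum())
```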

20,442 citations

Book
03 Oct 1988
TL;DR: This book discusses two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
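To make the coding principles listed above concrete, the following deliberately simplified sketch performs flat significance and refinement passes over wavelet coefficients, one bit plane at a time. It illustrates partial ordering by magnitude and ordered bit-plane transmission, but omits the zerotree/set-partitioning structure that gives EZW and SPIHT their efficiency, so it is not the SPIHT algorithm itself.

```python
import numpy as np

def bitplane_encode(coeffs, num_planes=6):
    """Toy progressive encoder in the spirit of EZW/SPIHT: coefficients are sent
    one bit plane at a time, most significant plane first."""
    c = coeffs.astype(np.float64).ravel()
    T = 2.0 ** np.floor(np.log2(np.abs(c).max()))  # largest power-of-two threshold
    significant = np.zeros(c.size, dtype=bool)
    stream = []
    for _ in range(num_planes):
        newly = (~significant) & (np.abs(c) >= T)
        # Significance pass: positions (and signs) that first exceed the threshold.
        for idx in np.flatnonzero(newly):
            stream.append(("sig", int(idx), "-" if c[idx] < 0 else "+"))
        # Refinement pass: one more magnitude bit for previously significant coefficients.
        for idx in np.flatnonzero(significant):
            stream.append(("ref", int(idx), int(np.abs(c[idx]) // T) & 1))
        significant |= newly
        T /= 2.0
    return stream

# Truncating `stream` at any point yields a coarser but still valid approximation,
# which is what makes such a bit stream "embedded".
```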

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and yielded an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations