Proceedings Article

Image Processing

01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
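The idea of capturing spatial relationships between concepts with co-occurrence counts can be sketched in a few lines. The sketch below is illustrative only (the concept labels and grid partition are hypothetical, not MUCKE's actual representation): it counts how often pairs of concept labels appear in adjacent cells of an image partition.

```python
from collections import Counter
from itertools import product

def cooccurrence(grid):
    """Count horizontally and vertically adjacent concept-label pairs.

    `grid` is a 2-D list of concept labels; each pair is stored sorted,
    so (a, b) and (b, a) accumulate in the same counter cell.
    """
    counts = Counter()
    rows, cols = len(grid), len(grid[0])
    for r, c in product(range(rows), range(cols)):
        if c + 1 < cols:  # right neighbour
            counts[tuple(sorted((grid[r][c], grid[r][c + 1])))] += 1
        if r + 1 < rows:  # lower neighbour
            counts[tuple(sorted((grid[r][c], grid[r + 1][c])))] += 1
    return counts

# Hypothetical concept labels over a 2x3 image partition.
grid = [["sky", "sky", "tree"],
        ["road", "car", "tree"]]
print(cooccurrence(grid)[("sky", "sky")])  # adjacent sky-sky pairs
```

Unlike a bag-of-concepts histogram, such a matrix distinguishes an image where "car" touches "road" from one where the same concepts appear far apart.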
Citations
Journal ArticleDOI
TL;DR: SiteEngine, a novel method that assumes no sequence or fold similarity, recognizes proteins with similar binding sites that may perform similar functions; it may aid in assigning function and in the classification of binding patterns.

276 citations

Journal ArticleDOI
TL;DR: Zhang et al. propose a new color transform model to find the characteristic "vehicle color" for quickly locating possible vehicle candidates; three features, including corners, edge maps, and coefficients of wavelet transforms, are then used to construct a cascade multichannel classifier.
Abstract: This paper presents a novel vehicle detection approach for detecting vehicles from static images using color and edges. Different from traditional methods, which use motion features to detect vehicles, this method introduces a new color transform model to find the important "vehicle color" for quickly locating possible vehicle candidates. Since vehicles have various colors under different weather and lighting conditions, few works have been proposed for the detection of vehicles using colors. The proposed color transform model has excellent capabilities for identifying vehicle pixels from the background, even when pixels are lit under varying illumination. After finding possible vehicle candidates, three important features, including corners, edge maps, and coefficients of wavelet transforms, are used to construct a cascade multichannel classifier. With this classifier, an effective scan can verify all possible candidates quickly, because most background pixels are eliminated in advance by the color feature. Experimental results show that the integration of global color features and local edge features is powerful for vehicle detection. The average accuracy rate of vehicle detection is 94.9%.
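The cascade idea in the abstract (a cheap color gate rejecting most background, followed by progressively more expensive verification stages) can be sketched generically. The thresholds and stage functions below are illustrative placeholders, not the paper's learned classifiers:

```python
def color_gate(pixel):
    """Stage 0: cheap color test that discards most background pixels.
    The greyish-paint heuristic here is illustrative, not the paper's model."""
    r, g, b = pixel
    return abs(r - g) < 30 and abs(g - b) < 30

def cascade(candidate, stages):
    """Run a candidate through successive verification stages; reject early.
    all() short-circuits, so cheap stages filter work away from costly ones."""
    return all(stage(candidate) for stage in stages)

# Illustrative stages standing in for the corner, edge-map, and
# wavelet-coefficient channels (the real stages are learned).
stages = [
    lambda c: c["corners"] >= 4,
    lambda c: c["edge_density"] > 0.2,
    lambda c: c["wavelet_energy"] > 1.0,
]
candidate = {"corners": 6, "edge_density": 0.35, "wavelet_energy": 2.4}
print(cascade(candidate, stages))  # passes all three stages
```

The speed claim in the abstract follows from this structure: scanning is fast because most windows never reach the expensive stages.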

262 citations

Journal ArticleDOI
TL;DR: A case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson.
Abstract: Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA (Field Programmable Gate Array) hardware has only just begun. FPGA-based trigger and data acquisition systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. A companion compiler package for this work is developed based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.
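A key ingredient behind FPGA-friendly inference of the kind hls4ml targets is fixed-point arithmetic with tunable bit widths. The sketch below is not hls4ml code; it is a minimal pure-Python illustration, with assumed bit widths and made-up weights, of how quantizing a dense layer to a fixed-point grid trades precision for hardware resources:

```python
def to_fixed(x, int_bits=2, frac_bits=6):
    """Round x onto a signed fixed-point grid (ap_fixed-style).
    Bit widths here are assumptions; tools like hls4ml tune them per layer."""
    scale = 1 << frac_bits
    lo = -(1 << int_bits)
    hi = (1 << int_bits) - 1 / scale
    q = round(x * scale) / scale   # snap to the grid
    return max(lo, min(hi, q))     # saturate at the representable range

def dense(x, w, b):
    """One dense layer with ReLU, all values held on the fixed-point grid."""
    y = [to_fixed(sum(to_fixed(wi) * xi for wi, xi in zip(row, x)) + bi)
         for row, bi in zip(w, b)]
    return [max(0.0, v) for v in y]

w = [[0.49, -1.2], [0.03, 0.8]]   # illustrative weights
b = [0.1, -0.05]
print(dense([1.0, 0.5], w, b))
```

Fewer fractional bits mean smaller multipliers and lower latency on the FPGA, at the cost of rounding error; mapping out that trade-off is exactly the resource-versus-hyperparameter scan the paper describes.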

253 citations

Journal ArticleDOI
TL;DR: Abscisic acid (ABA) signaling is necessary for the successful adjustment of the leaf to repeated episodes of high light and involves maintenance of photochemical quenching, which is required for dissipation of excess excitation energy.
Abstract: Previously, it has been shown that Arabidopsis thaliana leaves exposed to high light accumulate hydrogen peroxide (H2O2) in bundle sheath cell (BSC) chloroplasts as part of a retrograde signaling network that induces ASCORBATE PEROXIDASE2 (APX2). Abscisic acid (ABA) signaling has been postulated to be involved in this network. To investigate the proposed role of ABA, a combination of physiological, pharmacological, bioinformatic, and molecular genetic approaches was used. ABA biosynthesis is initiated in vascular parenchyma and activates a signaling network in neighboring BSCs. This signaling network includes the Gα subunit of the heterotrimeric G protein complex, the OPEN STOMATA1 protein kinase, and extracellular H2O2, which together coordinate with a redox-retrograde signal from BSC chloroplasts to activate APX2 expression. High light–responsive genes expressed in other leaf tissues are subject to a coordination of chloroplast retrograde signaling and transcellular signaling activated by ABA synthesized in vascular cells. ABA is necessary for the successful adjustment of the leaf to repeated episodes of high light. This process involves maintenance of photochemical quenching, which is required for dissipation of excess excitation energy.

250 citations

Journal ArticleDOI
TL;DR: It is proposed that upon ethylene signaling, FIT is less susceptible to proteasomal degradation, presumably due to a physical interaction between FIT and EIN3/EIL1, one of the signals that triggers Fe deficiency responses at the transcriptional and posttranscriptional levels.
Abstract: Understanding the regulation of key genes involved in plant iron acquisition is of crucial importance for breeding of micronutrient-enriched crops. The basic helix-loop-helix protein FER-LIKE FE DEFICIENCY-INDUCED TRANSCRIPTION FACTOR (FIT), a central regulator of Fe acquisition in roots, is regulated by environmental cues and internal requirements for iron at the transcriptional and posttranscriptional levels. The plant stress hormone ethylene promotes iron acquisition, but the molecular basis for this remained unknown. Here, we demonstrate a direct molecular link between ethylene signaling and FIT. We identified ETHYLENE INSENSITIVE3 (EIN3) and ETHYLENE INSENSITIVE3-LIKE1 (EIL1) in a screen for direct FIT interaction partners and validated their physical interaction in planta. We demonstrate that the ein3 eil1 transcriptome was affected to a greater extent upon iron deficiency than normal iron compared with the wild type. Ethylene signaling by way of EIN3/EIL1 was required for full-level FIT accumulation. FIT levels were reduced upon application of aminoethoxyvinylglycine and in the ein3 eil1 background. MG132 could restore FIT levels. We propose that upon ethylene signaling, FIT is less susceptible to proteasomal degradation, presumably due to a physical interaction between FIT and EIN3/EIL1. Increased FIT abundance then leads to the high level of expression of genes required for Fe acquisition. This way, ethylene is one of the signals that triggers Fe deficiency responses at the transcriptional and posttranscriptional levels.

243 citations

References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks on three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test-set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
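The gray-tone spatial-dependence features the paper describes are computed from a gray-level co-occurrence matrix (GLCM). A minimal pure-Python sketch, using a toy 2-level image and only the horizontal offset (the paper uses several offsets and many more features):

```python
def glcm(img, levels, dr=0, dc=1):
    """Normalized, symmetric gray-level co-occurrence matrix for one offset.
    Entry p[i][j] estimates the probability of gray levels i and j occurring
    at pixels separated by the displacement (dr, dc)."""
    p = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                i, j = img[r][c], img[r2][c2]
                p[i][j] += 1
                p[j][i] += 1  # symmetric counting
                total += 2
    return [[v / total for v in row] for row in p]

def contrast(p):
    """One Haralick feature: sum over i, j of p(i, j) * (i - j)^2."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1],
       [0, 1, 1]]
print(contrast(glcm(img, levels=2)))
```

High contrast indicates frequent large gray-level jumps between neighbouring pixels (coarse or busy texture); other Haralick features such as energy and homogeneity are sums over the same matrix.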

20,442 citations

Book
03 Oct 1988
TL;DR: This book covers two-dimensional systems and mathematical preliminaries and their applications in image analysis and computer vision, as well as image reconstruction from projections and image enhancement.
Abstract: Introduction. 1. Two Dimensional Systems and Mathematical Preliminaries. 2. Image Perception. 3. Image Sampling and Quantization. 4. Image Transforms. 5. Image Representation by Stochastic Models. 6. Image Enhancement. 7. Image Filtering and Restoration. 8. Image Analysis and Computer Vision. 9. Image Reconstruction From Projections. 10. Image Data Compression.

8,504 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
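The "partial ordering by magnitude" and "ordered bit plane transmission" principles can be illustrated without the full SPIHT tree machinery. The sketch below is a deliberately simplified stand-in: it performs only the significance pass against a halving threshold (real SPIHT also refines already-significant coefficients on each pass and codes significance jointly across the wavelet tree):

```python
def significance_passes(coeffs, n_passes):
    """Progressive reconstruction by magnitude-threshold halving.

    Each pass marks which coefficients become significant against the
    current threshold t; the decoder places a newly significant
    coefficient at the centre of its uncertainty interval [t, 2t).
    """
    t = 1 << (max(abs(c) for c in coeffs).bit_length() - 1)
    recon = [0.0] * len(coeffs)
    for _ in range(n_passes):
        for k, c in enumerate(coeffs):
            if recon[k] == 0.0 and abs(c) >= t:
                sign = 1 if c >= 0 else -1
                recon[k] = sign * 1.5 * t  # centre of [t, 2t)
        t //= 2
        if t == 0:
            break
    return recon

coeffs = [34, -20, 5, 3]
print(significance_passes(coeffs, 1))  # only |c| >= 32 transmitted so far
print(significance_passes(coeffs, 2))  # threshold halved to 16
```

Because the largest coefficients are sent first, truncating the bit stream at any point yields the best reconstruction available for that budget, which is what makes the coding embedded.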

5,890 citations

Journal ArticleDOI
TL;DR: Hearts decellularized by coronary perfusion with detergents preserved the underlying extracellular matrix and produced an acellular, perfusable vascular architecture, competent acellular valves, and intact chamber geometry; reseeded constructs could generate pump function in a modified working heart preparation.
Abstract: About 3,000 individuals in the United States are awaiting a donor heart; worldwide, 22 million individuals are living with heart failure. A bioartificial heart is a theoretical alternative to transplantation or mechanical left ventricular support. Generating a bioartificial heart requires engineering of cardiac architecture, appropriate cellular constituents and pump function. We decellularized hearts by coronary perfusion with detergents, preserved the underlying extracellular matrix, and produced an acellular, perfusable vascular architecture, competent acellular valves and intact chamber geometry. To mimic cardiac cell composition, we reseeded these constructs with cardiac or endothelial cells. To establish function, we maintained eight constructs for up to 28 d by coronary perfusion in a bioreactor that simulated cardiac physiology. By day 4, we observed macroscopic contractions. By day 8, under physiological load and electrical stimulation, constructs could generate pump function (equivalent to about 2% of adult or 25% of 16-week fetal heart function) in a modified working heart preparation.

2,454 citations

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification, motivated by the observation that the human iris provides a particularly interesting structure on which to base noninvasive biometric assessment.
Abstract: This paper examines automated iris recognition as a biometrically based technology for personal identification and verification. The motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment. In particular the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels. Further, since the iris is an overt body, its appearance is amenable to remote examination with the aid of a machine vision system. The body of this paper details issues in the design and operation of such systems. For the sake of illustration, extant systems are described in some amount of detail.

2,046 citations