Topic

Real image

About: Real image is a research topic. Over the lifetime, 11765 publications have been published within this topic receiving 255887 citations.


Papers
Proceedings ArticleDOI
Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, Dilip Krishnan
01 Jul 2017
TL;DR: In this paper, a generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain by learning, in an unsupervised manner, a pixel-space transformation from one domain to the other.
Abstract: Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.

1,549 citations
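
The pixel-space adaptation described above can be pictured with a short PyTorch sketch: a generator reshapes synthetic source images, a discriminator judges whether they look like real target-domain images, and a task classifier is trained on the adapted images with the original source labels. The network architectures, loss weighting, and the train_step helper below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of pixel-level domain adaptation with a GAN (assumed architectures
# and losses; not the paper's exact networks or hyperparameters).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a source (synthetic) image to an 'adapted' image in pixel space."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like it came from the target domain."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x)

class TaskClassifier(nn.Module):
    """Trained on adapted source images using the original source labels."""
    def __init__(self, channels=3, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, C, opt_g, opt_d, x_src, y_src, x_tgt):
    """One update; opt_g is assumed to cover the parameters of both G and C."""
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

    # 1) Discriminator: real target images vs. adapted (fake) source images.
    x_fake = G(x_src).detach()
    d_loss = bce(D(x_tgt), torch.ones(x_tgt.size(0), 1)) + \
             bce(D(x_fake), torch.zeros(x_src.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator + classifier: fool the discriminator while keeping the
    #    source labels predictable from the adapted images.
    x_adapt = G(x_src)
    g_loss = bce(D(x_adapt), torch.ones(x_src.size(0), 1)) + ce(C(x_adapt), y_src)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```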

Journal ArticleDOI
TL;DR: The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed; applying the criterion to local windows in the class-map yields the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions, respectively.
Abstract: A method for unsupervised segmentation of color-texture regions in images and video is presented. This method, which we refer to as JSEG, consists of two independent steps: color quantization and spatial segmentation. In the first step, colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by their corresponding color class labels, thus forming a class-map of the image. The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed. Applying the criterion to local windows in the class-map results in the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions. A region growing method is then used to segment the image based on the multiscale J-images. A similar approach is applied to video sequences. An additional region tracking scheme is embedded into the region growing process to achieve consistent segmentation and tracking results, even for scenes with nonrigid object motion. Experiments show the robustness of the JSEG algorithm on real images and video.

1,476 citations
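
The J criterion at the core of JSEG can be sketched on a precomputed class-map: within a local window, J compares the total spatial scatter of pixel positions with the scatter measured separately inside each color class, so uniform windows give low J and windows straddling a region boundary give high J. The window size and brute-force loops below are simplifying assumptions; the multiscale J-images and region-growing stage of the full algorithm are omitted.

```python
# Sketch of the JSEG "J-image" computed on a precomputed class-map.
import numpy as np

def j_value(class_window):
    """J = (S_T - S_W) / S_W for one window of color-class labels."""
    ys, xs = np.indices(class_window.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    labels = class_window.ravel()
    s_total = ((pts - pts.mean(axis=0)) ** 2).sum()      # total spatial scatter
    s_within = 0.0
    for c in np.unique(labels):
        p = pts[labels == c]
        s_within += ((p - p.mean(axis=0)) ** 2).sum()    # per-class scatter
    return (s_total - s_within) / s_within if s_within > 0 else 0.0

def j_image(class_map, window=9):
    """Slide a window over the class-map; high J marks likely region boundaries."""
    h, w = class_map.shape
    r = window // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = j_value(class_map[i - r:i + r + 1, j - r:j + r + 1])
    return out

# Example: a synthetic class-map with two regions gives high J along the seam.
cmap = np.zeros((40, 40), dtype=int)
cmap[:, 20:] = 1
J = j_image(cmap)
```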

Journal ArticleDOI
TL;DR: In this article, facial features are detected and described with deformable templates: the feature of interest is represented by a parameterized template, and an energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template.
Abstract: We propose a method for detecting and describing features of faces using deformable templates. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the feature. We illustrate this method by showing deformable templates detecting eyes and mouths in real images. We also demonstrate their ability to track features.

1,375 citations
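
A stripped-down illustration of deformable-template fitting: instead of the paper's full eye template, a single circle parameterized by (cx, cy, r) is deformed to minimize an energy that rewards sitting on strong image edges. The energy definition, optimizer choice, and synthetic test image are assumptions made for the sketch.

```python
# Sketch of deformable-template fitting: minimize an energy linking a
# parameterized shape (a circle, standing in for an eye template) to image
# edge strength. Assumed energy; not the paper's full template.
import numpy as np
from scipy.ndimage import sobel
from scipy.optimize import minimize

def edge_map(image):
    """Gradient magnitude as a simple edge-potential field."""
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    return np.hypot(gx, gy)

def energy(params, edges):
    """Negative mean edge strength sampled along the circle (cx, cy, r)."""
    cx, cy, r = params
    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    xs = np.clip(cx + r * np.cos(t), 0, edges.shape[1] - 1).astype(int)
    ys = np.clip(cy + r * np.sin(t), 0, edges.shape[0] - 1).astype(int)
    return -edges[ys, xs].mean()   # lower energy = contour sits on strong edges

def fit_template(image, init=(32.0, 32.0, 10.0)):
    """Deform the template parameters to minimize the energy."""
    edges = edge_map(image.astype(float))
    res = minimize(energy, np.array(init), args=(edges,), method="Nelder-Mead")
    return res.x   # fitted (cx, cy, r): descriptors for the detected feature

# Example: a synthetic image with a bright disc; the circle should lock onto it.
img = np.zeros((64, 64))
yy, xx = np.indices(img.shape)
img[(yy - 30) ** 2 + (xx - 34) ** 2 < 12 ** 2] = 1.0
print(fit_template(img))
```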

Journal ArticleDOI
TL;DR: A novel region-based method for image segmentation, which is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction).
Abstract: Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.

1,201 citations
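
The idea of jointly segmenting an image and estimating a slowly varying bias field can be illustrated with a toy alternating scheme built on the same image model (observed intensity is roughly bias times true intensity): assign pixels to intensity clusters given the current bias estimate, then re-estimate the bias as a smoothed ratio of observed to predicted intensities. This is a deliberate simplification; the paper's level-set energy and its minimization are not reproduced here, and the smoothing scale and class count below are assumptions.

```python
# Toy alternating segmentation / bias-field estimation, inspired by the
# multiplicative image model I ~ b * J + n (illustration only).
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_with_bias(image, n_classes=2, iters=20, sigma=8.0):
    img = image.astype(float)
    bias = np.ones_like(img)                          # multiplicative bias field b
    means = np.quantile(img, np.linspace(0.1, 0.9, n_classes))  # initial class means
    for _ in range(iters):
        corrected = img / np.maximum(bias, 1e-6)      # bias-corrected intensities
        # (a) hard assignment: nearest class mean in the corrected image
        labels = np.argmin(np.abs(corrected[..., None] - means), axis=-1)
        for k in range(n_classes):                    # update class means
            if np.any(labels == k):
                means[k] = corrected[labels == k].mean()
        # (b) bias = observed / predicted, smoothed to enforce slow variation
        predicted = means[labels]
        bias = gaussian_filter(img / np.maximum(predicted, 1e-6), sigma)
    return labels, bias

# Example: a two-region image corrupted by a smooth multiplicative field.
yy, xx = np.indices((96, 96))
truth = (xx > 48).astype(float) * 0.6 + 0.2
field = 1.0 + 0.4 * (yy / 95.0)
labels, est_bias = segment_with_bias(truth * field)
```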

Journal ArticleDOI
TL;DR: This paper describes a hierarchical computational framework for the determination of dense displacement fields from a pair of images, and an algorithm consistent with that framework, based on a scale-based separation of the image intensity information and the process of measuring motion.
Abstract: The robust measurement of visual motion from digitized image sequences has been an important but difficult problem in computer vision. This paper describes a hierarchical computational framework for the determination of dense displacement fields from a pair of images, and an algorithm consistent with that framework. Our framework is based on the separation of the image intensity information, as well as the process of measuring motion, according to scale. The large-scale intensity information is first used to obtain rough estimates of image motion, which are then refined by using intensity information at smaller scales. The estimates are in the form of displacement (or velocity) vectors for pixels and are accompanied by a direction-dependent confidence measure. A smoothness constraint is employed to propagate the measurements with high confidence to their neighboring areas where the confidences are low. At all levels, the computations are pixel-parallel, uniform across the image, and based on information from a small neighborhood of a pixel. For our algorithm, the local displacement vectors are determined by minimizing the sum-of-squared differences (SSD) of intensities, the confidence measures are derived from the shape of the SSD surface, and the smoothness constraint is cast in the form of energy minimization. Results of applying our algorithm to pairs of real images are included. In addition to our own

1,175 citations
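
The coarse-to-fine SSD idea can be sketched as follows: build an image pyramid, estimate integer displacements at the coarsest level by exhaustive SSD matching, then upsample the flow, warp one image by it, and estimate a small residual at each finer level. Window size, search range, pyramid depth, and the warping helper are illustrative assumptions; the confidence measure and smoothness propagation of the original framework are omitted.

```python
# Sketch of coarse-to-fine displacement estimation by SSD matching.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, zoom

def ssd_flow(img1, img2, search=2, win=3):
    """Best integer displacement per pixel by brute-force SSD matching."""
    h, w = img1.shape
    r = win // 2
    flow = np.zeros((h, w, 2))
    pad1 = np.pad(img1, r, mode="edge")
    pad2 = np.pad(img2, search + r, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad1[y:y + win, x:x + win]
            best, best_uv = np.inf, (0.0, 0.0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = pad2[y + search + dy:y + search + dy + win,
                                x + search + dx:x + search + dx + win]
                    ssd = ((patch - cand) ** 2).sum()   # SSD of intensities
                    if ssd < best:
                        best, best_uv = ssd, (dy, dx)
            flow[y, x] = best_uv
    return flow

def warp(img, flow):
    """Sample img at positions shifted by the current flow estimate."""
    yy, xx = np.indices(img.shape).astype(float)
    return map_coordinates(img, [yy + flow[..., 0], xx + flow[..., 1]],
                           order=1, mode="nearest")

def coarse_to_fine_flow(img1, img2, levels=3):
    """Estimate at the coarsest scale, then upsample and refine the residual."""
    pyr1, pyr2 = [img1.astype(float)], [img2.astype(float)]
    for _ in range(levels - 1):
        pyr1.append(gaussian_filter(pyr1[-1], 1.0)[::2, ::2])
        pyr2.append(gaussian_filter(pyr2[-1], 1.0)[::2, ::2])
    flow = ssd_flow(pyr1[-1], pyr2[-1])
    for lvl in range(levels - 2, -1, -1):
        h, w = pyr1[lvl].shape
        flow = zoom(flow, (2, 2, 1), order=1)[:h, :w] * 2   # upsample + rescale
        flow += ssd_flow(pyr1[lvl], warp(pyr2[lvl], flow), search=1)
    return flow
```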


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 94% related
Feature (computer vision): 128.2K papers, 1.7M citations, 93% related
Convolutional neural network: 74.7K papers, 2M citations, 92% related
Feature extraction: 111.8K papers, 2.1M citations, 92% related
Image processing: 229.9K papers, 3.5M citations, 92% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    7
2021    383
2020    545
2019    562
2018    444
2017    413