Topic

Standard test image

About: Standard test image is a research topic. Over the lifetime, 5217 publications have been published within this topic receiving 98486 citations.


Papers
Posted Content
TL;DR: In this article, a semantic-aware generative adversarial network (GAN) is proposed to transform the test image into the appearance of the source domain, with the semantic structural information being well preserved.
Abstract: In spite of the compelling achievements that deep neural networks (DNNs) have made in medical image computing, these deep models often suffer from degraded performance when applied to new test datasets with domain shift. In this paper, we present a novel unsupervised domain adaptation approach for segmentation tasks by designing semantic-aware generative adversarial networks (GANs). Specifically, we transform the test image into the appearance of the source domain, with the semantic structural information well preserved, which is achieved by imposing a nested adversarial learning in the semantic label space. In this way, the segmentation DNN learned from the source domain can be directly generalized to the transformed test image, eliminating the need to train a new model for every new target dataset. Our domain adaptation procedure is unsupervised, without using any target domain labels. The adversarial learning of our network is guided by a GAN loss for mapping data distributions, a cycle-consistency loss for retaining pixel-level content, and a semantic-aware loss for enhancing structural information. We validated our method on two different public chest X-ray datasets for left/right lung segmentation. Experimental results show that the segmentation performance of our unsupervised approach is highly competitive with the upper bound of supervised transfer learning.
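The objective described above combines three terms. The following is a rough, self-contained sketch of how such a combined loss can be wired up, using toy placeholder networks and illustrative loss weights; it is not the authors' implementation, and the semantic term here is a simple pseudo-label consistency loss rather than the paper's nested adversarial learning in label space.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Tiny stand-in for the real generator / segmenter backbones."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G_t2s = ConvBlock(1, 1)   # maps target-domain images to source appearance
G_s2t = ConvBlock(1, 1)   # maps back, used for the cycle-consistency term
D_s = nn.Sequential(nn.Conv2d(1, 1, 4, stride=2), nn.Flatten(), nn.LazyLinear(1))
Seg = ConvBlock(1, 2)     # segmenter assumed pre-trained on the source domain
for p in Seg.parameters():
    p.requires_grad_(False)

gan_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
sem_loss = nn.CrossEntropyLoss()

x_t = torch.rand(2, 1, 64, 64)        # unlabeled target-domain test images

fake_s = G_t2s(x_t)                   # test image rendered in source appearance
rec_t = G_s2t(fake_s)                 # cycled back to the target domain

# 1) GAN loss: transformed images should look like source-domain data
d_out = D_s(fake_s)
adv = gan_loss(d_out, torch.ones_like(d_out))

# 2) cycle-consistency loss: pixel-level content is retained
cyc = l1_loss(rec_t, x_t)

# 3) semantic-aware loss: the source segmenter should see the same structures
#    before and after the cycle (pseudo-labels stand in for real labels)
pseudo = Seg(fake_s).argmax(dim=1)
sem = sem_loss(Seg(rec_t), pseudo)

total = adv + 10.0 * cyc + 1.0 * sem  # weights are illustrative only
total.backward()
```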

70 citations

Journal ArticleDOI
TL;DR: A robust synthetic aperture radar (SAR) automatic target recognition method based on the 3-D scattering center model, which can efficiently predict the 2-D scattering centers as well as the scattering field of the target at arbitrary poses, is proposed.
Abstract: This paper proposes a robust synthetic aperture radar (SAR) automatic target recognition method based on the 3-D scattering center model. The 3-D scattering center model is established offline from the CAD model of the target using a forward method, which can efficiently predict the 2-D scattering centers as well as the scattering field of the target at arbitrary poses. For the SAR images to be classified, the 2-D scattering centers are extracted based on the attributed scattering center model and matched with the predicted scattering center set using a neighbor matching algorithm. The selected model scattering centers are used to reconstruct an SAR image based on the 3-D scattering center model, which is compared with the test image to obtain a robust similarity. The designed similarity measure comprehensively considers the image correlation between the test image and the model-reconstructed image as well as the model redundancy in describing the test image. For target recognition, the model with the highest similarity is determined to be the target type of the test SAR image, unless the image is rejected as an outlier. Experiments are conducted on both data simulated by an electromagnetic code and data measured in the moving and stationary target acquisition and recognition program under the standard operating condition and various extended operating conditions to validate the effectiveness and robustness of the proposed method.
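To make the matching-and-scoring step concrete, here is a toy sketch under assumed details: greedy nearest-neighbour matching of 2-D scattering-center positions and a correlation-based similarity with a redundancy penalty. It is an illustration only, not the paper's implementation.

```python
import numpy as np

def neighbor_match(test_centers, model_centers, radius=1.0):
    """Greedy nearest-neighbour matching of 2-D scattering-center positions."""
    matched, used = [], set()
    for t in test_centers:
        d = np.linalg.norm(model_centers - t, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius and j not in used:
            matched.append(j)
            used.add(j)
    return matched

def similarity(test_img, recon_img, n_matched, n_model):
    """Image correlation between test and reconstructed images, down-weighted
    by the fraction of model scattering centers left unused (redundancy)."""
    corr = np.corrcoef(test_img.ravel(), recon_img.ravel())[0, 1]
    redundancy = 1.0 - n_matched / max(n_model, 1)
    return corr * (1.0 - redundancy)

# toy usage: classify by picking the target model with the highest similarity
test_centers = np.array([[0.1, 0.2], [1.0, 1.1]])
model_centers = np.array([[0.0, 0.2], [1.1, 1.0], [3.0, 3.0]])
matches = neighbor_match(test_centers, model_centers)

test_img = np.random.rand(32, 32)
recon_img = test_img + 0.05 * np.random.rand(32, 32)
score = similarity(test_img, recon_img, len(matches), len(model_centers))
print(f"matched {len(matches)} centers, similarity {score:.3f}")
```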

70 citations

Proceedings ArticleDOI
TL;DR: In this paper, the authors propose to solve image tagging by estimating the principal direction for an image, and exploit linear mappings and nonlinear deep neural networks to approximate the principal directions from an input image.
Abstract: The well-known word analogy experiments show that the recent word vectors capture fine-grained linguistic regularities in words by linear vector offsets, but it is unclear how well the simple vector offsets can encode visual regularities over words. We study a particular image-word relevance relation in this paper. Our results show that the word vectors of relevant tags for a given image rank ahead of the irrelevant tags, along a principal direction in the word vector space. Inspired by this observation, we propose to solve image tagging by estimating the principal direction for an image. Particularly, we exploit linear mappings and nonlinear deep neural networks to approximate the principal direction from an input image. We arrive at a quite versatile tagging model. It runs fast given a test image, in constant time w.r.t. the training set size. It not only gives superior performance for the conventional tagging task on the NUS-WIDE dataset, but also outperforms competitive baselines on annotating images with previously unseen tags.
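Once a principal direction has been estimated for an image, tagging reduces to ranking tag word vectors by their projection onto that direction, which is why inference is constant time in the training set size. A minimal sketch with a toy vocabulary follows; the random embeddings and the way the direction is obtained are placeholders, not the authors' model.

```python
import numpy as np

def rank_tags(direction, tag_vectors, tag_names, top_k=5):
    """Rank candidate tags by the dot product of their word vectors with the
    estimated principal direction for the image."""
    direction = direction / np.linalg.norm(direction)
    scores = tag_vectors @ direction               # one projection per tag
    order = np.argsort(-scores)
    return [(tag_names[i], float(scores[i])) for i in order[:top_k]]

# toy vocabulary of tag word vectors (rows); a real system would use
# pretrained embeddings such as word2vec or GloVe
tag_names = ["dog", "beach", "car", "sky", "cat"]
tag_vectors = np.random.randn(len(tag_names), 50)

# pretend the image-to-direction model predicted a direction near "dog" + "sky"
direction = tag_vectors[0] + tag_vectors[3]
print(rank_tags(direction, tag_vectors, tag_names))
```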

70 citations

Proceedings ArticleDOI
Lin Zhang, Lijun Zhang, Xiao Liu, Ying Shen, Shaoming Zhang, Shengjie Zhao
15 Oct 2019
TL;DR: This paper proposes a "zero-shot" scheme for back-lit image restoration, which exploits the power of deep learning but does not rely on any prior image examples or prior training, and is the first unsupervised CNN-based back-lit image restoration method.
Abstract: How to restore back-lit images still remains a challenging task. State-of-the-art methods in this field are based on supervised learning and thus they are usually restricted to specific training data. In this paper, we propose a "zero-shot" scheme for back-lit image restoration, which exploits the power of deep learning but does not rely on any prior image examples or prior training. Specifically, we train a small image-specific CNN, namely ExCNet (short for Exposure Correction Network), at test time to estimate the "S-curve" that best fits the test back-lit image. Once the S-curve is estimated, the test image can then be restored straightforwardly. ExCNet can adapt itself to different settings per image. This makes our approach widely applicable to different shooting scenes and kinds of back-lighting conditions. Statistical studies performed on 1512 real back-lit images demonstrate that our approach can outperform the competitors by a large margin. To the best of our knowledge, our scheme is the first unsupervised CNN-based back-lit image restoration method. To make the results reproducible, the source code is available at https://cslinzhang.github.io/ExCNet/.
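A heavily simplified sketch of the zero-shot idea is given below: a per-image tone curve is fitted at test time by optimizing a no-reference objective on the test image itself, with no training data. The logistic S-curve and the toy exposure objective are stand-ins and do not follow the ExCNet formulation.

```python
import torch

img = torch.rand(1, 1, 64, 64) * 0.3          # a dark, back-lit-looking image in [0, 1]

# two learnable S-curve parameters: slope and midpoint
slope = torch.tensor(8.0, requires_grad=True)
mid = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([slope, mid], lr=0.05)

def s_curve(x, slope, mid):
    """Logistic S-curve mapping [0, 1] -> [0, 1] (illustrative choice)."""
    return torch.sigmoid(slope * (x - mid))

for step in range(200):                        # "training" happens on the test image itself
    out = s_curve(img, slope, mid)
    # toy no-reference objective: mean brightness near 0.5, contrast not collapsed
    loss = (out.mean() - 0.5) ** 2 - 0.1 * out.std()
    opt.zero_grad()
    loss.backward()
    opt.step()

restored = s_curve(img, slope, mid).detach()   # apply the fitted curve once
print(float(img.mean()), float(restored.mean()))
```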

70 citations

Proceedings Article
26 Sep 2008
TL;DR: A novel approach to objective non-reference image fusion performance assessment that takes into account local measurements to estimate how well the important information in the source images is represented by the fused image.
Abstract: We present a novel approach to objective non-reference image fusion performance assessment. The Global-Local Image Quality Analysis (GLIQA) approach takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is an extended version of the Universal Image Quality Index (UIQI) and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factors. When the difference between a pixel in the input images and its correspondence in the fused image exceeds a threshold, making the local fusion quality difficult to assess, global measurements are applied to assist the judgment. The global measurement metric considers a set of properties of human Gestalt visual perception, such as image structure, texture, and spectral signature, for image quality assessment. Preliminary study results confirm that the performance scores of the proposed metrics correlate well with the subjective quality of the fused images.
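The local part of such a metric can be sketched as block-wise UIQI scores weighted by local saliency. The variance-based weighting below is an assumption for illustration, and the global Gestalt-based term is not reproduced.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two equally sized blocks."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (4 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def fusion_quality(src_a, src_b, fused, block=8):
    """Weighted average of block-wise UIQI scores against both source images,
    where each block's weight is its local variance (a simple saliency proxy)."""
    h, w = fused.shape
    scores, weights = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = src_a[i:i + block, j:j + block]
            b = src_b[i:i + block, j:j + block]
            f = fused[i:i + block, j:j + block]
            wa, wb = a.var(), b.var()
            scores.append(wa * uiqi(a, f) + wb * uiqi(b, f))
            weights.append(wa + wb)
    return float(np.sum(scores) / (np.sum(weights) + 1e-12))

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
fused = 0.5 * (a + b)
print(fusion_quality(a, b, fused))
```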

69 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 91% related
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Image processing: 229.9K papers, 3.5M citations, 90% related
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Support vector machine: 73.6K papers, 1.7M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1
2022    8
2021    130
2020    232
2019    321
2018    293