
Standard test image

About: Standard test image is a research topic. Over its lifetime, 5217 publications have been published within this topic, receiving 98486 citations.


Papers
Patent
24 Sep 2004
TL;DR: An image combining method is described that combines an image obtained by image sensing of real space with a computer-generated image and displays the combined image. However, the method is not suited to the automatic generation of images.
Abstract: An image combining method for combining an image obtained by image sensing real space with a computer-generated image and displaying the combined image. Mask area color information is determined based on a first real image including an object as the subject of mask area and a second real image not including the object, and the color information is registered. The mask area is extracted from the real image by using the registered mask area color information, and the real image and the computer-generated image are combined by using the mask area.
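To make the masking idea concrete, the following is a minimal Python/NumPy sketch of a color-keyed compositing step in the spirit of the abstract; the function names, the per-channel tolerance test, and the difference threshold are illustrative assumptions, not the patented procedure itself.

```python
import numpy as np

def register_mask_color(real_with_object, real_without_object, diff_threshold=30):
    """Estimate and register mask-area color statistics from two real images:
    one containing the object that defines the mask area and one without it.
    Pixels that differ strongly between the two frames are assumed to belong
    to the mask area (a hypothetical criterion for this sketch)."""
    diff = np.abs(real_with_object.astype(int) - real_without_object.astype(int)).sum(axis=2)
    mask_pixels = real_with_object[diff > diff_threshold]
    return mask_pixels.mean(axis=0), mask_pixels.std(axis=0) + 1e-6

def composite(real_image, cg_image, mask_color, mask_spread, k=2.5):
    """Extract the mask area from the real image by color similarity to the
    registered statistics, then let the computer-generated image show through
    wherever the mask area is detected."""
    dist = np.abs(real_image.astype(float) - mask_color) / mask_spread
    mask = (dist < k).all(axis=2)        # True where the registered mask color is found
    combined = real_image.copy()
    combined[mask] = cg_image[mask]
    return combined
```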

31 citations

Patent
Hu Shane Ching-Feng
05 Nov 1998
TL;DR: In this patent, a high-precision sub-pixel spatial alignment of digital images, one from a reference video signal and another from a corresponding test video signal, is performed using an iterative process that incorporates spatial resampling along with basic correlation and estimation of the fractional pixel shift.
Abstract: A high precision sub-pixel spatial alignment of digital images, one from a reference video signal and another from a corresponding test video signal, uses an iterative process and incorporates spatial resampling along with basic correlation and estimation of fractional pixel shift. The corresponding images from the reference and test video signals are captured and a test block is overlaid on them at the same locations to include texture from the images. FFTs are performed within the test block in each image, and the FFTs are cross-correlated to develop a peak value representing a shift position between the images. A curve is fitted to the peak and neighboring values to find the nearest integer pixel shift position. The test block is shifted in the test image by the integer pixel shift position, and the FFT in the test image is repeated and correlated with the FFT from the reference image. The curve fitting is repeated to obtain a fractional pixel shift position value that is combined with the integer pixel shift value to update the test block position again in the test image. The steps are repeated until an end condition is achieved, at which point the value of the pixel shift position for the test block in the test image relative to the reference image is used to align the two images with high precision sub-pixel accuracy.
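The abstract describes one pass of the alignment loop: cross-correlate the blocks via FFT, take the integer peak, then fit a curve for the fractional part. Below is a hedged NumPy sketch of that single pass; the parabolic peak interpolation and the centering convention are generic choices, not necessarily those used in the patent.

```python
import numpy as np

def parabolic_peak(y_m1, y_0, y_p1):
    """Fractional offset of a peak from its integer position, estimated by
    fitting a parabola through the peak value and its two neighbours."""
    denom = y_m1 - 2.0 * y_0 + y_p1
    return 0.0 if denom == 0 else 0.5 * (y_m1 - y_p1) / denom

def estimate_block_shift(ref_block, test_block):
    """Cross-correlate two equally sized blocks via FFT and return their
    relative (row, col) shift with sub-pixel precision."""
    corr = np.fft.ifft2(np.fft.fft2(ref_block) * np.conj(np.fft.fft2(test_block))).real
    corr = np.fft.fftshift(corr)                       # put zero shift at the centre
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array([r, c], dtype=float) - np.array(corr.shape) // 2
    # Refine each axis with a parabolic fit, provided the peak is not on the border.
    if 0 < r < corr.shape[0] - 1:
        shift[0] += parabolic_peak(corr[r - 1, c], corr[r, c], corr[r + 1, c])
    if 0 < c < corr.shape[1] - 1:
        shift[1] += parabolic_peak(corr[r, c - 1], corr[r, c], corr[r, c + 1])
    return shift                                       # integer plus fractional pixel shift
```

In the patented loop this estimate would then be used to re-position the test block, after which the correlation and curve fit are repeated until the end condition is met.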

31 citations

Patent
01 Oct 2004
TL;DR: In this patent, a method of matching image color and/or luminance characteristics in an image processing system is presented. The method is limited to two images and requires the user to identify a highlight, shadow, or overall region in both images.
Abstract: A method of matching image color and/or luminance characteristics in an image processing system. In order to match an input image with a reference image, a color transformation M is initialised (601). An output image is copied (602) from the input image. The following sequence of operations is then repeated: output and reference images are displayed on a system monitor. The user identifies (603) a highlight, shadow or overall region in both images. These regions are processed (604) to identify a difference (605). The difference is concatenated (606) onto transformation M. The output image is updated (607) by processing the input with M.
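As a rough illustration of the loop the abstract outlines, the sketch below reduces transformation M to a simple per-channel offset and the "difference" to a difference of region means; both are stand-ins for whatever color transformation the patent actually uses, and the boolean region masks are assumed to come from the user's selections.

```python
import numpy as np

def region_mean(image, region_mask):
    """Mean color of a user-identified region, given as a boolean mask."""
    return image[region_mask].mean(axis=0)

def match_iteratively(input_image, reference_image, region_pairs):
    """region_pairs: (mask_in_output, mask_in_reference) for each highlight,
    shadow or overall region the user identified in both images."""
    M_offset = np.zeros(3)                                # transformation M, initialised
    output = input_image.astype(float).copy()             # output copied from the input
    for out_mask, ref_mask in region_pairs:               # repeated sequence of operations
        diff = region_mean(reference_image, ref_mask) - region_mean(output, out_mask)
        M_offset += diff                                  # concatenate the difference onto M
        output = np.clip(input_image + M_offset, 0, 255)  # update the output with M
    return output, M_offset
```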

31 citations

Journal Article
TL;DR: A new modification of the GRAPPA (generalized autocalibrating partially parallel acquisitions) MR reconstruction algorithm, named "Robust GRAPPA," is developed and optimized.
Abstract: In parallel imaging, k-space data are acquired simultaneously from multiple coils to improve spatial resolution or temporal sampling (1,2). One can speed data acquisition by acquiring only a fraction of the k-space lines. With advanced parallel imaging reconstruction techniques (1–5), alias-free images can be reconstructed from incomplete k-space data. The generalized autocalibrating partially parallel acquisitions (GRAPPA) (3) is one such reconstruction algorithm. It is a technique that reconstructs the data in the frequency domain. The coil coefficients, or coil weights in Ref. (3) (hereafter called "coil coefficients" to avoid possible confusion with the "weights" in robust fitting) contain the coil sensitivity information. The basic idea in GRAPPA is to first calculate the coil coefficients by solving the linear equations constructed from the acquired ACS (auto calibration signal) lines, and then use these coil coefficients to reconstruct the missing k-space lines. Using a least-squares method to solve the overdetermined linear equations, GRAPPA considers every data point in the calibration region equally in the matrix inversion. The innovation in this study consists of applying robust estimation techniques that discount outliers in the estimation of coil coefficients. With improved estimation of coil coefficients the quality of the reconstructed image can be much improved. We implemented and tested a slow, iterative robust estimation technique as well as a fast ad hoc technique. In this and other MR reconstruction studies one can quickly generate thousands of images with only a few independent variables and test image datasets. Our laboratory has developed a perceptual difference model (PDM) suitable for objective, quantitative evaluation of image quality (6–9). In the next section we review regular GRAPPA and describe the Robust GRAPPA techniques. We then describe results of computer experiments where we have evaluated the new reconstruction algorithms with PDM as a function of total reduction factor, outlier ratio, center filling options, noise, and datasets. The Fast and Slow methods are compared to regular GRAPPA. The PDM methodology, including a comparison to human observers, is presented in the Appendix.
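To make the central idea concrete (robust rather than ordinary least-squares estimation of the coil coefficients), here is a small NumPy sketch that replaces a plain least-squares solve of an overdetermined system with iteratively reweighted least squares using Huber-type weights. The toy system A x ≈ b, the weight function, and the iteration count stand in for the actual ACS calibration equations and are not taken from the paper.

```python
import numpy as np

def robust_coefficients(A, b, n_iter=20, k=1.345):
    """Solve the overdetermined system A x ~= b while down-weighting outliers.
    Starts from the ordinary least-squares solution (as in regular GRAPPA) and
    then iteratively reweights each equation with a Huber-type weight."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)               # regular least-squares fit
    for _ in range(n_iter):
        r = b - A @ x                                       # residual of each equation
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust scale estimate
        w = np.minimum(1.0, k * scale / (np.abs(r) + 1e-12))  # Huber weights in (0, 1]
        w_sqrt = np.sqrt(w)
        x, *_ = np.linalg.lstsq(w_sqrt[:, None] * A, w_sqrt * b, rcond=None)
    return x

# Toy comparison with a gross outlier in one calibration equation.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=100)
b[7] += 50.0                                                # one corrupted data point
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x_robust = robust_coefficients(A, b)
print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_robust - x_true))
```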

31 citations

Journal Article
TL;DR: An image matching method based on affine transformation of local image areas is proposed that provides a significant improvement in robustness when matching images taken from different viewpoints in both 2D and 3D scenes.
Abstract: In recent years, many methods have been put forward to improve image matching between images taken from different viewpoints. However, these methods still do not achieve stable results, especially when the viewpoint varies widely. In this paper, an image matching method based on affine transformation of local image areas is proposed. First, local stable regions are extracted from the reference image and the test image and transformed to circular areas according to their second-order moments. Then, scale-invariant features are detected and matched in the transformed regions. Finally, we use an epipolar constraint based on the fundamental matrix to eliminate wrong corresponding pairs. The goal of our method is not to increase the invariance of the detector but to improve the final performance of the matching results. The experimental results demonstrate that, compared with traditional detectors, the proposed method provides a significant improvement in robustness when matching images taken from different viewpoints in both 2D and 3D scenes. Moreover, its efficiency is greatly improved compared with the affine scale-invariant feature transform (Affine-SIFT).
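For orientation, the OpenCV sketch below covers only the later stages of such a pipeline: SIFT matching with a ratio test, followed by RANSAC estimation of the fundamental matrix to enforce the epipolar constraint. The paper's main contribution, extracting local stable regions and normalising them to circular areas via second-order moments, is omitted here, and the image file names are placeholders.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe scale-invariant features in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(test, None)

# Ratio-test matching (Lowe's criterion) to form candidate correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Epipolar constraint: keep only matches consistent with a fundamental matrix.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers = [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
print(f"{len(inliers)} of {len(good)} matches survive the epipolar filter")
```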

31 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 91% related
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Image processing: 229.9K papers, 3.5M citations, 90% related
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Support vector machine: 73.6K papers, 1.7M citations, 90% related
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 1
2022: 8
2021: 130
2020: 232
2019: 321
2018: 293