
Showing papers on "Image gradient published in 2015"


Posted Content
TL;DR: In this paper, a multi-scale architecture, an adversarial training method, and an image gradient difference loss function were proposed to predict future frames from a video sequence, and the predictions were compared against published recurrent-network results on the UCF101 dataset.
Abstract: Learning to predict future images from a video sequence involves constructing an internal representation that accurately models the image evolution and, therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been studied extensively in computer vision, future frame prediction is rarely approached. Still, many vision applications could benefit from knowledge of the next frames of a video, which does not require the complexity of tracking every pixel's trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard mean squared error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.

1,175 citations
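The image gradient difference loss proposed above can be sketched compactly. Below is a minimal NumPy version, assuming single-channel frames, finite-difference gradients, and the choice alpha=1; the function name and these details are illustrative, not the paper's exact implementation:

```python
import numpy as np

def gradient_difference_loss(pred, target, alpha=1.0):
    """Gradient difference loss (GDL): penalizes the mismatch between the
    spatial gradients of a predicted frame and the ground truth, which
    encourages sharper predictions than plain MSE."""
    # Vertical and horizontal finite differences of each frame.
    dy_pred, dx_pred = np.abs(np.diff(pred, axis=0)), np.abs(np.diff(pred, axis=1))
    dy_true, dx_true = np.abs(np.diff(target, axis=0)), np.abs(np.diff(target, axis=1))
    # Sum of absolute gradient differences, raised to the power alpha.
    return (np.abs(dy_pred - dy_true) ** alpha).sum() + \
           (np.abs(dx_pred - dx_true) ** alpha).sum()

# Identical frames have zero gradient difference loss.
frame = np.arange(16.0).reshape(4, 4)
print(gradient_difference_loss(frame, frame))  # -> 0.0
```

In the paper this term is combined with the multi-scale adversarial objective; here it is shown in isolation.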


Posted Content
TL;DR: This paper replaces the fully-connected CRF with the domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map.
Abstract: Deep convolutional neural networks (CNNs) are the backbone of state-of-the-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference, and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.

219 citations
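For intuition, here is a hedged 1-D sketch of recursive domain transform filtering, where smoothing is suppressed wherever the reference edge signal changes sharply. This is a simplified single-channel version of standard DT filtering: the parameter names sigma_s and sigma_r follow the usual convention, and the paper's learned-edge-map variant is not modeled:

```python
import numpy as np

def domain_transform_filter_1d(signal, edge, sigma_s=10.0, sigma_r=0.3, iterations=1):
    """One forward/backward pass of a recursive domain-transform-style filter.
    Large gradients in the reference `edge` inflate the domain transform
    derivative, shrinking the recursion weight and preserving edges."""
    # Domain transform derivative: 1 + (sigma_s / sigma_r) * |edge'|
    dt = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(edge, prepend=edge[0]))
    a = np.exp(-np.sqrt(2.0) / sigma_s)
    weights = a ** dt  # near zero across strong reference edges
    out = np.asarray(signal, dtype=float).copy()
    for _ in range(iterations):
        # Left-to-right then right-to-left recursive smoothing passes.
        for i in range(1, len(out)):
            out[i] += weights[i] * (out[i - 1] - out[i])
        for i in range(len(out) - 2, -1, -1):
            out[i] += weights[i + 1] * (out[i + 1] - out[i])
    return out

step = np.concatenate([np.zeros(10), np.ones(10)])
print(domain_transform_filter_1d(step, step)[9:11])  # the step edge survives filtering
```

Using the image itself as the reference preserves its own edges; the paper's contribution is to supply a reference edge map learned from CNN features instead.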


Proceedings ArticleDOI
10 Dec 2015
TL;DR: A novel unified low-light image enhancement framework for both contrast enhancement and denoising is proposed and outperforms traditional methods in both subjective and objective assessments.
Abstract: In this paper, a novel unified low-light image enhancement framework for both contrast enhancement and denoising is proposed. First, the low-light image is segmented into superpixels, and the ratio between the local standard deviation and the local gradients is used to estimate the noise-texture level of each superpixel. The image is then inverted to be processed in the following steps. Based on the noise-texture level, a smooth base layer is adaptively extracted with the BM3D filter, and a detail layer is extracted from the first-order differential of the inverted image and smoothed with a structural filter. These two layers are adaptively combined to obtain a noise-free and detail-preserving image. Finally, an adaptive enhancement parameter is adopted in the dark channel prior dehazing process to enlarge contrast and prevent over- or under-enhancement. Experimental results demonstrate that the proposed method outperforms traditional methods in both subjective and objective assessments.

169 citations
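The noise-texture estimate in the first step, the ratio between the local standard deviation and the local gradients, can be sketched per region. This is an illustrative NumPy version whose exact normalization may differ from the paper's per-superpixel estimator:

```python
import numpy as np

def noise_texture_level(region):
    """Rough noise-texture indicator for a region (e.g. a superpixel):
    ratio of the local standard deviation to the mean local gradient
    magnitude. A perfectly flat region scores zero."""
    gy, gx = np.gradient(region.astype(float))
    mean_grad = np.hypot(gx, gy).mean()
    return region.std() / (mean_grad + 1e-8)  # epsilon guards flat regions

rng = np.random.default_rng(0)
flat = np.full((8, 8), 100.0)
noisy = flat + rng.normal(0.0, 5.0, size=(8, 8))
print(noise_texture_level(flat))   # -> 0.0
print(noise_texture_level(noisy))  # positive for a noisy region
```

In the paper this score steers how aggressively BM3D smooths each superpixel.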


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed approach can generate superior HR images with better visual quality, lower reconstruction error, and acceptable computation efficiency as compared with state-of-the-art works.
Abstract: Single image superresolution is a classic and active image processing problem, which aims to generate a high-resolution (HR) image from a low-resolution input image. Due to the severely under-determined nature of this problem, an effective image prior is necessary to make the problem solvable, and to improve the quality of generated images. In this paper, a novel image superresolution algorithm is proposed based on gradient profile sharpness (GPS). GPS is an edge sharpness metric, which is extracted from two gradient description models, i.e., a triangle model and a Gaussian mixture model for the description of different kinds of gradient profiles. Then, the transformation relationship of GPSs in different image resolutions is studied statistically, and the parameter of the relationship is estimated automatically. Based on the estimated GPS transformation relationship, two gradient profile transformation models are proposed for two profile description models, which can keep profile shape and profile gradient magnitude sum consistent during profile transformation. Finally, the target gradient field of HR image is generated from the transformed gradient profiles, which is added as the image prior in HR image reconstruction model. Extensive experiments are conducted to evaluate the proposed algorithm in subjective visual effect, objective quality, and computation time. The experimental results demonstrate that the proposed approach can generate superior HR images with better visual quality, lower reconstruction error, and acceptable computation efficiency as compared with state-of-the-art works.

127 citations


Proceedings ArticleDOI
10 Dec 2015
TL;DR: Experimental results on enhancing such images in different lighting conditions demonstrate that the proposed method performs better than other IFM-based enhancement methods.
Abstract: In this paper, we propose to use image blurriness to estimate the depth map for underwater image enhancement. The approach is based on the observation that objects farther from the camera are more blurry in underwater images. By combining image blurriness with the image formation model (IFM), we can estimate the distance between scene points and the camera and thereby recover and enhance underwater images. Experimental results on enhancing such images in different lighting conditions demonstrate that the proposed method performs better than other IFM-based enhancement methods.

117 citations


Journal ArticleDOI
TL;DR: A robust coupled dictionary learning method with locality coordinate constraints is introduced to reconstruct the corresponding high resolution depth map and incorporates an adaptively regularized shock filter to simultaneously reduce the jagged noise and sharpen the edges.
Abstract: This paper describes a new algorithm for depth image super resolution and denoising using a single depth image as input. A robust coupled dictionary learning method with locality coordinate constraints is introduced to reconstruct the corresponding high resolution depth map. The local constraints effectively reduce the prediction uncertainty and prevent the dictionary from over-fitting. We also incorporate an adaptively regularized shock filter to simultaneously reduce the jagged noise and sharpen the edges. Furthermore, a joint reconstruction and smoothing framework is proposed with an L0 gradient smooth constraint, making the reconstruction more robust to noise. Experimental results demonstrate the effectiveness of our proposed algorithm compared to previously reported methods.

103 citations


Proceedings ArticleDOI
30 Apr 2015
TL;DR: Comparisons of Roberts, Prewitt, and Sobel operator based edge detection techniques for real-time use on gray-scale images are presented.
Abstract: Image processing has applications in real-time embedded systems. Real-time image processing requires operating on large amounts of image pixel data within a stipulated time. Reconfigurable devices such as FPGAs can be programmed to process large image data, and the required processing time can be reduced by deploying parallelism and pipelining techniques in the algorithm. Edge detection is a basic tool used in many image processing applications. The Roberts, Prewitt, and Sobel edge detectors are gradient-based methods used to find edge pixels in an image. This paper presents a comparison of Roberts, Prewitt, and Sobel operator based edge detection techniques for real-time use. The edge detection algorithms are written in the hardware description language VHDL. The Xilinx ISE Design Suite 13 and MATLAB software platforms are used for simulation. This paper focuses on edge detection of gray-scale images.

97 citations
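The gradient operators compared above can be prototyped in software before committing them to VHDL. This NumPy sketch applies the Sobel kernels to a step edge; the Roberts and Prewitt kernels are defined alongside and can be swapped into the same convolution:

```python
import numpy as np

# The three classic gradient kernels compared in the paper (x-direction).
ROBERTS_X = np.array([[1, 0], [0, -1]])
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def convolve2d_valid(image, kernel):
    """Plain 'valid'-mode 2-D correlation, enough to illustrate the operators."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sobel_magnitude(image):
    """Gradient magnitude from the Sobel x/y kernels (y kernel = transpose)."""
    gx = convolve2d_valid(image, SOBEL_X)
    gy = convolve2d_valid(image, SOBEL_X.T)
    return np.hypot(gx, gy)

# A vertical step edge: the response concentrates on the edge columns.
img = np.zeros((5, 6)); img[:, 3:] = 1.0
print(sobel_magnitude(img))
print(convolve2d_valid(img, PREWITT_X))  # Prewitt responds on the same columns
```

On an FPGA, each kernel becomes a small multiply-accumulate window; the comparison in the paper concerns exactly these kernels' hardware cost and edge response.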


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper proposes the concept of mutual-structure, which refers to the structural information that is contained in both images and thus can be safely enhanced by joint filtering, and an untraditional objective function that can be efficiently optimized to yield mutual structure.
Abstract: Previous joint/guided filters directly transfer the structural information in the reference image to the target one. In this paper, we first analyze their major drawback -- that is, there may be completely different edges in the two images. Simply passing all patterns to the target could introduce significant errors. To address this issue, we propose the concept of mutual-structure, which refers to the structural information that is contained in both images and can thus be safely enhanced by joint filtering, together with an untraditional objective function that can be efficiently optimized to yield mutual structure. Our method achieves the necessary and important edge preservation, which greatly benefits depth completion, optical flow estimation, image enhancement, and stereo matching, to name a few.

93 citations


Journal ArticleDOI
TL;DR: This paper reinterprets the gradient thresholding model as a variational model with sparsity constraints and defines a unifying Retinex model in two similar, but more general, steps.
Abstract: In this paper, we provide a short review of Retinex and then present a unifying framework. The fundamental assumption of all Retinex models is that the observed image is a multiplication between the illumination and the true underlying reflectance of the object. Starting from Morel's 2010 PDE model, where illumination is supposed to vary smoothly and where the reflectance is thus recovered from a hard-thresholded Laplacian of the observed image in a Poisson equation, we define our unifying Retinex model in two similar, but more general, steps. We reinterpret the gradient thresholding model as variational models with sparsity constraints. First, we look for a filtered gradient that is the solution of an optimization problem consisting of two terms: a sparsity prior of the reflectance and a fidelity prior of the reflectance gradient to the observed image gradient. Second, since this filtered gradient almost certainly is not a consistent image gradient, we then fit an actual reflectance gradient to it, subje...

86 citations


Journal ArticleDOI
TL;DR: The proposed EDBTC not only shows good image compression capability but also offers an effective way to index images for content-based image retrieval systems.
Abstract: This paper presents a new approach to index color images using the features extracted from the error diffusion block truncation coding (EDBTC). The EDBTC produces two color quantizers and a bitmap image, which are further processed using vector quantization (VQ) to generate the image feature descriptor. Herein two features are introduced, namely, the color histogram feature (CHF) and the bit pattern histogram feature (BHF), to measure the similarity between a query image and the target image in the database. The CHF and BHF are computed from the VQ-indexed color quantizer and VQ-indexed bitmap image, respectively. The distance computed from the CHF and BHF can be utilized to measure the similarity between two images. As documented in the experimental results, the proposed indexing method outperforms former block truncation coding based image indexing methods and other existing image retrieval schemes on natural and textural data sets. Thus, the proposed EDBTC not only shows good image compression capability but also offers an effective way to index images for content-based image retrieval systems.

83 citations
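The similarity computation over the CHF and BHF can be sketched as histogram distances; the equal weighting and the L1 metric below are illustrative choices, not necessarily the paper's exact similarity measure:

```python
import numpy as np

def histogram_feature(indices, num_bins):
    """Normalized histogram of VQ indices, analogous to the paper's CHF
    (from the indexed color quantizers) and BHF (from the indexed bitmap)."""
    hist = np.bincount(np.asarray(indices), minlength=num_bins).astype(float)
    return hist / hist.sum()

def image_distance(chf_q, bhf_q, chf_t, bhf_t, w=0.5):
    """Weighted L1 distance over the two histogram features: small distance
    means the query and target images are considered similar."""
    d_color = np.abs(chf_q - chf_t).sum()    # color distribution mismatch
    d_bitmap = np.abs(bhf_q - bhf_t).sum()   # edge/texture pattern mismatch
    return w * d_color + (1.0 - w) * d_bitmap

chf = histogram_feature([0, 0, 1, 2], num_bins=4)
bhf = histogram_feature([1, 1, 3, 3], num_bins=4)
print(image_distance(chf, bhf, chf, bhf))  # identical images -> 0.0
```

Retrieval then amounts to ranking database images by this distance from the query's features.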


Proceedings ArticleDOI
Xiaohu Lu1, Jian Yao1, Kai Li1, Li Li1
10 Dec 2015
TL;DR: Experimental results illustrate that the proposed line segment detector, named CannyLines, can extract more meaningful line segments than two popular line segment detectors, LSD and EDLines, especially on man-made scenes.
Abstract: In this paper, we present a robust line segment detection algorithm to efficiently detect line segments from an input image. Firstly, a parameter-free Canny operator, named CannyPF, is proposed to robustly extract the edge map from an input image by adaptively setting the low and high thresholds of the traditional Canny operator. Secondly, efficient edge linking and splitting techniques are proposed to collect collinear point clusters directly from the edge map, which are used to fit the initial line segments with the least-squares fitting method. Thirdly, longer and more complete line segments are produced via efficient extending and merging. Finally, all the detected line segments are validated according to the Helmholtz principle [1, 2], in which both the gradient orientation and magnitude information are considered. Experimental results on a set of representative images illustrate that our proposed line segment detector, named CannyLines, can extract more meaningful line segments than two popular line segment detectors, LSD [3] and EDLines [4], especially on man-made scenes.
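The least-squares fitting step for a collinear point cluster can be sketched as a total-least-squares line fit via the covariance eigenvectors; this is an illustrative version, and the paper's exact fitting procedure may differ:

```python
import numpy as np

def fit_line_segment(points):
    """Total-least-squares line fit to a cluster of (nearly) collinear edge
    points: returns a point on the line (the centroid) and the unit line
    direction, taken as the principal eigenvector of the covariance."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)          # 2x2 covariance of the cluster
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    direction = eigvecs[:, np.argmax(eigvals)]  # dominant spread direction
    return centroid, direction

# Points on the line y = 2x: the fitted direction is parallel to (1, 2).
pts = [(0, 0), (1, 2), (2, 4), (3, 6)]
c, d = fit_line_segment(pts)
print(c, d)
```

Extending and merging then operate on these fitted segments, with validation by the Helmholtz principle.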

Proceedings ArticleDOI
01 Sep 2015
TL;DR: The novel patch decomposition allows RGB color channels to be handled jointly, producing fused images with more vivid color appearance; extensive experiments demonstrate the superiority of the proposed algorithm both qualitatively and quantitatively.
Abstract: We propose a patch-wise approach for multi-exposure image fusion (MEF). A key step in our approach is to decompose each color image patch into three conceptually independent components: signal strength, signal structure and mean intensity. Upon processing the three components separately based on patch strength and exposedness measures, we uniquely reconstruct a color image patch and place it back into the fused image. Unlike most pixel-wise MEF methods in the literature, the proposed algorithm does not require significant pre/postprocessing steps to improve visual quality or to reduce spatial artifacts. Moreover, the novel patch decomposition allows us to handle RGB color channels jointly and thus produces fused images with more vivid color appearances. Extensive experiments demonstrate the superiority of the proposed algorithm both qualitatively and quantitatively.
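The patch decomposition at the core of the method can be sketched directly: a vectorized patch splits into mean intensity, signal strength, and a unit-norm signal structure, and the decomposition is exactly invertible (a minimal sketch; the paper's exposedness-based processing of the three components is omitted):

```python
import numpy as np

def decompose_patch(patch):
    """Decompose a (vectorized) patch into the three conceptually independent
    components used by the paper: mean intensity, signal strength, and
    signal structure (a unit-norm direction)."""
    x = patch.ravel().astype(float)
    mean = x.mean()                            # mean intensity
    residual = x - mean
    strength = np.linalg.norm(residual)        # signal strength
    structure = residual / (strength + 1e-12)  # unit-norm signal structure
    return mean, strength, structure

def recompose_patch(mean, strength, structure, shape):
    """Invert the decomposition: patch = strength * structure + mean."""
    return (strength * structure + mean).reshape(shape)

patch = np.array([[10.0, 20.0], [30.0, 40.0]])
m, s, d = decompose_patch(patch)
rebuilt = recompose_patch(m, s, d, patch.shape)
print(m)  # -> 25.0
```

Fusion processes each component across exposures separately (e.g. picking a desired strength and blending structures) before recomposing the output patch.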

Journal ArticleDOI
TL;DR: It is demonstrated that QSobel can extract edges with a computational complexity of O(n²) for an FRQI quantum image of size 2^n × 2^n, which would resolve the real-time problem of image edge extraction.
Abstract: Edge extraction is an indispensable task in digital image processing. With the sharp increase in image data, real-time performance has become a limitation of state-of-the-art edge extraction algorithms. In this paper, QSobel, a novel quantum image edge extraction algorithm, is designed based on the flexible representation of quantum images (FRQI) and the famous Sobel edge extraction algorithm. Because FRQI utilizes the superposition state of a qubit sequence to store all the pixels of an image, QSobel can calculate the Sobel gradients of the image intensity of all the pixels simultaneously. This is the main reason that QSobel can extract edges so quickly. By designing and analyzing the quantum circuit of QSobel, we demonstrate that QSobel can extract edges with a computational complexity of O(n²) for an FRQI quantum image of size 2^n × 2^n. Compared with all the classical edge extraction algorithms and the existing quantum edge extraction algorithms, QSobel can utilize quantum parallel computation to reach a significant and exponential speedup. Hence, QSobel would resolve the real-time problem of image edge extraction.

Journal ArticleDOI
TL;DR: This work proposes a variational model for image reconstruction that employs a regularization functional adapted to the local geometry of the image by means of its structure tensor, and extends naturally to nonlocal regularization, where it exploits the local self-similarity of natural images to improve nonlocal TV and diffusion operators.
Abstract: Natural images exhibit geometric structures that are informative of the properties of the underlying scene. Modern image processing algorithms respect such characteristics by employing regularizers that capture the statistics of natural images. For instance, total variation (TV) respects the highly kurtotic distribution of the pointwise gradient by allowing for large magnitude outliers. However, the gradient magnitude alone does not capture the directionality and scale of local structures in natural images. The structure tensor provides a more meaningful description of gradient information as it describes both the size and orientation of the image gradients in a neighborhood of each point. Based on this observation, we propose a variational model for image reconstruction that employs a regularization functional adapted to the local geometry of the image by means of its structure tensor. Our method alternates two minimization steps: 1) robust estimation of the structure tensor as a semidefinite program and 2) reconstruction of the image with an adaptive regularizer defined from this tensor. This two-step procedure allows us to extend anisotropic diffusion into the convex setting and develop robust, efficient, and easy-to-code algorithms for image denoising, deblurring, and compressed sensing. Our method extends naturally to nonlocal regularization, where it exploits the local self-similarity of natural images to improve nonlocal TV and diffusion operators. Our experiments show a consistent accuracy improvement over classic regularization.
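The structure tensor that drives the regularizer can be sketched per pixel; this illustrative version omits the neighborhood smoothing and the robust semidefinite-program estimation described in the paper:

```python
import numpy as np

def structure_tensor(image):
    """Per-pixel structure tensor entries (Jxx, Jxy, Jyy) built from the
    image gradients. Averaging over a neighborhood (omitted here for
    brevity) is what makes the tensor describe size and orientation
    of local structures rather than a single gradient."""
    gy, gx = np.gradient(image.astype(float))
    return gx * gx, gx * gy, gy * gy

def dominant_orientation(jxx, jxy, jyy):
    """Local gradient orientation (radians) from the tensor entries, via
    the standard closed form 0.5 * arctan2(2*Jxy, Jxx - Jyy)."""
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# Vertical step edge: gradients point along x, so Jxx dominates Jyy.
img = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (4, 1))
jxx, jxy, jyy = structure_tensor(img)
print(dominant_orientation(jxx, jxy, jyy)[0, 1])  # -> 0.0 (edge normal along x)
```

An adaptive regularizer can then penalize variation differently along and across the dominant orientation at each point.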

Proceedings ArticleDOI
21 Aug 2015
TL;DR: This paper combines global and local edge detection based on an improved Canny operator to extract edges, which extracts image edges effectively and has strong anti-noise ability.
Abstract: Single edge detection methods can miss important edges with weak gradient changes. This paper therefore adopts a method combining global and local edge detection to extract edges. The global edge detection obtains the overall edge, using an adaptive smoothing filter algorithm based on the Canny operator. Compared with the edge detection results of the standard Canny operator and the Sobel operator, the edges from the improved Canny operator are the most complete and rich and do not contain false edges. For edges that the global detection fails to find, a local area detection method is selected for edge extraction. The local edge detection, which uses a distance-weighted averaging method based on k-means clustering, can effectively overcome the impact of outliers on the clustering. A complete skull image edge is obtained through the edge detection method that combines global and local detection. Compared with the Canny edge detection method, this algorithm can extract image edges effectively and has strong anti-noise ability.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work partitions images into convex polygons by building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by a spatial point process distributed over the image gradient.
Abstract: The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by a spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.

Journal ArticleDOI
TL;DR: A novel SR method is proposed by exploiting both the directional group sparsity of the image gradients and the directional features in similarity weight estimation to achieve higher quality SR reconstruction than the state-of-the-art algorithms.
Abstract: Single image superresolution (SR) aims to construct a high-resolution version from a single low-resolution (LR) image. The SR reconstruction is challenging because of the missing details in the given LR image. Thus, it is critical to explore and exploit effective prior knowledge for boosting the reconstruction performance. In this paper, we propose a novel SR method by exploiting both the directional group sparsity of the image gradients and the directional features in similarity weight estimation. The proposed SR approach is based on two observations: 1) most of the sharp edges are oriented in a limited number of directions and 2) an image pixel can be estimated by the weighted averaging of its neighbors. In consideration of these observations, we apply the curvelet transform to extract directional features which are then used for region selection and weight estimation. A combined total variation regularizer is presented which assumes that the gradients in natural images have a straightforward group sparsity structure. In addition, a directional nonlocal means regularization term takes pixel values and directional information into account to suppress unwanted artifacts. By assembling the designed regularization terms, we solve the SR problem of an energy function with minimal reconstruction error by applying a framework of templates for first-order conic solvers. The thorough quantitative and qualitative results in terms of peak signal-to-noise ratio, structural similarity, information fidelity criterion, and preference matrix demonstrate that the proposed approach achieves higher quality SR reconstruction than the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A novel imaging system that can simultaneously capture the red, green, blue (RGB) and the NIR images with different exposure times and reconstruct a latent color image sequence using an adaptive smoothness condition based on gradient and color correlations is proposed.
Abstract: We propose a novel method to synthesize a noise- and blur-free color image sequence using near-infrared (NIR) images captured in extremely low light conditions. In extremely low light scenes, heavy noise and motion blur are simultaneously produced in the captured images. Our goal is to enhance the color image sequence of an extremely low light scene. In this paper, we augment the imaging system as well as enhancing the image synthesis scheme. We propose a novel imaging system that can simultaneously capture the red, green, blue (RGB) and the NIR images with different exposure times. An RGB image is taken with a long exposure time to acquire sufficient color information and mitigate the effects of heavy noise. By contrast, the NIR images are captured with a short exposure time to measure the structure of the scenes. Our imaging system using different exposure times allows us to ensure sufficient information to reconstruct a clear color image sequence. Using the captured image pairs, we reconstruct a latent color image sequence using an adaptive smoothness condition based on gradient and color correlations. Our experiments using both synthetic images and real image sequences show that our method outperforms other state-of-the-art methods.

Journal ArticleDOI
TL;DR: This work proposes a two-image restoration framework considering input images from different fields, for example, one noisy color image and one dark-flashed near-infrared image, and introduces a novel scale map as a competent representation to explicitly model derivative-level confidence.
Abstract: Color, infrared and flash images captured in different fields can be employed to effectively eliminate noise and other visual artifacts. We propose a two-image restoration framework considering input images from different fields, for example, one noisy color image and one dark-flashed near-infrared image. The major issue in such a framework is to handle all structure divergence and find commonly usable edges and smooth transitions for visually plausible image reconstruction. We introduce a novel scale map as a competent representation to explicitly model derivative-level confidence and propose new functions and a numerical solver to effectively infer it following our important structural observations. Multispectral shadow detection is also used to make our system more robust. Our method is general and shows a principled way to solve multispectral restoration problems.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A novel double weighted average image filter (SGF) based on the segment graph, which enables the filter to smooth out high-contrast details and textures while preserving major image structures very well and has an O(N) time complexity for both gray-scale and high dimensional images.
Abstract: In this paper, we design a new edge-aware structure, named segment graph, to represent the image and we further develop a novel double weighted average image filter (SGF) based on the segment graph. In our SGF, we use the tree distance on the segment graph to define the internal weight function of the filtering kernel, which enables the filter to smooth out high-contrast details and textures while preserving major image structures very well. While for the external weight function, we introduce a user specified smoothing window to balance the smoothing effects from each node of the segment graph. Moreover, we also set a threshold to adjust the edge-preserving performance. These advantages make the SGF more flexible in various applications and overcome the "halo" and "leak" problems appearing in most of the state-of-the-art approaches. Finally and importantly, we develop a linear algorithm for the implementation of our SGF, which has an O(N) time complexity for both gray-scale and high dimensional images, regardless of the kernel size and the intensity range. Typically, as one of the fastest edge-preserving filters, our CPU implementation achieves 0.15s per megapixel when performing filtering for 3-channel color images. The strength of the proposed filter is demonstrated by various applications, including stereo matching, optical flow, joint depth map upsampling, edge-preserving smoothing, edges detection, image abstraction and texture editing.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm reduces color distortion in the detail-enhanced image, especially around sharp edges, which is better than an existing L0 norm based algorithm.
Abstract: Detail enhancement is required by many problems in the fields of image processing and computational photography. Existing detail enhancement algorithms first decompose a source image into a base layer and a detail layer via an edge-preserving smoothing algorithm, and then amplify the detail layer to produce a detail-enhanced image. In this letter, we propose a new L0 norm based detail enhancement algorithm which generates the detail-enhanced image directly. The proposed algorithm preserves sharp edges better than an existing L0 norm based algorithm. Experimental results show that the proposed algorithm reduces color distortion in the detail-enhanced image, especially around sharp edges.
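The conventional pipeline the paper improves on, decompose into base and detail layers, then amplify the detail, can be sketched as follows. The box blur here is a deliberately crude stand-in for an edge-preserving (e.g. L0-based) smoother, which is exactly the substitution that causes the halo artifacts the paper's direct formulation avoids:

```python
import numpy as np

def box_blur(image, radius=1):
    """Simple edge-unaware smoother standing in for the edge-preserving
    filter used in real detail-enhancement pipelines (illustration only)."""
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros_like(image, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def enhance_details(image, boost=2.0):
    """Classic base + boost * detail scheme: smooth to get the base layer,
    amplify the residual detail layer, and recombine."""
    base = box_blur(image)
    detail = image - base
    return base + boost * detail

# A single bright pixel: its residual detail is amplified.
img = np.array([[0.0, 0.0, 0.0], [0.0, 9.0, 0.0], [0.0, 0.0, 0.0]])
print(enhance_details(img))
```

The paper's contribution is to produce the enhanced image directly from an L0-norm objective instead of this two-stage decompose-amplify scheme.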

Journal ArticleDOI
TL;DR: A family of nonlocal energy functionals involving the standard image gradient is introduced; these regularizers employ a novel nonlocal version of the structure tensor as their regularization operator and provide a robust measure of image variation.
Abstract: We present a nonlocal regularization framework that we apply to inverse imaging problems. As opposed to existing nonlocal regularization methods that rely on the graph gradient as the regularization operator, we introduce a family of nonlocal energy functionals that involves the standard image gradient. Our motivation for designing these functionals is to exploit at the same time two important properties inherent in natural images, namely the local structural image regularity and the nonlocal image self-similarity. To this end, our regularizers employ as their regularization operator a novel nonlocal version of the structure tensor. This operator performs a nonlocal weighted average of the image gradients computed at every image location and, thus, is able to provide a robust measure of image variation. Furthermore, we show a connection of the proposed regularizers to the total variation semi-norm and prove convexity. The convexity property allows us to employ powerful tools from convex optimization to design an efficient minimization algorithm. Our algorithm is based on a splitting variable strategy, which leads to an augmented Lagrangian formulation. To solve the corresponding optimization problem, we employ the alternating-direction methods of multipliers. Finally, we present extensive experiments on several inverse imaging problems, where we compare our regularizers with other competing local and nonlocal regularization approaches. Our results are shown to be systematically superior, both quantitatively and visually.

Journal ArticleDOI
TL;DR: Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics and is also the most robust to failures.
Abstract: Image mosaicking applications require both geometrical and photometrical registrations between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting the photometrical disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. Local joint image histograms of each region are modeled as collections of truncated Gaussians using a maximum likelihood estimation procedure. Then, local color palette mapping functions are computed using these sets of Gaussians. The color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state of the art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics and is also the most robust to failures.

Journal ArticleDOI
TL;DR: A new approach to derive the image feature descriptor from the dot-diffused block truncation coding (DDBTC) compressed data stream is presented, and the proposed scheme can be considered as an effective candidate for real-time image retrieval applications.
Abstract: This paper presents a new approach to derive the image feature descriptor from the dot-diffused block truncation coding (DDBTC) compressed data stream. The image feature descriptor is simply constructed from two DDBTC representative color quantizers and its corresponding bitmap image. The color histogram feature (CHF) derived from two color quantizers represents the color distribution and image contrast, while the bit pattern feature (BPF) constructed from the bitmap image characterizes the image edges and textural information. The similarity between two images can be easily measured from their CHF and BPF values using a specific distance metric computation. Experimental results demonstrate the superiority of the proposed feature descriptor compared with former schemes in the image retrieval task on natural and textural images. The DDBTC method compresses an image efficiently, and at the same time, its corresponding compressed data stream can provide an effective feature descriptor for performing image retrieval and classification. Consequently, the proposed scheme can be considered an effective candidate for real-time image retrieval applications.
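A rough sketch of the descriptor idea, using plain block truncation coding instead of the paper's dot-diffused variant; the histogram binning, 2x2 block size, and equal CHF/BPF weighting here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def btc_descriptor(img, bins=8):
    """Simplified BTC-style descriptor: two representative quantizers
    (low/high means around the image mean) binned into a colour histogram
    (CHF), plus a histogram of 2x2 bitmap patterns (BPF)."""
    img = img.astype(float)
    bitmap = img > img.mean()
    lo = img[~bitmap].mean() if (~bitmap).any() else img.mean()
    hi = img[bitmap].mean() if bitmap.any() else img.mean()
    chf, _ = np.histogram([lo, hi], bins=bins, range=(0, 256))
    # BPF: encode each 2x2 bitmap block as a 4-bit pattern code (0..15)
    b = bitmap[: bitmap.shape[0] // 2 * 2, : bitmap.shape[1] // 2 * 2].astype(int)
    codes = b[0::2, 0::2] + 2 * b[0::2, 1::2] + 4 * b[1::2, 0::2] + 8 * b[1::2, 1::2]
    bpf = np.bincount(codes.ravel(), minlength=16)
    return chf / max(chf.sum(), 1), bpf / max(bpf.sum(), 1)

def descriptor_distance(d1, d2):
    """Equally weighted L1 distance on the CHF and BPF components."""
    return np.abs(d1[0] - d2[0]).sum() + np.abs(d1[1] - d2[1]).sum()
```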

Journal ArticleDOI
TL;DR: Controlled experiments showed that the LoG edge detection algorithm outperforms other edge detection algorithms for texture analysis.
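For reference, a minimal LoG edge detector of the kind evaluated above: convolve with a standard 5x5 LoG kernel and mark zero crossings of the response. The kernel and threshold are the usual textbook choices, not values taken from the paper.

```python
import numpy as np

# A common discrete 5x5 Laplacian-of-Gaussian kernel (entries sum to zero)
LOG5 = np.array([[0, 0, -1, 0, 0],
                 [0, -1, -2, -1, 0],
                 [-1, -2, 16, -2, -1],
                 [0, -1, -2, -1, 0],
                 [0, 0, -1, 0, 0]], dtype=float)

def log_edges(img, thresh=1e-6):
    """LoG edge detection: filter with the 5x5 kernel, then flag pixels
    where the response changes sign between horizontal or vertical
    neighbours (zero crossings)."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, 2, mode="edge")
    resp = np.zeros_like(img)
    for i in range(5):  # manual convolution to keep the sketch dependency-free
        for j in range(5):
            resp += LOG5[i, j] * pad[i:i + h, j:j + w]
    edges = np.zeros((h, w), dtype=bool)
    edges[:, :-1] |= resp[:, :-1] * resp[:, 1:] < -thresh
    edges[:-1, :] |= resp[:-1, :] * resp[1:, :] < -thresh
    return edges
```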

Journal ArticleDOI
TL;DR: The proposed blind image quality evaluator requires no training on distorted images, pristine images, or subjective human scores to predict perceptual quality; instead, it uses the intrinsic global change of the query image across scales.
Abstract: A new approach to blind image quality assessment (BIQA), requiring no training, is proposed in this paper. The approach, named the blind image quality evaluator based on scales, works by evaluating the global difference between the query image analyzed at different scales and the query image at its original resolution. The approach is based on the ability of natural images to exhibit redundant information over various scales. A distorted image is considered a deviation from the natural image, bereft of the redundancy present in the original. The similarity of the original-resolution image to its down-scaled version decreases as the image is distorted more. Therefore, the dissimilarities of an image with its low-resolution versions are accumulated in the proposed method. We dissolve the query image into its scale-space and measure the global dissimilarity with the co-occurrence histograms of the original and its scaled images. These scaled images are the low pass versions of the original image. The dissimilarity, called the low pass error, is calculated by comparing the low pass versions across scales with the original image. The high pass versions of the image at different scales are obtained by wavelet decomposition, and their dissimilarity from the original image is also calculated. This dissimilarity, called the high pass error, is computed with the variance and gradient histograms and weighted by the contrast sensitivity function to make it perceptually effective. These two kinds of dissimilarities are combined to derive the quality score of the query image. This method requires no training on distorted images, pristine images, or subjective human scores to predict perceptual quality; it uses only the intrinsic global change of the query image across scales. The performance of the proposed method is evaluated across six publicly available databases and found to be competitive with state-of-the-art techniques.
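A toy analogue of the "low pass error" described above: accumulate histogram dissimilarity between an image and its successively downscaled versions. The paper uses co-occurrence histograms plus wavelet-based high pass errors weighted by the contrast sensitivity function; this sketch replaces all of that with plain intensity histograms.

```python
import numpy as np

def downscale2(img):
    """2x2 block-average downscaling (a simple low pass version)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    b = img[:h, :w]
    return (b[0::2, 0::2] + b[0::2, 1::2] + b[1::2, 0::2] + b[1::2, 1::2]) / 4.0

def multiscale_dissimilarity(img, n_scales=3, bins=16):
    """Sum the L1 distance between the intensity histogram of the original
    image and those of its downscaled versions -- a much-simplified
    stand-in for the paper's co-occurrence-histogram low pass error."""
    img = img.astype(float)
    ref, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    score, cur = 0.0, img
    for _ in range(n_scales):
        cur = downscale2(cur)
        h, _ = np.histogram(cur, bins=bins, range=(0, 256), density=True)
        score += np.abs(ref - h).sum()
    return score
```

The intended behaviour mirrors the abstract: an image whose statistics are stable across scales scores low, and distortions that disrupt that redundancy raise the score.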

Patent
16 Jul 2015
TL;DR: In this paper, a software suite for optimizing the de-warping of wide angle lens images includes a calibration process utilizing a calibration circle to prepare raw image data, which is then used to map a dewarped image space for processed image data.
Abstract: A software suite for optimizing the de-warping of wide angle lens images includes a calibration process utilizing a calibration circle to prepare raw image data. The calibration circle is used to map the raw image data about a warped image space, which is then used to map a de-warped image space for processed image data. The processed image data is generated from the raw image data by copying color values from warped pixel coordinates of the warped image space to de-warped pixel coordinates of the de-warped image space. The processed image data is displayed as a single perspective image and a panoramic image in a click-to-position virtual mapping interface alongside the raw image data. A user can make an area of interest selection by clicking the raw image data, the single perspective image, or the panoramic image in order to change the point of focus within the single perspective image.
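The copy-from-warped-coordinates step can be illustrated with a nearest-neighbour polar-to-panorama mapping, assuming the calibration circle is given as a centre and radius. The patent's actual mapping, interpolation, and interface are not reproduced here; all names and parameters are illustrative.

```python
import numpy as np

def dewarp_to_panorama(raw, cx, cy, radius, out_h=32, out_w=128):
    """Map a circular (fisheye-style) raw image to a panoramic strip by
    copying colour values from warped polar coordinates to de-warped
    pixel coordinates (nearest neighbour, no interpolation)."""
    out = np.zeros((out_h, out_w) + raw.shape[2:], dtype=raw.dtype)
    for v in range(out_h):
        r = radius * (1.0 - v / out_h)  # top row of the strip = circle edge
        for u in range(out_w):
            theta = 2 * np.pi * u / out_w
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < raw.shape[0] and 0 <= x < raw.shape[1]:
                out[v, u] = raw[y, x]
    return out
```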

Journal ArticleDOI
Wei Yu1, Li Zeng1
09 Jul 2015-PLOS ONE
TL;DR: An image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT is developed; results indicate that it outperforms classical reconstruction algorithms in simultaneously suppressing streak artifacts and the gradually changing artifacts near edges.
Abstract: In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure imposed on patients, reconstructing high quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality to reduce the radiation dose to patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is actually an ill-posed inverse problem. Images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changing artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changing artifacts near edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the framework of the developed reconstruction model. We transform the optimization problem into a few optimization sub-problems and then solve these sub-problems by alternating iteration. Numerical experiments are performed to validate the efficiency and feasibility of the developed algorithm. Statistical analysis of the performance evaluations, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), shows significant statistical differences between different algorithms over different scanning angular ranges (p<0.0001). The experimental results also indicate that the developed algorithm outperforms classical reconstruction algorithms in simultaneously suppressing the streak artifacts and the gradually changing artifacts near edges.
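The gradient sub-problem in such alternating schemes has a closed-form hard-threshold solution. The sketch below shows that step together with a toy 1D alternating loop; the paper works on 2D CT reconstruction with a projection data-fidelity term, which this signal-smoothing sketch omits, and all parameter values here are illustrative.

```python
import numpy as np

def l0_gradient_step(grad, lam, beta):
    """Solve min_h ||h - grad||^2 + (lam/beta)*||h||_0: keep each gradient
    entry only where its square exceeds lam/beta, zero it elsewhere."""
    mask = grad ** 2 > lam / beta
    return np.where(mask, grad, 0.0)

def l0_smooth_1d(signal, lam=0.02, beta=1.0, kappa=2.0, outer=8, inner=200):
    """Toy 1D l0 gradient minimisation: alternate the hard threshold above
    with a quadratic update of the signal, solved here by plain gradient
    descent (practical implementations use an FFT-based direct solve)."""
    u = signal.astype(float).copy()
    for _ in range(outer):
        h = l0_gradient_step(np.diff(u), lam, beta)
        tau = 0.5 / (1.0 + 4.0 * beta)  # safe step for this quadratic
        for _ in range(inner):
            r = np.diff(u) - h
            grad = 2.0 * (u - signal)   # d/du of ||u - signal||^2
            grad[:-1] -= 2.0 * beta * r  # d/du of beta*||diff(u) - h||^2
            grad[1:] += 2.0 * beta * r
            u -= tau * grad
        beta *= kappa  # continuation: tighten the gradient constraint
    return u
```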

Journal ArticleDOI
TL;DR: In this article, the standard edge detection methods widely used in image processing, such as Prewitt, Laplacian of Gaussian, Canny, Sobel, and Roberts, are discussed, along with a newer approach based on fuzzy logic.
Abstract: The first step in an image recognition system is detecting the edges in a digital image. Edge detection is an important part of object observation in image processing, and this paper aims to give a good understanding of edge detection algorithms. An edge is useful because it marks boundaries and separates a plane, object, or appearance from other things; for pattern recognition, it is also an intermediate step in processing digital images. An edge consists of pixels whose gray-tone intensities differ from those of their neighbouring pixels. This paper introduces the standard edge detection methods widely used in image processing, such as Prewitt, Laplacian of Gaussian, Canny, Sobel, and Roberts, and also discusses a newer approach known as fuzzy logic.

Proceedings ArticleDOI
21 Mar 2015
TL;DR: An attempt is made to study the performance of the most commonly used edge detection techniques for image segmentation; these techniques are compared with one another to choose the best technique for segmenting an image by edge detection.
Abstract: Edge detection is a type of image segmentation technique which determines the presence of an edge or line in an image and outlines it in an appropriate way. The main purpose of edge detection is to simplify the image data in order to minimize the amount of data to be processed. Generally, an edge is defined as the boundary pixels that connect two separate regions with changing image amplitude attributes, such as different constant luminance and tristimulus values, in an image. In this paper, we present methods for edge segmentation of images using five techniques: the Sobel operator, Prewitt, Laplacian, Canny, and Roberts techniques. These techniques are compared with one another to choose the best technique for segmenting an image by edge detection, and they are applied to one image to establish baselines for segmentation or edge detection. We study the performance of the most commonly used edge detection techniques for image segmentation and carry out a comparison experimentally using MATLAB. We use the edges to find correspondences between objects.
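As a concrete instance of one of the five techniques compared above, here is a minimal Sobel gradient-magnitude detector. It is a dependency-free NumPy sketch, not the MATLAB implementation the paper uses.

```python
import numpy as np

# Standard 3x3 Sobel kernel for the horizontal derivative;
# its transpose gives the vertical derivative.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def sobel_magnitude(img):
    """Edge strength as the magnitude of the Sobel gradient,
    computed with a manual 3x3 convolution and edge padding."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            gx += SOBEL_X[i, j] * pad[i:i + h, j:j + w]
            gy += SOBEL_X.T[i, j] * pad[i:i + h, j:j + w]
    return np.hypot(gx, gy)
```

Thresholding the returned magnitude map gives a binary edge image, which is how operator-based detectors like Sobel, Prewitt, and Roberts are typically compared.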