
Showing papers on "Pixel published in 2012"


Journal ArticleDOI
TL;DR: The goal is the development of a cloud and cloud shadow detection algorithm suitable for routine use with Landsat images, with accuracy as high as 96.4%.

1,620 citations


Proceedings Article
03 Dec 2012
TL;DR: This work addresses a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy images, using a special type of deep artificial neural network as a pixel classifier to segment biological neuron membranes.
Abstract: We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or non-membrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.
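The data flow the abstract describes — raw pixel window in, calibrated per-class probability out — can be sketched in miniature with numpy. This is not the authors' trained network (which stacks several convolutional and max-pooling layers); it is a single randomly initialized conv + ReLU + max-pool + softmax stage, only to make the window-to-probability pipeline concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """2D valid cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify_window(window, kernels, w_out, b_out):
    """Class probabilities (membrane, non-membrane) for the window's centre
    pixel: conv -> ReLU -> 2x2 max-pool -> fully connected -> softmax."""
    maps = [np.maximum(conv2d_valid(window, k), 0.0) for k in kernels]
    pooled = []
    for m in maps:
        h, w = m.shape
        m = m[:h - h % 2, :w - w % 2]          # crop to even size for pooling
        pooled.append(m.reshape(m.shape[0] // 2, 2,
                                m.shape[1] // 2, 2).max(axis=(1, 3)))
    feats = np.concatenate([p.ravel() for p in pooled])
    logits = w_out @ feats + b_out
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()
```

With a 15×15 window and 4×4 kernels, each of the three feature maps pools to 6×6, so the output layer sees 108 features; the softmax guarantees a valid probability distribution over the two classes.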

1,359 citations


Journal ArticleDOI
TL;DR: It is demonstrated with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST descriptor methods.
Abstract: We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.
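The image signature itself has a one-line definition — the sign of the DCT of the image — and the saliency map is a smoothed, squared inverse DCT of that sign. A minimal numpy sketch (an explicit orthonormal DCT-II matrix, and a box blur standing in for the paper's Gaussian smoothing):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: D @ x is the 1D DCT of x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)
    return D

def image_signature_saliency(img, blur=3):
    """Saliency from the image signature: smoothed square of the inverse
    DCT of the sign of the image's DCT coefficients."""
    h, w = img.shape
    Dh, Dw = dct_matrix(h), dct_matrix(w)
    signature = np.sign(Dh @ img @ Dw.T)   # the image signature
    recon = Dh.T @ signature @ Dw          # inverse 2D DCT of the sign
    sal = recon ** 2
    pad = blur // 2                        # box blur in place of a Gaussian
    padded = np.pad(sal, pad, mode='edge')
    out = np.zeros_like(sal)
    for i in range(blur):
        for j in range(blur):
            out += padded[i:i + h, j:j + w]
    return out / blur ** 2
```

The sign operation discards all amplitude information, yet (per the paper's sparse-signal-mixing argument) the reconstruction still concentrates energy on the sparse foreground.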

929 citations


Journal ArticleDOI
TL;DR: In this paper, pixel-based and object-based image analysis approaches for classifying broad land cover classes over agricultural landscapes are compared using three supervised machine learning algorithms: decision tree (DT), random forest (RF), and the support vector machine (SVM).

785 citations


Journal ArticleDOI
TL;DR: This article is a comprehensive exploration of all of the major unmixing approaches and their applications and concludes that no single approach is optimal and applicable to all cases.
Abstract: Satellite imagery is formed by finite digital numbers representing a specific location of the ground surface, in which each matrix element is denominated a picture element, or pixel. The pixels represent the sensor measurements of spectral radiance. The radiance recorded in the satellite images is then an integrated sum of the radiances of all targets within the instantaneous field of view (IFOV) of the sensors. Therefore, the radiation detected is caused by a mixture of several different materials within the image pixels. For this reason, spectral unmixing has been used as a technique for analysing the mixture of components in remotely sensed images for almost 30 years. Different spectral unmixing approaches have been described in the literature. In recent years, many authors have proposed more complex models that permit obtaining higher accuracy and use less computing time. Although the most widely used method consists of employing a single set of endmembers (typically three or four) on the whole image and using a constrained least squares method to perform the unmixing linearly, every algorithm has its own merits and no single approach is optimal and applicable to all cases. Additionally, the number of applications using unmixing techniques is increasing. Spectral unmixing techniques are used mainly for providing information to monitor different natural resources (agricultural, forest, geological, etc.) and environmental problems (erosion, deforestation, plagues and disease, forest fires, etc.). This article is a comprehensive exploration of all of the major unmixing approaches and their applications.
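The "single set of endmembers plus constrained least squares" baseline mentioned above has a closed form when only the sum-to-one constraint on the abundances is enforced (non-negativity, the other usual constraint, is omitted here for brevity). A numpy sketch:

```python
import numpy as np

def sum_to_one_unmix(x, E):
    """Least-squares abundance estimate under the sum-to-one constraint.

    x : (bands,) observed pixel spectrum
    E : (bands, endmembers) matrix whose columns are endmember spectra
    """
    G_inv = np.linalg.inv(E.T @ E)
    a_ls = G_inv @ E.T @ x                 # unconstrained least squares
    ones = np.ones(E.shape[1])
    # Lagrange correction projecting onto the sum-to-one hyperplane
    lam = (ones @ a_ls - 1.0) / (ones @ G_inv @ ones)
    return a_ls - lam * (G_inv @ ones)
```

For a noise-free pixel that truly is a convex mixture of the endmembers, the unconstrained solution already sums to one and the correction term vanishes.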

764 citations


Proceedings ArticleDOI
01 Dec 2012
TL;DR: A new approach to defining additive steganographic distortion in the spatial domain, where the change in the output of directional high-pass filters after changing one pixel is weighted and then aggregated using the reciprocal Hölder norm to define the individual pixel costs.
Abstract: This paper presents a new approach to defining additive steganographic distortion in the spatial domain. The change in the output of directional high-pass filters after changing one pixel is weighted and then aggregated using the reciprocal Hölder norm to define the individual pixel costs. In contrast to other adaptive embedding schemes, the aggregation rule is designed to force the embedding changes to highly textured or noisy regions and to avoid clean edges. Consequently, the new embedding scheme appears markedly more resistant to steganalysis using rich models. The actual embedding algorithm is realized using syndrome-trellis codes to minimize the expected distortion for a given payload.
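A heavily simplified sketch of the cost-design idea (the actual scheme uses wavelet directional filter banks and a parameterized reciprocal Hölder aggregation; here three one-tap difference filters stand in): summing the reciprocals of per-direction texture energies means a pixel is cheap to change only when *every* direction around it is textured, so a single smooth direction — a clean edge — drives the cost up:

```python
import numpy as np

def embedding_costs(img, eps=1e-6):
    """Per-pixel embedding cost from directional high-pass residuals.

    Any direction with near-zero local texture dominates the reciprocal
    sum, so clean edges and flat areas become expensive while noisy,
    textured regions stay cheap.
    """
    rh = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))    # horizontal
    rv = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))    # vertical
    rd = np.abs(img - np.roll(np.roll(img, 1, axis=0), 1, axis=1))  # diagonal
    rd[0, :] = 0.0
    rd[:, 0] = 0.0

    def local_energy(r):                                     # 3x3 box sum
        p = np.pad(r, 1, mode='edge')
        h, w = r.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    xi = [local_energy(r) + eps for r in (rh, rv, rd)]
    return sum(1.0 / x for x in xi)   # reciprocal aggregation
```

On a clean vertical edge the vertical residual is essentially zero, so its reciprocal makes the cost enormous there, matching the design goal stated in the abstract.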

728 citations


Journal ArticleDOI
TL;DR: In this article, the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) was used to determine a weak gravitational lensing signal from the full 154 deg^2 of deep multicolour data obtained by the CFHT Legacy Survey.
Abstract: We present the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) that accurately determines a weak gravitational lensing signal from the full 154 deg^2 of deep multicolour data obtained by the CFHT Legacy Survey. Weak gravitational lensing by large-scale structure is widely recognized as one of the most powerful but technically challenging probes of cosmology. We outline the CFHTLenS analysis pipeline, describing how and why every step of the chain from the raw pixel data to the lensing shear and photometric redshift measurement has been revised and improved compared to previous analyses of a subset of the same data. We present a novel method to identify data which contributes a non-negligible contamination to our sample and quantify the required level of calibration for the survey. Through a series of cosmology-insensitive tests we demonstrate the robustness of the resulting cosmic shear signal, presenting a science-ready shear and photometric redshift catalogue for future exploitation.

704 citations


Journal ArticleDOI
TL;DR: A method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects, and is much faster and more robust with respect to background clutter than current state-of-the-art methods is presented.
Abstract: We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.

590 citations


Journal ArticleDOI
TL;DR: This letter adopts a better scheme for measuring the smoothness of blocks, and uses the side-match scheme to further decrease the error rate of extracted-bits in an improved version of Zhang's reversible data hiding method in encrypted images.
Abstract: This letter proposes an improved version of Zhang's reversible data hiding method in encrypted images. The original work partitions an encrypted image into blocks, and each block carries one bit by flipping three LSBs of a set of pre-defined pixels. The data extraction and image recovery can be achieved by examining the block smoothness. Zhang's work did not fully exploit the pixels in calculating the smoothness of each block and did not consider the pixel correlations in the border of neighboring blocks. These two issues could reduce the correctness of data extraction. This letter adopts a better scheme for measuring the smoothness of blocks, and uses the side-match scheme to further decrease the error rate of extracted bits. The experimental results reveal that the proposed method offers better performance over Zhang's work. For example, when the block size is set to 8 × 8, the error rate of the Lena image of the proposed method is 0.34%, which is significantly lower than the 1.21% of Zhang's work.

589 citations


Journal ArticleDOI
TL;DR: A maximum a posteriori probability framework for SR recovery is proposed by combining non-local and local regularization priors, and thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.
Abstract: Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.

527 citations


Journal ArticleDOI
01 May 2012
TL;DR: Experimental results and security analysis show that the scheme not only achieves a good encryption result, but also has a key space large enough to resist common attacks.
Abstract: This paper proposes a novel confusion and diffusion method for image encryption. One innovation is to confuse the pixels by transforming each nucleotide into its base pair a random number of times; the other is to generate the new keys according to the plain image and the common keys, which makes the initial conditions of the chaotic maps change automatically in every encryption process. For an original grayscale image of any size, after its rows and columns are permuted by the arrays generated by a piecewise linear chaotic map (PWLCM), each pixel of the original image is encoded into four nucleotides by deoxyribonucleic acid (DNA) coding, and each nucleotide is then transformed into its base pair a random number of times using the complementary rule, with the number of times generated by Chebyshev maps. Experimental results and security analysis show that the scheme not only achieves a good encryption result, but also has a key space large enough to resist common attacks.
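The DNA coding step can be made concrete: each 8-bit pixel maps to four nucleotides (two bits each), and the complementary rule repeatedly swaps each nucleotide for its Watson–Crick base pair. The 2-bit-to-nucleotide table below is one of the valid codings; the paper does not fix which one is used, so treat it as an assumption:

```python
CODE = {0b00: 'A', 0b01: 'C', 0b10: 'G', 0b11: 'T'}   # one valid DNA coding
DECODE = {v: k for k, v in CODE.items()}
BASE_PAIR = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}   # complementary rule

def pixel_to_dna(value):
    """Encode an 8-bit pixel value as four nucleotides (2 bits each, MSB first)."""
    return ''.join(CODE[(value >> s) & 0b11] for s in (6, 4, 2, 0))

def dna_to_pixel(seq):
    """Decode four nucleotides back into the 8-bit pixel value."""
    value = 0
    for ch in seq:
        value = (value << 2) | DECODE[ch]
    return value

def complement_times(seq, times):
    """Apply the base-pair transform `times` times; pairing is an involution,
    so an even count returns the original sequence."""
    for _ in range(times):
        seq = ''.join(BASE_PAIR[c] for c in seq)
    return seq
```

Because base pairing is an involution, the secrecy of the scheme rests on the chaotically generated *counts*, not on the transform itself.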

Journal ArticleDOI
TL;DR: A hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction.
Abstract: The amount of noise included in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. In hyperspectral images, because the noise intensity in different bands is different, to better suppress the noise in the high-noise-intensity bands and preserve the detailed information in the low-noise-intensity bands, the denoising strength should be adaptively adjusted with the noise intensity in the different bands. Meanwhile, in the same band, there exist different spatial property regions, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in different spatial property regions should also be different. Therefore, in this paper, we propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction. To reduce the computational load in the denoising process, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the speed of hyperspectral image denoising. A number of experiments illustrate that the proposed approach can satisfactorily realize the spectral-spatial adaptive mechanism in the denoising process, and superior denoising results are produced.

Journal ArticleDOI
TL;DR: This paper reports on the development, integration, and testing of the ChemCam Mast-Unit and summarizes some key characteristics of the instrument; a companion paper (Wiens et al., this issue) covers the Body-Unit.
Abstract: ChemCam is a remote sensing instrument suite on board the "Curiosity" rover (NASA) that uses Laser-Induced Breakdown Spectroscopy (LIBS) to provide the elemental composition of soils and rocks at the surface of Mars from a distance of 1.3 to 7 m, and a telescopic imager to return high resolution context and micro-images at distances greater than 1.16 m. We describe five analytical capabilities: rock classification, quantitative composition, depth profiling, context imaging, and passive spectroscopy. They serve as a toolbox to address most of the science questions at Gale crater. ChemCam consists of a Mast-Unit (laser, telescope, camera, and electronics) and a Body-Unit (spectrometers, digital processing unit, and optical demultiplexer), which are connected by an optical fiber and an electrical interface. We then report on the development, integration, and testing of the Mast-Unit, and summarize some key characteristics of ChemCam. This confirmed that nominal or better than nominal performances were achieved for critical parameters, in particular power density (>1 GW/cm2). The analysis spot diameter varies from 350 μm at 2 m to 550 μm at 7 m distance. For remote imaging, the camera field of view is 20 mrad for 1024×1024 pixels. Field tests demonstrated that the resolution (˜90 μrad) made it possible to identify laser shots on a wide variety of images. This is sufficient for visualizing laser shot pits and textures of rocks and soils. An auto-exposure capability optimizes the dynamical range of the images. Dedicated hardware and software focus the telescope, with precision that is appropriate for the LIBS and imaging depths-of-field. The light emitted by the plasma is collected and sent to the Body-Unit via a 6 m optical fiber. The companion to this paper (Wiens et al. this issue) reports on the development of the Body-Unit, on the analysis of the emitted light, and on the good match between instrument performance and science specifications.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: The cost aggregation problem is re-examined and a non-local solution is proposed which outperforms all local cost aggregation methods on the standard (Middlebury) benchmark and has great advantage in extremely low computational complexity.
Abstract: Matching cost aggregation is one of the oldest and still most popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local, as every node receives support from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has a great advantage in its extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. This is very close to the complexity of unnormalized box filtering using an integral image, which requires 6 addition/subtraction operations. The unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. The average runtime on the Middlebury data sets is about 90 milliseconds, and it is only about 1.25× slower than the unnormalized box filter. A non-local disparity refinement method is also proposed based on the non-local cost aggregation method.
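The tree aggregation admits an exact two-pass algorithm (leaf-to-root, then root-to-leaf), which is where the constant per-pixel cost comes from. Below is a sketch for a single disparity level on an arbitrary tree, with the edge weight between a node and its parent assumed to be exp(-|ΔI|/σ); the variable names and the generic-tree interface are illustrative, not from the paper:

```python
import numpy as np

def tree_aggregate(cost, parent, children, order, intensity, sigma=0.1):
    """Exact non-local aggregation of per-node costs over a tree in two passes.

    cost      : per-node matching cost at one disparity level
    parent    : parent[v] (root has parent -1); children[v]: list of children
    order     : nodes listed root first (any topological order)
    edge weight between v and parent[v]: S[v] = exp(-|dI| / sigma)
    """
    n = len(cost)
    S = np.ones(n)
    for v in range(n):
        if parent[v] >= 0:
            S[v] = np.exp(-abs(intensity[v] - intensity[parent[v]]) / sigma)
    up = np.array(cost, dtype=float)           # pass 1: leaves -> root
    for v in reversed(order):
        for c in children[v]:
            up[v] += S[c] * up[c]
    agg = up.copy()                            # pass 2: root -> leaves
    for v in order:
        p = parent[v]
        if p >= 0:
            agg[v] = S[v] * agg[p] + (1.0 - S[v] ** 2) * up[v]
    return agg
```

The downward update adds everything routed through the parent while the (1 − S²) factor removes the subtree's own contribution, which was already counted in the upward pass; the result equals the brute-force sum of every node's cost weighted by the product of edge similarities along the connecting path.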

Journal ArticleDOI
TL;DR: In this article, a change detection algorithm for continuous monitoring of forest disturbance at high temporal frequency is developed; using all available Landsat 7 images acquired within two years, time series models consisting of sines and cosines are estimated for each pixel and each spectral band.

01 Jan 2012
TL;DR: In this article, the authors implemented a robust face recognition system via sparse representation and convex optimization, which treated each test sample as sparse linear combination of training samples, and got the sparse solution via L1-minimization.
Abstract: In this project, we implement a robust face recognition system via sparse representation and convex optimization. We treat each test sample as a sparse linear combination of training samples, and obtain the sparse solution via L1-minimization. We also explore group sparseness (L2-norm) as well as normal L1-norm regularization. We discuss the role of feature extraction and the robustness of the face recognition system to occlusion or pixel corruption. The experiments demonstrate that the choice of features is no longer critical once the sparseness is properly harnessed. We also verify that the proposed algorithm outperforms other methods.

Journal ArticleDOI
Bin Yang, Shutao Li
TL;DR: The simultaneous orthogonal matching pursuit technique is introduced to guarantee that different source images are sparsely decomposed into the same subset of dictionary bases, which is the key to image fusion.

Patent
26 Jan 2012
TL;DR: In this article, an optical imager is adapted to read an encoded symbol character comprising encoded patient information and further adapted to image an attribute of the medication package, which is used to correlate a medication package with a prescribed medication for a patient.
Abstract: A system is provided to correlate a medication package with a prescribed medication for a patient. The medication package accommodates an intended patient medication. The system includes an optical imager adapted to read an encoded symbol character comprising encoded patient information and further adapted to image an attribute of the medication package. The optical imager comprises a two-dimensional image sensor array and an imaging lens for focusing an image on the two-dimensional image sensor array. The two-dimensional image sensor array has a plurality of pixels formed in a plurality of rows and columns of pixels. The optical imager further includes a digital link to transmit a segment of data. The segment of data includes the patient information encoded in the encoded symbol character and the attribute of the medication package. The system further includes a host computer connected to the digital link to receive the segment of data from the optical imager, and a database coupled to the host computer via a digital connection. The database correlates the segment of data to (a) a patient record, and (b) a medication package attribute library.

Journal ArticleDOI
TL;DR: The proposed method deals with the joint use of the spatial and the spectral information provided by the remote-sensing images with very high spatial resolution and is competitive with other contextual methods.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work introduces a series of modifications that alter the working of ViBe, such as the inhibition of propagation around internal borders and the distinction between the updating and segmentation masks, and that post-process the output, for example with operations on the connected components.
Abstract: Motion detection plays an important role in most video based applications. One of the many possible ways to detect motion consists in background subtraction. This paper discusses experiments led for a particular background subtraction technique called ViBe. This technique models the background with a set of samples for each pixel and compares new frames, pixel by pixel, to determine if a pixel belongs to the background or to the foreground. In its original version, the scope of ViBe is limited to background modeling. In this paper, we introduce a series of modifications that alter the working of ViBe, like the inhibition of propagation around internal borders or the distinction between the updating and segmentation masks, or process the output, for example by some operations on the connected components. Experimental results obtained for video sequences provided on the workshop site validate the improvements of the proposed modifications.

Journal ArticleDOI
TL;DR: A weighted mode filtering method based on a joint histogram is proposed for depth video enhancement; extended to temporally neighboring frames, it addresses a flickering problem and improves the accuracy of depth video.
Abstract: This paper presents a novel approach for depth video enhancement. Given a high-resolution color video and its corresponding low-quality depth video, we improve the quality of the depth video by increasing its resolution and suppressing noise. For that, a weighted mode filtering method is proposed based on a joint histogram. When the histogram is generated, the weight based on color similarity between reference and neighboring pixels on the color image is computed and then used for counting each bin on the joint histogram of the depth map. A final solution is determined by seeking a global mode on the histogram. We show that the proposed method provides the optimal solution with respect to L1 norm minimization. For temporally consistent estimate on depth video, we extend this method into temporally neighboring frames. Simple optical flow estimation and patch similarity measure are used for obtaining the high-quality depth video in an efficient manner. Experimental results show that the proposed method has outstanding performance and is very efficient, compared with existing methods. We also show that the temporally consistent enhancement of depth video addresses a flickering problem and improves the accuracy of depth video.
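The joint-histogram idea can be read off directly: each neighbor votes for its depth bin with a weight given by its color similarity to the center pixel, and the filtered depth is the mode of that histogram. The sketch below is a deliberately unoptimized single-frame version and assumes a grayscale color image and depth normalized to [0, 1); it does not reproduce the paper's L1-optimality machinery or the temporal extension:

```python
import numpy as np

def weighted_mode_filter(depth, color, radius=2, bins=16, sigma_c=10.0):
    """Per pixel, pick the depth bin with the largest colour-weighted vote."""
    h, w = depth.shape
    d_idx = np.clip((depth * bins).astype(int), 0, bins - 1)
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            hist = np.zeros(bins)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = np.exp(-abs(float(color[ny, nx])
                                          - float(color[y, x])) / sigma_c)
                        hist[d_idx[ny, nx]] += wgt
            out[y, x] = (np.argmax(hist) + 0.5) / bins   # histogram mode
    return out
```

Unlike a weighted *average*, the mode never invents intermediate depths, so depth discontinuities survive the filtering and isolated outliers are simply outvoted.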

Proceedings ArticleDOI
29 Oct 2012
TL;DR: A new example-based method to colorize a gray image using a fast cascade feature matching scheme to automatically find correspondences between superpixels of the reference and target images, which speeds up the colorization process and empowers the colorizations to exhibit a much higher extent of spatial consistency.
Abstract: We present a new example-based method to colorize a gray image. As input, the user needs only to supply a reference color image which is semantically similar to the target image. We extract features from these images at the resolution of superpixels, and exploit these features to guide the colorization process. Our use of a superpixel representation speeds up the colorization process. More importantly, it also empowers the colorizations to exhibit a much higher extent of spatial consistency in the colorization as compared to that using independent pixels. We adopt a fast cascade feature matching scheme to automatically find correspondences between superpixels of the reference and target images. Each correspondence is assigned a confidence based on the feature matching costs computed at different steps in the cascade, and high confidence correspondences are used to assign an initial set of chromatic values to the target superpixels. To further enforce the spatial coherence of these initial color assignments, we develop an image space voting framework which draws evidence from neighboring superpixels to identify and to correct invalid color assignments. Experimental results and user study on a broad range of images demonstrate that our method with a fixed set of parameters yields better colorization results as compared to existing methods.

Posted Content
TL;DR: In this article, a highly efficient approximate inference algorithm is proposed for fully connected CRF models whose pairwise edge potentials are defined by a linear combination of Gaussian kernels; dense pixel-level connectivity is shown to improve segmentation and labeling accuracy significantly.
Abstract: Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.

Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing embedded digits in any notational system and is secure under the detection of some well-known steganalysis techniques.
Abstract: This paper proposes a new data-hiding method based on pixel pair matching (PPM). The basic idea of PPM is to use the values of pixel pair as a reference coordinate, and search a coordinate in the neighborhood set of this pixel pair according to a given message digit. The pixel pair is then replaced by the searched coordinate to conceal the digit. Exploiting modification direction (EMD) and diamond encoding (DE) are two data-hiding methods proposed recently based on PPM. The maximum capacity of EMD is 1.161 bpp and DE extends the payload of EMD by embedding digits in a larger notational system. The proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing embedded digits in any notational system. Compared with the optimal pixel adjustment process (OPAP) method, the proposed method always has lower distortion for various payloads. Experimental results reveal that the proposed method not only provides better performance than those of OPAP and DE, but also is secure under the detection of some well-known steganalysis techniques.
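For context, the EMD scheme referenced above can be written in a few lines for a pixel pair (n = 2): each pair carries one base-5 digit, which is where the 1.161 bpp capacity figure comes from (log2(5)/2 ≈ 1.161). This is classic EMD itself, not the proposed PPM generalization:

```python
def emd_embed(g1, g2, d):
    """Embed one base-5 digit d into a pixel pair by changing at most one
    pixel value by +/-1 (the EMD scheme with n = 2)."""
    f = (g1 + 2 * g2) % 5
    s = (d - f) % 5
    if s == 0:
        return g1, g2
    if s == 1:
        return g1 + 1, g2      # f increases by 1
    if s == 2:
        return g1, g2 + 1      # f increases by 2
    if s == 3:
        return g1, g2 - 1      # f decreases by 2, i.e. +3 mod 5
    return g1 - 1, g2          # s == 4: f decreases by 1, i.e. +4 mod 5

def emd_extract(g1, g2):
    """Recover the embedded base-5 digit from a pixel pair."""
    return (g1 + 2 * g2) % 5
```

The proposed PPM method generalizes exactly this picture: it replaces the fixed ±1 neighborhood of the pair with more compact neighborhood sets, lowering distortion for the same payload.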

02 Sep 2012
TL;DR: An extensive evaluation of the Kinect depth sensor is performed and issues such as 3D resolution and precision, structural noise, multi-cam setups and transient response of the sensor are investigated.
Abstract: This technical report describes our evaluation of the Kinect depth sensor by Microsoft for Computer Vision applications. The depth sensor is able to return images like an ordinary camera, but instead of color, each pixel value represents the distance to the point. As such, the sensor can be seen as a range- or 3D-camera. We have used the sensor in several different computer vision projects and this document collects our experiences with the sensor. We are only focusing on the depth sensing capabilities of the sensor since this is the real novelty of the product in relation to computer vision. The basic technique of the depth sensor is to emit an infrared light pattern (with an IR laser diode) and calculate depth from the reflection of the light at different positions (using a traditional IR sensitive camera). In this report, we perform an extensive evaluation of the depth sensor and investigate issues such as 3D resolution and precision, structural noise, multi-cam setups and transient response of the sensor. The purpose is to give the reader a well-founded background to choose whether or not the Kinect sensor is applicable to a specific problem.

Proceedings ArticleDOI
14 May 2012
TL;DR: This work utilizes sliding window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame, and performs efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene.
Abstract: We propose a view-based approach for labeling objects in 3D scenes reconstructed from RGB-D (color+depth) videos. We utilize sliding window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame. These probabilities are projected into the reconstructed 3D scene and integrated using a voxel representation. We perform efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene. Our detection-based approach produces accurate scene labeling on the RGB-D Scenes Dataset and improves the robustness of object detection.

Journal ArticleDOI
TL;DR: In this article, the spatial resolution of digital particle image velocimetry (DPIV) is analyzed as a function of the tracer particles and the imaging and recording system.
Abstract: This work analyzes the spatial resolution that can be achieved by digital particle image velocimetry (DPIV) as a function of the tracer particles and the imaging and recording system. As the in-plane resolution for window-correlation evaluation is determined by the interrogation window size, it was assumed in the past that single-pixel ensemble-correlation increases the spatial resolution up to the pixel limit. However, it is shown that the determining factor limiting the resolution of single-pixel ensemble-correlation is the size of the particle images, which depends on the size of the particles, the magnification, the f-number of the imaging system, and the optical aberrations. Furthermore, since the minimum detectable particle image size is determined by the pixel size of the camera sensor in DPIV, this quantity is also considered in this analysis. It is shown that the optimal magnification that results in the best possible spatial resolution can be estimated from the particle size, the lens properties, and the pixel size of the camera. Thus, the information provided in this paper allows for the optimization of the camera and objective lens choices as well as the working distance for a given setup. Furthermore, the possibility of increasing the spatial resolution by means of particle tracking velocimetry (PTV) is discussed in detail. It is shown that this technique makes it possible to increase the spatial resolution to the subpixel limit for averaged flow fields. In addition, PTV evaluation methods do not show the bias errors that are typical of correlation-based approaches. Therefore, this technique is best suited for the estimation of velocity profiles.
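The dependence of resolution on particle image size can be made concrete with the standard first-order PIV estimate combining the geometric image with the diffraction-limited spot. These are textbook relations, not this paper's own derivation:

```python
import math

def particle_image_diameter(d_p, M, f_number, wavelength):
    """First-order estimate of the particle image diameter on the sensor.

    d_p        : physical particle diameter
    M          : magnification
    wavelength : light wavelength, same length unit as d_p
    Combines the geometric image M*d_p with the diffraction-limited spot
    d_diff = 2.44 * f# * (M + 1) * lambda in quadrature.
    """
    d_diff = 2.44 * f_number * (M + 1.0) * wavelength
    return math.sqrt((M * d_p) ** 2 + d_diff ** 2)
```

For a 1 µm tracer imaged at M = 0.5 through an f/8 lens at 532 nm, the diffraction term (≈15.6 µm) dwarfs the geometric image (0.5 µm): the particle image, not the pixel, sets the achievable resolution, which is exactly the paper's point about single-pixel ensemble-correlation.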

Journal ArticleDOI
TL;DR: A new fast dehazing method from a single image based on filtering is proposed, with linear complexity in the number of pixels of the input image; it can be further accelerated using a GPU, making it applicable to real-time requirements.
Abstract: In this paper, we propose a new fast dehazing method from a single image based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smoother, but also respects the depth information of the underlying image. We first obtain an initial atmosphere scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil which removes the abundant texture information and recovers the depth edge information. Finally, we solve the scene radiance using the atmosphere attenuation model. Compared with existing state-of-the-art dehazing methods, our method achieves a better dehazing effect on distant scenes and in places where depth changes abruptly. Our method is fast, with linear complexity in the number of pixels of the input image; furthermore, since our method can be performed in parallel, it can be further accelerated using a GPU, which makes it applicable to real-time requirements.
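The recovery step rests on the atmospheric scattering model I = J(1 − V/A) + V, inverted for the scene radiance J once the veil V is estimated. A simplified sketch (a plain median filter and illustrative `strength` parameter only; the paper refines the veil with guided joint bilateral filtering):

```python
import numpy as np

def median_filter(img, k=5):
    """Plain sliding-window median (stand-in for the paper's fast filtering)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(p[i:i + k, j:j + k])
    return out

def dehaze(I, A=1.0, strength=0.95, k=5):
    """Estimate the atmosphere veil V with a median filter, then invert
    I = J * (1 - V/A) + V for the scene radiance J."""
    V = strength * np.clip(median_filter(I, k), 0.0, None)
    V = np.minimum(V, I)                 # veil cannot exceed the observation
    t = np.clip(1.0 - V / A, 0.05, 1.0)  # transmission, clamped away from 0
    return (I - V) / t
```

With the true veil the inversion is exact; with the filtered estimate the practical effect is that uniformly bright haze is subtracted and the remaining contrast is rescaled by the transmission.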

Journal ArticleDOI
TL;DR: A new algorithm meant for content-based image retrieval (CBIR) and object tracking applications is presented, which differs from the existing LBP in that it extracts information based on the distribution of edges in an image.

Journal ArticleDOI
TL;DR: The performance comparisons of the proposed NR operator with a traditional ratio operator and a log-ratio operator indicate that the NR operator is superior to these traditional methods and produces better detection results.
Abstract: This letter presents a novel neighborhood-based ratio (NR) operator to produce a difference image for change detection in synthetic aperture radar (SAR) images. In order to reduce the negative influence of speckle noise on SAR images, the proposed NR operator produces a difference image by combining gray level information and spatial information of neighbor pixels. The performance comparisons of the proposed operator with a traditional ratio operator and a log-ratio operator indicate that the NR operator is superior to these traditional methods and produces better detection results.