
Showing papers on "Image scaling" published in 2016


Journal ArticleDOI
TL;DR: This paper presents an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the nonlocal redundancy in images, and demonstrates that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
Abstract: Many material and biological samples in scientific imaging are characterized by nonlocal repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a two-dimensional image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the nonlocal redundancy in images. We adapt a framework, termed plug-and-play priors, to solve these imaging problems in a regularized inversion setting. The power of the plug-and-play approach is that it allows a wide array of modern denoising algorithms to be used as a “prior model” for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the plug-and-play approach, and we use these insights to design a new nonlocal means denoising algorithm. Finally, we demonstrate that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
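As a concrete illustration of the plug-and-play structure described above, the following is a minimal numpy sketch of ADMM-style sparse image interpolation in which a denoiser is swapped in as the prior step. A Gaussian blur stands in for the paper's nonlocal means denoiser, and all function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_interpolate(y, mask, sigma=1.0, rho=1.0, iters=50):
    """Plug-and-play ADMM sketch for sparse image interpolation.

    y    : observed image with unsampled pixels set to 0
    mask : boolean array, True where a pixel was actually measured
    A Gaussian blur stands in for the paper's nonlocal-means denoiser.
    """
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        # Data-fit proximal step: for a masked-identity forward model,
        # the least-squares subproblem has this closed form.
        x = (mask * y + rho * (v - u)) / (mask + rho)
        # Prior step: any off-the-shelf denoiser can be plugged in here.
        v = gaussian_filter(x + u, sigma)
        # Dual update.
        u = u + x - v
    return v
```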

267 citations


Journal ArticleDOI
TL;DR: The proposed residual interpolation (RI) performs the interpolation in a residual domain, where the residuals are differences between observed and tentatively estimated pixel values, and is incorporated into the gradient-based threshold free algorithm, which is one of the state-of-the-art Bayer demosaicking algorithms.
Abstract: In this paper, we propose residual interpolation (RI) as an alternative to color difference interpolation, which is a widely accepted technique for color image demosaicking. Our proposed RI performs the interpolation in a residual domain, where the residuals are differences between observed and tentatively estimated pixel values. Our hypothesis for the RI is that if image interpolation is performed in a domain with a smaller Laplacian energy, its accuracy is improved. Based on the hypothesis, we estimate the tentative pixel values to minimize the Laplacian energy of the residuals. We incorporate the RI into the gradient-based threshold free algorithm, which is one of the state-of-the-art Bayer demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm using the RI surpasses the state-of-the-art algorithms for the Kodak, the IMAX, and the beyond Kodak data sets.
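The residual-domain idea can be sketched in a few lines: subtract a tentative estimate at the sampled pixels, interpolate the (smoother) residual field, and add the estimate back. This toy version works on a single plane and uses generic linear interpolation rather than the paper's guided-filter pipeline; the names are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def residual_interpolate(sparse, mask, tentative):
    """Residual interpolation (RI) sketch for one color plane.

    sparse    : plane with known samples, zeros elsewhere
    mask      : True where `sparse` holds a real measurement
    tentative : a tentatively estimated full plane (in the paper it is
                produced by guided filtering from another Bayer plane)
    The residuals at the measured pixels have lower Laplacian energy
    than the raw intensities, so interpolating them is easier.
    """
    rows, cols = np.nonzero(mask)
    residual = sparse[mask] - tentative[mask]
    grid_r, grid_c = np.indices(sparse.shape)
    dense_res = griddata((rows, cols), residual, (grid_r, grid_c),
                         method='linear', fill_value=0.0)
    return tentative + dense_res
```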

120 citations


Journal ArticleDOI
TL;DR: A new interpolation method for DoFP polarimeters is presented by using intensity correlation to detect edges and then implement interpolation along edges, which can achieve better visual effects and a lower RMSE than other methods.
Abstract: Division of focal plane (DoFP) polarimeters operate by integrating micro-polarizer elements with a focal plane. These polarization imaging sensors reduce the spatial resolution of the output, and each pixel has a varying instantaneous field of view (IFoV). These drawbacks can be mitigated by applying proper interpolation methods. In this paper, we present a new interpolation method for DoFP polarimeters by using intensity correlation. We employ the correlation of intensity measurements in different orientations to detect edges and then implement interpolation along the edges. The performance of the proposed method is compared with several previous methods by using root mean square error (RMSE) comparison and visual comparison. Experimental results show that our proposed method achieves better visual effects and a lower RMSE than other methods.

81 citations


Journal ArticleDOI
TL;DR: In this paper, a parametric study of the factors contributing to peak-locking, a known bias error source in particle image velocimetry (PIV), is conducted using synthetic data that are processed with a state-of-the-art PIV algorithm.
Abstract: A parametric study of the factors contributing to peak-locking, a known bias error source in particle image velocimetry (PIV), is conducted using synthetic data that are processed with a state-of-the-art PIV algorithm. The investigated parameters include: particle image diameter, image interpolation techniques, the effect of asymmetric versus symmetric window deformation, the number of passes, and the interrogation window size. Some of these parameters are found to have a profound effect on the magnitude of the peak-locking error. The effects for specific PIV cameras are also studied experimentally using a precision turntable to generate a known rotating velocity field. Image time series recorded in this experiment show a linear range of pixel and sub-pixel shifts ranging from 0 to ±4 pixels. Deviations in the constant vorticity field (ω_z) reveal how peak-locking can be affected systematically both by varying parameters of the detection system, such as the focal distance and f-number, and by varying the settings of the PIV analysis. A new a priori technique for reducing the bias errors associated with peak-locking in PIV is introduced, using an optical diffuser to avoid undersampled particle images during the recording of the raw images. This technique is evaluated against other a priori approaches using experimental data and is shown to perform favorably. Finally, a new a posteriori anti-peak-locking filter (APLF) is developed and investigated, which shows promising results for both synthetic data and real measurements with very small particle image sizes.
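For context, peak locking originates in the sub-pixel fit applied to the correlation peak. Below is the standard three-point Gaussian estimator used in many PIV codes (not the paper's APLF); when particle images shrink well below about two pixels, the fit degenerates and displacement estimates cluster at integer values.

```python
import numpy as np

def gaussian_subpixel_peak(r_m1, r_0, r_p1):
    """Three-point Gaussian fit for the correlation-peak offset.

    r_m1, r_0, r_p1 are correlation values one pixel left of, at, and
    one pixel right of the integer peak. For particle images much
    smaller than ~2 px the fit degenerates and estimates cluster at
    integer positions, i.e. peak locking.
    """
    num = np.log(r_m1) - np.log(r_p1)
    den = 2.0 * (np.log(r_m1) + np.log(r_p1) - 2.0 * np.log(r_0))
    return num / den

# Example: a symmetric peak gives zero sub-pixel shift.
print(gaussian_subpixel_peak(0.4, 1.0, 0.4))   # -> 0.0
```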

61 citations


Journal ArticleDOI
TL;DR: A novel single image SR method that exploits both the local geometric duality (GD) and the non-local similarity of images to constrain the super-resolved results is proposed.
Abstract: Super-resolution (SR) from a single image plays an important role in many computer vision applications. It aims to estimate a high-resolution (HR) image from an input low-resolution (LR) image. To ensure a reliable and robust estimation of the HR image, we propose a novel single image SR method that exploits both the local geometric duality (GD) and the non-local similarity of images. The main principle is to formulate these two typically existing features of images as effective priors to constrain the super-resolved results. In consideration of this principle, the robust soft-decision interpolation method is generalized as an outstanding adaptive GD (AGD)-based local prior. To adaptively design weights for the AGD prior, a local non-smoothness detection method and a directional standard-deviation-based weights selection method are proposed. After that, the AGD prior is combined with a variational-framework-based non-local prior. Furthermore, the proposed algorithm is sped up by a fast GD matrices construction method, which primarily relies on selective pixel processing. The extensive experimental results verify the effectiveness of the proposed method compared with several state-of-the-art SR algorithms.

49 citations


Journal ArticleDOI
01 Mar 2016-Optik
TL;DR: The original pixels are not affected during data embedding in the proposed scheme, which assures reversibility, and the scheme provides an average embedding payload of 2.97 bits per pixel with good visual quality as measured by the peak signal-to-noise ratio (PSNR).

46 citations


Journal ArticleDOI
TL;DR: A four-direction residual interpolation (FDRI) method for color filter array interpolation that provides a superior performance in terms of objective and subjective quality compared with the conventional state-of-the-art demosaicking methods.
Abstract: In this paper, we propose a four-direction residual interpolation (FDRI) method for color filter array interpolation. The proposed algorithm exploits a guided filtering process to generate the tentative image. The residual image is generated by exploiting the tentative and original images. We use an FDRI algorithm to more accurately estimate the missing pixel values; the estimated image is adaptively combined with a joint inverse gradient weight. Based on the experimental results, the proposed method provides a superior performance in terms of objective and subjective quality compared with the conventional state-of-the-art demosaicking methods.

37 citations


Journal ArticleDOI
TL;DR: This work proposes a robust interpolation scheme by using the nonlocal geometric similarities to construct the HR image by solving a regularized least squares problem that is built upon a number of dual-reference patches drawn from the given LR image and regularized by the directional gradients of these patches.
Abstract: Image interpolation offers an efficient way to compose a high-resolution (HR) image from the observed low-resolution (LR) image. Advanced interpolation techniques design the interpolation weighting coefficients by solving a minimum mean-square-error (MMSE) problem in which the local geometric similarity is often considered. However, using local geometric similarities cannot usually make the MMSE-based interpolation as reliable as expected. To solve this problem, we propose a robust interpolation scheme by using the nonlocal geometric similarities to construct the HR image. In our proposed method, the MMSE-based interpolation weighting coefficients are generated by solving a regularized least squares problem that is built upon a number of dual-reference patches drawn from the given LR image and regularized by the directional gradients of these patches. Experimental results demonstrate that our proposed method offers a remarkable quality improvement as compared to some state-of-the-art methods, both objectively and subjectively.
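The core computation described above, solving a regularized least-squares problem for interpolation weights, has a compact closed form. Below is a sketch under the assumption of a ridge-style formulation with a directional-gradient operator D as the regularizer; the matrices here are placeholders for the paper's dual-reference patch construction.

```python
import numpy as np

def interpolation_weights(X, y, D, lam=0.1):
    """Solve min_w ||y - X w||^2 + lam ||D w||^2 in closed form.

    X : (n_patches, n_weights) matrix of dual-reference patch samples
    y : (n_patches,) target pixel values drawn from the LR image
    D : (k, n_weights) directional-gradient regularization operator
    Returns the MMSE-style interpolation weights w.
    """
    A = X.T @ X + lam * (D.T @ D)
    b = X.T @ y
    return np.linalg.solve(A, b)
```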

31 citations


Journal ArticleDOI
TL;DR: Improvements in image gap restoration through the incorporation of edge-based directional interpolation within multi-scale pyramid transforms are presented, demonstrating that the proposed method improves peak-signal-to-noise-ratio by 1-5 dB compared with a range of best published works.
Abstract: This paper presents improvements in image gap restoration through the incorporation of edge-based directional interpolation within multi-scale pyramid transforms. Two types of image edges are reconstructed: 1) the local edges or textures, inferred from the gradients of the neighboring pixels, and 2) the global edges between image objects or segments, inferred using a Canny detector. Through a process of pyramid transformation and downsampling, the image is progressively transformed into a series of reduced-size layers until, at the pyramid apex, the gap size is one sample. At each layer, an edge skeleton image is extracted for edge-guided interpolation. The process is then reversed; from the apex, at each layer, the missing samples are estimated (an iterative method is used in the last stage of upsampling), up-sampled, and combined with the available samples of the next layer. The discrete cosine transform and a family of discrete wavelet transforms are utilized as alternatives for pyramid construction. Evaluations over a range of images, with regular and random loss patterns at loss rates of up to 40%, demonstrate that the proposed method improves peak signal-to-noise ratio by 1–5 dB compared with a range of the best published works.

25 citations


Journal ArticleDOI
TL;DR: This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation, revealing that using a simple raster scanning pattern in combination with conventional image interpolation performs very well.
Abstract: This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length, and thereby the scanning time, as well as the amount of interaction between the AFM probe and the specimen. It can easily be applied to conventional AFM hardware. Due to undersampling, it is necessary to subsequently process the acquired image in order to reconstruct an approximation of the image. Based on real AFM cell images, our simulations reveal that using a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction of the scanning time by a factor of 10 while retaining an average reconstruction quality of around 36 dB PSNR on the tested cell images.
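A rough simulation of the paper's baseline experiment, keeping every k-th raster line and reconstructing the rest by conventional interpolation, can be put together as follows. The random image is a stand-in for an AFM cell image, and the factor-10 subsampling mirrors the reported scan-time reduction; exact PSNR values obviously depend on the data.

```python
import numpy as np
from scipy.interpolate import interp1d

def subsample_rows(img, keep_every=10):
    """Simulate raster-scan undersampling by keeping every k-th line."""
    rows = np.arange(0, img.shape[0], keep_every)
    return img[rows], rows

def reconstruct(rows_data, rows, height):
    """Linearly interpolate the missing scan lines, column by column."""
    f = interp1d(rows, rows_data, axis=0, kind='linear',
                 bounds_error=False, fill_value='extrapolate')
    return f(np.arange(height))

def psnr(ref, est, peak=1.0):
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.random.rand(256, 256)        # stand-in for an AFM cell image
data, rows = subsample_rows(img, 10)  # ~10x shorter scan path
print(psnr(img, reconstruct(data, rows, 256)))
```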

22 citations


Journal ArticleDOI
TL;DR: The proposed image enhancement algorithm eliminates edge halos and jagged artifacts while effectively preserving fine image structures, and achieves better results than the state-of-the-art methods both subjectively and objectively.

Journal ArticleDOI
TL;DR: The theory of stationary velocity fields is used to facilitate interactive non-linear image interpolation and plausible extrapolation for high-quality rendering of large deformations, and an efficient image warping method is devised on the GPU.
Abstract: Large image deformations pose a challenging problem for the visualization and statistical analysis of 3D image ensembles, which have a multitude of applications in biology and medicine. Simple linear interpolation in the tangent space of the ensemble introduces artifactual anatomical structures that hamper the application of targeted visual shape analysis techniques. In this work we make use of the theory of stationary velocity fields to facilitate interactive non-linear image interpolation and plausible extrapolation for high-quality rendering of large deformations, and devise an efficient image warping method on the GPU. This not only improves the quality of existing visualization techniques, but also opens up a field of novel interactive methods for shape ensemble analysis. Taking advantage of the efficient non-linear 3D image warping, we showcase four visualizations: 1) browsing on-the-fly computed group mean shapes to learn about shape differences between specific classes, 2) interactive reformation to investigate complex morphologies in a single view, 3) likelihood volumes to gain a concise overview of variability, and 4) streamline visualization to show variation in detail, specifically uncovering its component tangential to a reference surface. Evaluation on a real-world dataset shows that the presented method outperforms the state-of-the-art in terms of visual quality while retaining interactive frame rates. A case study with a domain expert was performed in which the novel analysis and visualization methods were applied to standard model structures, namely the skull and mandible of different rodents, to investigate and compare the influence of phylogeny, diet, and geography on shape. The visualizations make it possible, for instance, to distinguish (population-)normal and pathological morphology, assist in uncovering correlations with extrinsic factors, and potentially support assessment of model quality.

Journal ArticleDOI
TL;DR: 2DCrypt is a modified Paillier cryptosystem-based image scaling and cropping scheme for multi-user settings that allows cloud datacenters to scale and crop an image in the encrypted domain and is such that multiple users can view or process the images without sharing any encryption keys.
Abstract: The evolution of cloud computing and a drastic increase in image size are making the outsourcing of image storage and processing an attractive business model. Although this outsourcing has many advantages, ensuring data confidentiality in the cloud is one of the main concerns. There are state-of-the-art encryption schemes for ensuring confidentiality in the cloud. However, such schemes do not allow cloud datacenters to perform operations over encrypted images. In this paper, we address this concern by proposing 2DCrypt, a modified Paillier cryptosystem-based image scaling and cropping scheme for multi-user settings that allows cloud datacenters to scale and crop an image in the encrypted domain. To avoid the high storage overhead that would result from naive per-pixel encryption, we propose a space-efficient tiling scheme that allows tile-level image scaling and cropping operations: instead of encrypting each pixel individually, we encrypt a tile of pixels. 2DCrypt is such that multiple users can view or process the images without sharing any encryption keys, a requirement desirable for practical deployments in real organizations. Our analysis and results show that 2DCrypt is INDistinguishable under Chosen Plaintext Attack secure and incurs an acceptable overhead. When scaling a 512×512 image by a factor of two, 2DCrypt requires an image user to download approximately 5.3 times more data than with unencrypted scaling and to spend approximately 2.3 s more to obtain the scaled image in plaintext.
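The tile-level operations rely on the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts adds their plaintexts, so a datacenter can sum the pixels of a tile, e.g. for scaling by averaging, without holding any key. The sketch below shows only this primitive with toy-sized primes; it is not 2DCrypt's full multi-user scheme.

```python
import math, random

# Toy Paillier keypair (demo-sized primes; real deployments use >=2048-bit n).
p, q = 1117, 1123
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # L(g^lam)^-1 mod n

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: the cloud multiplies ciphertexts to sum the
# pixels of a tile, so a 2x2 tile can be downscaled (averaged) without keys.
tile = [50, 60, 70, 80]
c_sum = 1
for pix in tile:
    c_sum = (c_sum * encrypt(pix)) % n2
print(decrypt(c_sum) / 4)   # 65.0, the scaled (averaged) pixel
```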

Journal ArticleDOI
TL;DR: A motion-compensated FRUC algorithm is proposed that acquires more accurate MVs by referencing neighboring frames with an enlarged hierarchical motion field construction block size, an MV mapping stage, and an adaptive MV smoothing method.
Abstract: Motion-compensated frame rate up-conversion (FRUC) is a method for providing better video image quality than non-motion-based methods. However, incorrect motion vector (MV) prediction can continually introduce debris and artifacts. In this paper, a motion-compensated FRUC algorithm is proposed that is able to acquire more accurate MVs by referencing neighboring frames through a hierarchical motion field construction and an MV mapping stage. The block size of the hierarchical MV field construction is enlarged in order to acquire more motion information, and the size used in the MV mapping and image interpolation stages is reduced to obtain a more accurate interpolated result. To smooth the MVs, a median filter based on sum-of-absolute-difference values is constructed as part of an adaptive MV smoothing method. To suppress the debris, a noise removal filter is applied after the interpolation operation. According to the experimental results, the proposed approach improves visual quality by 0.82 dB on average. Subjective analysis shows that annoying artifacts are significantly reduced.

Patent
05 Apr 2016
TL;DR: In this paper, an online content system, such as a digital magazine, includes an image scaling engine consisting of a convolutional neural network (CNN) for increasing the resolution of images.
Abstract: An online content system, such as a digital magazine, includes an image scaling engine for increasing the resolution of images. The image scaling engine comprises a convolutional neural network. An input image is preprocessed for use as inputs to a convolutional neural network (CNN). The preprocessed input image pixel values are used as inputs to the CNN. The CNN comprises convolutional layers and dense layers for determining image features and increasing image resolution. The CNN is trained using backpropagation to adjust model weights and biases. Each convolutional layer of a CNN detects features in an image by comparing image subregions to a set of known kernels and determining similarities between subregions and kernels using a convolution operation. The dense layers of the CNN have full connections to all of the outputs of a previous layer to determine the specific target output result such as output image pixel values.
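Below is a minimal numpy rendering of the convolutional step the patent describes, comparing each image subregion to a kernel via a convolution and producing one feature map per kernel. The kernels and bias here are random stand-ins for trained weights, and the dense layers and backpropagation training loop are omitted.

```python
import numpy as np

def conv2d(img, kernels, bias):
    """Valid 2-D convolution of one image against a bank of kernels.

    img     : (H, W) input (e.g., a bicubic-upscaled LR image)
    kernels : (K, kh, kw) learned filters; random stand-ins here
    bias    : (K,) per-filter bias
    Each output map measures, per subregion, similarity to one kernel.
    """
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i+kh, j:j+kw] * kernels[k]) + bias[k]
    return np.maximum(out, 0.0)   # ReLU feature maps

rng = np.random.default_rng(0)
feat = conv2d(rng.random((32, 32)), rng.standard_normal((8, 3, 3)), np.zeros(8))
print(feat.shape)   # (8, 30, 30)
```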

Journal ArticleDOI
TL;DR: A new feature matching method is provided that overcomes these problems and yields thousands of matches through a novel feature point detection strategy, a one-to-many matching scheme, and the replacement of the ratio test with geometric constraints that produce geometrically correct matches in repetitive image regions.
Abstract: This paper investigates the performance of SIFT-based image matching under large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, perform poorly or even fail in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches when the ratio test is applied afterwards. Therefore, a new feature matching method is provided that overcomes these problems and offers thousands of matches by a novel feature point detection strategy, applying a one-to-many matching scheme and substituting the ratio test with geometric constraints to achieve geometrically correct matches in repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels are achieved.
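The general recipe, one-to-many matching with a geometric check in place of the ratio test, can be approximated with standard OpenCV calls as below. This is a generic sketch, not the authors' code: a RANSAC homography stands in for their full set of geometric constraints, and k and the reprojection threshold are illustrative.

```python
import cv2
import numpy as np

def match_geometric(img1, img2, k=5, ransac_px=3.0):
    """One-to-many SIFT matching pruned by a RANSAC homography instead
    of Lowe's ratio test (a simplification of the paper's constraints)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Keep k candidates per feature: repetitive structures are allowed
    # to produce several tentative matches instead of being rejected.
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=k)
    cand = [m for group in raw for m in group]
    src = np.float32([kp1[m.queryIdx].pt for m in cand]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in cand]).reshape(-1, 1, 2)
    # Geometric constraint: RANSAC keeps the globally consistent subset.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_px)
    if H is None:
        return None, []
    return H, [m for m, ok in zip(cand, inliers.ravel()) if ok]
```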

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed real-time edge-guided interpolation method outperforms the conventional ones both in terms of objective and subjective image qualities with low computational complexity.
Abstract: In this paper, a novel real-time edge-guided interpolation method is presented to produce a high-resolution image without any disturbing artifacts such as blurring, jagging, and overshooting. The proposed method computes the first- and second-order derivatives of an input image to measure the geometry of edges in the image. Based on these measures, the value of a pixel to be interpolated is estimated along four directions using Taylor series approximation. The four directional estimates are adaptively fused based on the orientation of a local edge to obtain the edge-guided interpolation output. Experimental results demonstrate that the proposed interpolation method outperforms conventional ones in both objective and subjective image quality with low computational complexity.
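The fusion step can be illustrated with a simplified toy: four directional estimates of a missing pixel, each weighted by the inverse of its directional difference, so the smooth along-edge direction dominates. The paper's actual estimates use Taylor-series terms from first- and second-order derivatives; this sketch keeps only the weighting idea.

```python
import numpy as np

def fuse_directional(p, eps=1e-6):
    """Estimate the center of a 3x3 patch from four directions.

    p : 3x3 patch with the center unknown (its value is ignored).
    Each direction contributes the average of its two neighbors,
    weighted by the inverse of their difference, so the smooth
    (along-edge) direction dominates.
    """
    pairs = [(p[1, 0], p[1, 2]),   # horizontal
             (p[0, 1], p[2, 1]),   # vertical
             (p[0, 0], p[2, 2]),   # diagonal \
             (p[0, 2], p[2, 0])]   # diagonal /
    est = np.array([(a + b) / 2.0 for a, b in pairs])
    w = 1.0 / (np.array([abs(a - b) for a, b in pairs]) + eps)
    return np.sum(w * est) / np.sum(w)

patch = np.array([[10, 90, 90],
                  [10,  0, 90],
                  [10, 90, 90]], float)
print(fuse_directional(patch))   # ~90: interpolates along the vertical edge
```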

Patent
23 Nov 2016
TL;DR: An image processing device and method are proposed in which a gradient information computation element, a direction judgment element, and a direction interpolation element produce a direction-interpolated image, and an image mixing element blends it with a conventionally interpolated image according to direction credibility values.
Abstract: The invention provides an image processing device and an image processing method. The image processing device comprises a gradient information computation element, a direction judgment element, a direction interpolation element, and an image mixing element. The gradient information computation element processes an input image to generate gradient values and gradient angles respectively corresponding to a plurality of input pixels in the input image; the direction judgment element generates a plurality of interpolation angles and direction credibility values according to the gradient values and the gradient angles corresponding to the input pixels; the direction interpolation element implements direction interpolation processing on the input image according to the interpolation angles, generating a first image having a resolution different from that of the input image; and the image mixing element receives the first image and a second image generated by implementing image interpolation processing on the input image, and mixes the first image with the second image according to the direction credibility values to form an output image.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed two-stage data hiding technique with high embedding capacity gives better results in terms of embedding capacity and comparable results in terms of visual quality.

Patent
13 Apr 2016
TL;DR: In this article, a hybrid feature model fusing a salient map, an edge line map and a gradient map is adopted to obtain an energy function, and according to the energy function a line clipping operation is carried out to complete image scaling.
Abstract: The invention relates to a content-aware image scaling method, concerning graphic image conversion in the image plane. An energy function is obtained by adopting a hybrid feature model that fuses a saliency map, an edge line map, and a gradient map, and a line clipping operation is carried out according to the energy function to complete image scaling. The method comprises the steps of: inputting a color image and carrying out preprocessing; extracting, in parallel, the saliency map and salient target image of the original color image, the edge map of the gray-scale image fused with line information, and the gradient map of the gray-scale image; fusing the three feature maps by utilizing the HFPM (hybrid feature model) algorithm to obtain the energy function; and clipping the original image with a line clipping algorithm. The method overcomes the defect of existing line clipping methods, which define the energy function from the gradient map of an image alone and therefore still cause distortion and loss of part of the image information during image scaling.
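For reference, the "line clipping" operation the patent builds on is the classic dynamic-programming seam removal, driven here by a placeholder fusion of the three feature maps (a plain weighted sum standing in for the HFPM algorithm; the weights are invented for illustration).

```python
import numpy as np

def fused_energy(gradient, saliency, edges, w=(0.4, 0.4, 0.2)):
    """Stand-in for the HFPM fusion: a weighted sum of the three maps."""
    return w[0] * gradient + w[1] * saliency + w[2] * edges

def remove_vertical_seam(img, energy):
    """Dynamic-programming seam removal (the 'line clipping' step)."""
    h, w = energy.shape
    cost = energy.astype(float)
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);   left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack the cheapest seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    keep = np.ones((h, w), bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1)
```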

Proceedings ArticleDOI
01 May 2016
TL;DR: In this paper, a low complexity image scaling algorithm is proposed which shows significant reduction in hardware cost and energy over existing architectures without significant degradation in quality.
Abstract: Image scaling is one of the most widely used techniques in portable devices to fit images to their respective displays. Traditional image scaling architectures consume more power and hardware, making them inefficient for use in portable devices. In this paper, a low-complexity image scaling algorithm is proposed in which each target pixel is computed either by bilinear interpolation or by replication. An edge-catching module in the architecture determines the method of computation, which makes the design energy efficient. Further, algebraic manipulation is applied, and the resulting pipelined architecture shows a significant reduction in hardware cost. In order to evaluate the efficacy, the proposed and existing algorithms are implemented in MATLAB and simulated using standard benchmark images. The proposed design is synthesized in Synopsys Design Compiler using a 90-nm CMOS process and shows a 43.3% reduction in gate count and a 25.9% reduction in energy over existing architectures without significant degradation in quality.
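A behavioral software model of the datapath described above might look as follows: a cheap edge-catching test on the 2x2 neighborhood selects replication on flat regions and bilinear interpolation near edges. The threshold and the min-max edge test are assumptions made for illustration, not the paper's exact hardware logic.

```python
import numpy as np

def scale(img, out_h, out_w, edge_thresh=8.0):
    """Low-complexity upscaling sketch: replicate the nearest pixel on
    flat regions, use bilinear interpolation near edges. The threshold
    test models the paper's edge-catching module (value illustrative)."""
    h, w = img.shape
    out = np.zeros((out_h, out_w))
    for oy in range(out_h):
        for ox in range(out_w):
            fy = oy * (h - 1) / (out_h - 1)
            fx = ox * (w - 1) / (out_w - 1)
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            win = img[y0:y1 + 1, x0:x1 + 1].astype(float)
            if win.max() - win.min() < edge_thresh:
                out[oy, ox] = img[round(fy), round(fx)]   # cheap replication
            else:                                         # bilinear on edges
                dy, dx = fy - y0, fx - x0
                top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
                bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
                out[oy, ox] = (1 - dy) * top + dy * bot
    return out
```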

Journal ArticleDOI
TL;DR: This paper proposes a learning-to-rank approach for automatically estimating the image scaling factor from normalized energy density features and moment features, which can effectively eliminate the long-known ambiguity between upscaling and downscaling in the analysis of resampling.

Journal ArticleDOI
TL;DR: This paper proposes an adaptive guided image filter to correct errors in the intermediate warped interpolation image, which would otherwise result in an inaccurate discrete approximation of the temporal derivative and thus degrade the accuracy of the estimated flow field.

Journal ArticleDOI
TL;DR: Simulation results suggest that the proposed spatial interpolation method achieves very competitive performance in both subjective visual quality and objective image quality (in terms of PSNR and the structural similarity index measure (SSIM)), compared to some recently proposed structured sparse representation-based methods.

Journal ArticleDOI
TL;DR: By replacing the IED with the proposed CED in the CGI framework, the total computation time is significantly reduced: the fast CGI runs in only 1/4 of the original CGI's time on average.
Abstract: A recently introduced image interpolation method, called the contrast-guided interpolation (CGI), has shown superior performance in producing high-quality interpolated images. However, its iterative edge diffusion (IED) process for diffusing continuous-valued directional variation (DV) fields inevitably incurs high computational complexity due to its iterative optimization process. The key objective of this letter lies in how to greatly reduce the computation of this diffusion process while maintaining CGI's superior performance on its interpolated image. The novelty of this letter stems from the following critical observation. Since each diffused DV field needs to be thresholded to generate a binary contrast-guided decision map (CDM) in the subsequent step, such a binarization operation will destroy the fidelity that was previously preserved through the data term of the IED's energy functional. Therefore, the data term is lifted in our approach to yield a new energy functional. It turns out that the diffusion equation derived from this simplified functional is, in fact, the well-known heat equation, from which a highly attractive property can be exploited for conducting diffusion: given a desired amount of diffusion, it can be realized by simply convolving the DV field with a Gaussian kernel once, rather than gradually updating the DV field through iterations. Note that the variance of the Gaussian kernel corresponds to the amount of diffusion desired. As a result, the total computation time is significantly reduced. Extensive simulation results have shown that the proposed CED can generate nearly identical CDMs to those produced by the IED, while requiring only about 1/10 of its computation time. By replacing the IED with the proposed CED in the CGI framework, the total run time of our fast CGI is only 1/4 of the original CGI's on average.
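The key property exploited here, that running the heat equation for time t equals a single Gaussian convolution with variance sigma^2 = 2t, is easy to verify numerically. The sketch below compares explicit heat-equation iteration against one Gaussian filter call on a random stand-in for a DV field; the parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def diffuse_iterative(field, steps, dt=0.2):
    """Explicit heat-equation iteration (the IED-style diffusion route)."""
    u = field.astype(float).copy()
    for _ in range(steps):
        u += dt * laplace(u)
    return u

def diffuse_convolution(field, steps, dt=0.2):
    """Same amount of diffusion in one shot: for u_t = laplacian(u),
    the solution at time t = steps*dt is a Gaussian blur with
    variance sigma^2 = 2*t (the CED observation)."""
    return gaussian_filter(field.astype(float), np.sqrt(2.0 * steps * dt))

rng = np.random.default_rng(1)
dv = rng.random((128, 128))                 # stand-in for a DV field
a, b = diffuse_iterative(dv, 100), diffuse_convolution(dv, 100)
print(np.max(np.abs(a - b)))                # small: the two routes agree
                                            # up to discretization error
```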

Journal ArticleDOI
TL;DR: A CUDA-C-based code, LBM-C, was developed specifically for this work to leverage GPU hardware for the computations, and it is shown that binary image segmentation may underestimate the true permeability of the sample.
Abstract: Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.

Journal ArticleDOI
TL;DR: The strength of the proposed global weighting in the AACA algorithm lies in employing solely the pheromone-matrix information present on any group of four adjacent pixels to decide whether a given case deserves the maximum global weight value.
Abstract: This paper presents an advance in image interpolation based on the ant colony algorithm (AACA) for high-resolution image scaling. The difference between the proposed algorithm and the previously proposed optimization of bilinear interpolation based on the ant colony algorithm (OBACA) is that AACA uses a global weighting scheme, whereas OBACA uses a local one. The strength of the proposed global weighting in AACA lies in employing solely the pheromone-matrix information present on any group of four adjacent pixels to decide whether a given case deserves the maximum global weight value. Experimental results are provided to show the higher performance of the proposed AACA algorithm with reference to the algorithms mentioned in this paper.

Patent
04 May 2016
TL;DR: In this paper, a core CT image processing-based remaining oil micro-occurrence representing method is proposed to obtain the three-dimensional configurations of all the pores and throats as well as the topological connection relations among these three-dimensional configurations.
Abstract: The invention relates to the fields of petroleum reservoirs and image processing, in particular to a core CT image processing-based method for representing the micro-occurrence of remaining oil. The method comprises the following steps: pre-processing of the CT image, in which CT image interpolation is carried out on the basis of three Lagrange interpolations; image segmentation and modification; and CT image-based pore and throat network modeling with quantitative characterization of the remaining oil micro-occurrence. Through the steps of CT image pre-processing, image interpolation, image-based medium segmentation, three-dimensional reconstruction of the core model, pore/throat segmentation, and pore/throat topological structure reconstruction, the method obtains the three-dimensional configurations of all the pores and throats as well as the topological connection relations among them, and finally yields the quantitative characterization of the remaining oil micro-occurrence.

Proceedings ArticleDOI
24 Jul 2016
TL;DR: This work describes a procedure for interpolating panoramas captured at the four corners of a rectangular area without geometry, and presents experimental results including a real-time walkthrough.
Abstract: We propose a method to generate new views of a scene by capturing a few panorama images in real space and interpolating the captured images. We describe a procedure for interpolating panoramas captured at the four corners of a rectangular area without geometry, and present experimental results including a real-time walkthrough. Our image-based method enables walking through a space much more easily than using 3D modeling and rendering.

Journal ArticleDOI
TL;DR: A novel method for real-time horizon-based attitude estimation using panoramic images is presented with results from real flight tests showing accurate attitude estimation for a rotary-wing aircraft in cluttered environments.
Abstract: In this paper, a novel method for real-time horizon-based attitude estimation using panoramic images is presented with results from real flight tests showing accurate attitude estimation for a rotary-wing aircraft in cluttered environments. The vision system is biologically inspired from the function of the Ocelli organ present in some insects and the fact that ultraviolet images provide better sky/ground contrast enhancement. A new method for panoramic sky/ground thresholding is proposed, consisting of a sky/ground masking and a sun-tracking system that works effectively even when the horizon line is difficult to detect by normal thresholding methods due to flares and other effects from the presence of the sun in the image. The use of optic flow to determine body rates is investigated using the panoramic image and the image interpolation algorithm. A Kalman filter is used to fuse the unfiltered measurements from inertial sensors and the vision system.