
Showing papers on "Interpolation published in 2018"


Posted Content
Tero Karras1, Samuli Laine1, Timo Aila1
TL;DR: This article proposed an alternative generator architecture for GANs, borrowing from style transfer literature, which leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images.
Abstract: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
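To make the notion of latent-space interpolation concrete, here is a small, hypothetical sketch (not the authors' implementation): it blends two latent codes linearly and spherically, and a generator G would then map each blended code to an image.

import numpy as np

def lerp(z0, z1, t):
    # linear interpolation between two latent codes
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    # spherical interpolation, often preferred for Gaussian latent spaces
    a, b = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < eps:
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]  # codes for a hypothetical generator G
print(len(path), path[0].shape)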

1,612 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, an end-to-end convolutional neural network is proposed for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled.
Abstract: Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
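The flow combination and visibility-weighted fusion described above can be sketched as follows. The combination coefficients are one plausible way to linearly combine the bi-directional flows at an intermediate time t (an assumption based on the description, not the paper's exact formulation), and the nearest-neighbor warp is only a toy stand-in for the bilinear warping used in practice.

import numpy as np

def warp(img, flow):
    # backward-warp img (H, W) with flow (H, W, 2); nearest-neighbor toy sampler
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[sy, sx]

def interpolate_frame(I0, I1, F01, F10, V0, V1, t):
    # approximate the intermediate bi-directional flows from F01, F10 ...
    Ft0 = -(1 - t) * t * F01 + t * t * F10
    Ft1 = (1 - t) ** 2 * F01 - t * (1 - t) * F10
    # ... then warp both inputs and fuse them with visibility maps V0, V1 in [0, 1]
    W0, W1 = warp(I0, Ft0), warp(I1, Ft1)
    num = (1 - t) * V0 * W0 + t * V1 * W1
    den = (1 - t) * V0 + t * V1 + 1e-8
    return num / den

H, W = 32, 32
I0, I1 = np.random.rand(H, W), np.random.rand(H, W)
F01, F10 = np.zeros((H, W, 2)), np.zeros((H, W, 2))
V0 = V1 = np.ones((H, W))
print(interpolate_frame(I0, I1, F01, F10, V0, V1, t=0.5).shape)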

649 citations


Journal ArticleDOI
TL;DR: BoltzTraP2 is a software package for calculating a smoothed Fourier expression of periodic functions and the Onsager transport coefficients for extended systems using the linearized Boltzmann transport equation within the relaxation time approximation.

624 citations


Journal ArticleDOI
TL;DR: A novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation is proposed, in which an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated under the Hermitian positive semi-definite Toeplitz condition.
Abstract: Coprime arrays can achieve an increased number of degrees of freedom by deriving the equivalent signals of a virtual array. However, most existing methods fail to utilize all information received by the coprime array due to the non-uniformity of the derived virtual array, resulting in an inevitable estimation performance loss. To address this issue, we propose a novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation in this paper. The idea of array interpolation is employed to construct a virtual uniform linear array such that all virtual sensors in the non-uniform virtual array can be utilized, based on which the atomic norm of the second-order virtual array signals is defined. By investigating the properties of virtual domain atomic norm, it is proved that the covariance matrix of the interpolated virtual array is related to the virtual measurements under the Hermitian positive semi-definite Toeplitz condition. Accordingly, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated to reconstruct the interpolated virtual array covariance matrix in a gridless manner, where the reconstructed covariance matrix enables off-grid DOA estimation. Simulation results demonstrate the performance advantages of the proposed DOA estimation algorithm for coprime arrays.

394 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: A context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame and outperforms representative state-of-the-art approaches.
Abstract: Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames still remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches.

363 citations


Journal ArticleDOI
TL;DR: Progress is reported in graphics processing unit (GPU)-accelerated molecular dynamics and free energy methods in Amber, including free energy perturbation and thermodynamic integration methods with support for nonlinear soft-core potential and parameter interpolation transformation pathways.
Abstract: We report progress in graphics processing unit (GPU)-accelerated molecular dynamics and free energy methods in Amber18. Of particular interest is the development of alchemical free energy algorithms, including free energy perturbation and thermodynamic integration methods with support for nonlinear soft-core potential and parameter interpolation transformation pathways. These methods can be used in conjunction with enhanced sampling techniques such as replica exchange, constant-pH molecular dynamics, and new 12–6–4 potentials for metal ions. Additional performance enhancements have been made that enable appreciable speed-up on GPUs relative to the previous software release.
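As a rough illustration of how a parameter-interpolation pathway feeds thermodynamic integration, the toy below integrates the ensemble average of dU/dlambda over lambda windows for a linearly interpolated potential. The energies are synthetic and none of this reflects Amber's actual soft-core machinery or API.

import numpy as np

# linear alchemical pathway: U(lambda) = (1 - lambda) * U0 + lambda * U1,
# so dU/dlambda = U1 - U0 evaluated on samples from each lambda window
def dU_dlambda(samples_U0, samples_U1):
    return np.mean(samples_U1 - samples_U0)

lambdas = np.linspace(0.0, 1.0, 11)
rng = np.random.default_rng(0)
means = []
for lam in lambdas:
    # pretend these energies came from an MD ensemble run at this lambda window
    U0 = rng.normal(loc=-100.0, scale=1.0, size=5000)
    U1 = rng.normal(loc=-97.0 + 2.0 * lam, scale=1.0, size=5000)
    means.append(dU_dlambda(U0, U1))

delta_G = np.trapz(means, lambdas)   # integrate <dU/dlambda> over lambda
print(f"estimated free-energy difference: {delta_G:.2f} (toy units)")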

251 citations


Journal ArticleDOI
TL;DR: The aim was to develop a super-resolution technique using convolutional neural networks for generating thin-slice knee MR images from thicker input slices, and to compare this method with alternative through-plane interpolation methods.
Abstract: PURPOSE To develop a super-resolution technique using convolutional neural networks for generating thin-slice knee MR images from thicker input slices, and compare this method with alternative through-plane interpolation methods. METHODS We implemented a 3D convolutional neural network entitled DeepResolve to learn residual-based transformations between high-resolution thin-slice images and lower-resolution thick-slice images at the same center locations. DeepResolve was trained using 124 double echo in steady-state (DESS) data sets with 0.7-mm slice thickness and tested on 17 patients. Ground-truth images were compared with DeepResolve, clinically used tricubic interpolation, and Fourier interpolation methods, along with state-of-the-art single-image sparse-coding super-resolution. Comparisons were performed using structural similarity, peak SNR, and RMS error image quality metrics for a multitude of thin-slice downsampling factors. Two musculoskeletal radiologists ranked the 3 data sets and reviewed the diagnostic quality of the DeepResolve, tricubic interpolation, and ground-truth images for sharpness, contrast, artifacts, SNR, and overall diagnostic quality. Mann-Whitney U tests evaluated differences among the quantitative image metrics, reader scores, and rankings. Cohen's Kappa (κ) evaluated interreader reliability. RESULTS DeepResolve had significantly better structural similarity, peak SNR, and RMS error than tricubic interpolation, Fourier interpolation, and sparse-coding super-resolution for all downsampling factors (p < .05, except 4 × and 8 × sparse-coding super-resolution downsampling factors). In the reader study, DeepResolve significantly outperformed (p < .01) tricubic interpolation in all image quality categories and overall image ranking. Both readers had substantial scoring agreement (κ = 0.73). CONCLUSION DeepResolve was capable of resolving high-resolution thin-slice knee MRI from lower-resolution thicker slices, achieving superior quantitative and qualitative diagnostic performance to both conventionally used and state-of-the-art methods.
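For context, the two conventional through-plane baselines named above can be sketched as follows (a generic illustration on synthetic data, not DeepResolve itself): cubic spline zooming along the slice axis, and Fourier interpolation by zero-padding the spectrum along that axis.

import numpy as np
from scipy.ndimage import zoom

def throughplane_cubic(vol, factor):
    # cubic spline interpolation along the slice axis (stand-in for the tricubic baseline)
    return zoom(vol, (factor, 1, 1), order=3)

def throughplane_fourier(vol, factor):
    # Fourier interpolation: zero-pad the spectrum along the slice axis
    n = vol.shape[0]
    F = np.fft.fftshift(np.fft.fft(vol, axis=0), axes=0)
    pad = (n * factor - n) // 2
    Fp = np.pad(F, ((pad, n * factor - n - pad), (0, 0), (0, 0)))
    return np.real(np.fft.ifft(np.fft.ifftshift(Fp, axes=0), axis=0)) * factor

thick = np.random.rand(16, 64, 64)   # toy thick-slice volume (slices, H, W)
print(throughplane_cubic(thick, 3).shape, throughplane_fourier(thick, 3).shape)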

243 citations


Journal ArticleDOI
TL;DR: In this article, a convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields to synthesize fluid simulations from a set of reduced parameters.
Abstract: This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.
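A minimal sketch of a divergence penalty on a sampled 2-D velocity field is given below. It uses central differences and a stream-function test field, and is only an illustration of the divergence-free constraint, not the paper's actual loss construction.

import numpy as np

def divergence_2d(u, v, dx=1.0, dy=1.0):
    # central-difference divergence of a 2-D velocity field (u, v)
    return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

def divergence_penalty(u, v):
    # mean squared divergence, usable as an auxiliary training loss
    return np.mean(divergence_2d(u, v) ** 2)

# divergence-free test field: u = dpsi/dy, v = -dpsi/dx for a stream function psi
y, x = np.mgrid[0:64, 0:64] / 64.0
psi = np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)
u = np.gradient(psi, 1.0 / 64, axis=0)
v = -np.gradient(psi, 1.0 / 64, axis=1)
print(f"penalty on a (nearly) divergence-free field: {divergence_penalty(u, v):.2e}")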

202 citations


Posted Content
TL;DR: This article proposes a regularization procedure that encourages interpolated outputs to appear more realistic by fooling a critic network trained to recover the mixing coefficient from interpolated data; the regularizer significantly improves interpolation in this setting.
Abstract: Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.
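Schematically, and as a hedged reading of the description above, the critic $d$ and the autoencoder (encoder $f$, decoder $g$) are trained with losses of roughly the following form, where $\hat{x}_{\alpha}=g(\alpha f(x_1)+(1-\alpha)f(x_2))$ is the decoded interpolant, $\alpha\in[0,\tfrac{1}{2}]$ is the mixing coefficient, and $\gamma$ is a fixed scalar used to regularize the critic:

$\mathcal{L}_{d} = \lVert d(\hat{x}_{\alpha}) - \alpha \rVert^{2} + \lVert d\big(\gamma x + (1-\gamma)\, g(f(x))\big) \rVert^{2}, \qquad \mathcal{L}_{f,g} = \lVert x - g(f(x)) \rVert^{2} + \lambda\, \lVert d(\hat{x}_{\alpha}) \rVert^{2}.$

The critic tries to recover $\alpha$ from the interpolated output, while the autoencoder is penalized (with weight $\lambda$) whenever the critic succeeds, which pushes interpolants toward realistic-looking reconstructions.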

200 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed array interpolation-based DOA estimation algorithm achieves improved performance as compared to existing coarray-based DOA estimation algorithms in terms of the number of achievable degrees-of-freedom and estimation accuracy.
Abstract: In this letter, we propose a coprime array interpolation approach to provide an off-grid direction-of-arrival (DOA) estimation. Through array interpolation, a uniform linear array (ULA) with the same aperture is generated from the deterministic non-uniform coprime array. Taking the observed correlations calculated from the signals received at the coprime array, a gridless convex optimization problem is formulated to recover the rows and columns of the correlation matrix corresponding to the interpolated sensors. The optimized Hermitian positive semidefinite Toeplitz matrix functions as the covariance matrix of the interpolated ULA, which makes it possible to resolve off-grid sources. Simulation results demonstrate that the proposed array interpolation-based DOA estimation algorithm achieves improved performance as compared to existing coarray-based DOA estimation algorithms in terms of the number of achievable degrees-of-freedom and estimation accuracy.
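A simplified sketch of the covariance reconstruction step is given below: it fits a Hermitian positive semidefinite Toeplitz matrix to correlations observed on a subset of lags, using a trace-regularized least-squares surrogate rather than the paper's exact gridless program. The array size, lag pattern, and regularization weight are toy assumptions.

import numpy as np
import cvxpy as cp

N = 8                                   # interpolated virtual ULA size (assumed)
observed_lags = [0, 1, 2, 3, 5, 7]      # lags with measured correlations (assumed)

rng = np.random.default_rng(0)
doas = np.deg2rad([-10.0, 20.0])        # toy sources
lags = np.arange(N)
r_true = sum(p * np.exp(1j * np.pi * np.sin(th) * lags)
             for p, th in zip([1.0, 0.8], doas))
r_obs = r_true + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

T = cp.Variable((N, N), hermitian=True)
constraints = [T >> 0]
for i in range(N - 1):                  # enforce Toeplitz structure (constant diagonals)
    for j in range(N - 1):
        constraints.append(T[i + 1, j + 1] == T[i, j])

fit = cp.norm(cp.hstack([T[k, 0] - r_obs[k] for k in observed_lags]), 2)
cp.Problem(cp.Minimize(fit + 0.05 * cp.real(cp.trace(T))), constraints).solve()
print("recovered first column:\n", np.round(np.array(T.value)[:, 0], 3))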

185 citations


Posted Content
TL;DR: A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels and is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly.
Abstract: Motion estimation (ME) and motion compensation (MC) have been widely used for classical video frame interpolation systems over the past decades. Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed. However, existing learning based methods typically estimate either flow or compensation kernels, thereby limiting performance on both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and compensation driven neural network for video frame interpolation. A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly. The proposed model benefits from the advantages of motion estimation and compensation methods without using hand-crafted features. Compared to existing methods, our approach is computationally efficient and able to generate more visually appealing results. Furthermore, the proposed MEMC-Net can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against the state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.

Proceedings Article
25 Jun 2018
TL;DR: This article showed that classical learning methods which interpolate the training data can achieve optimal rates for the problems of nonparametric regression and prediction with square loss.
Abstract: We show that classical learning methods interpolating the training data can achieve optimal rates for the problems of nonparametric regression and prediction with square loss.
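One classical family of interpolating estimators of this kind is Nadaraya-Watson regression with a singular kernel; the sketch below is a generic illustration (not a reproduction of the paper's analysis) of how such a method fits the training data exactly while still averaging nearby labels elsewhere.

import numpy as np

def singular_kernel_regressor(X, y, x_query, a=2.0, eps=1e-12):
    # Nadaraya-Watson estimate with the singular kernel K(u) = ||u||^(-a);
    # the weight blows up at zero distance, so training points are reproduced exactly
    d = np.linalg.norm(X - x_query, axis=1)
    if np.any(d < eps):
        return y[np.argmin(d)]
    w = d ** (-a)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
print(singular_kernel_regressor(X, y, X[0]), y[0])        # identical: exact interpolation
print(singular_kernel_regressor(X, y, np.array([0.25])))  # smooth estimate elsewhere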

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, a neural network decoder is proposed to directly estimate the phase decomposition of the intermediate frame, which is superior to the hand-crafted heuristics previously used in phase-based methods and also compares favorably to recent deep learning based approaches for video frame interpolation on challenging datasets.
Abstract: Most approaches for video frame interpolation require accurate dense correspondences to synthesize an in-between frame. Therefore, they do not perform well in challenging scenarios with e.g. lighting changes or motion blur. Recent deep learning approaches that rely on kernels to represent motion can only alleviate these problems to some extent. In those cases, methods that use a per-pixel phase-based motion representation have been shown to work well. However, they are only applicable for a limited amount of motion. We propose a new approach, PhaseNet, that is designed to robustly handle challenging scenarios while also coping with larger motion. Our approach consists of a neural network decoder that directly estimates the phase decomposition of the intermediate frame. We show that this is superior to the hand-crafted heuristics previously used in phase-based methods and also compares favorably to recent deep learning based approaches for video frame interpolation on challenging datasets.
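PhaseNet itself operates on a steerable-pyramid decomposition, but the 1-D toy below (a global FFT, purely illustrative) shows why phase is a convenient motion representation: a shift appears as a phase ramp, so averaging the phases of two frames synthesizes the half-way frame.

import numpy as np

n = 256
t = np.arange(n)
frame0 = np.sin(2 * np.pi * 5 * t / n)
frame1 = np.roll(frame0, 8)                 # frame0 shifted by 8 samples

F0, F1 = np.fft.fft(frame0), np.fft.fft(frame1)
dphi = np.angle(F1 * np.conj(F0))           # per-frequency phase difference
mid = np.abs(F0) * np.exp(1j * (np.angle(F0) + 0.5 * dphi))
frame_half = np.real(np.fft.ifft(mid))      # interpolated frame at t = 0.5

print(np.allclose(frame_half, np.roll(frame0, 4), atol=1e-6))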

Proceedings Article
13 Jun 2018
TL;DR: In this paper, the authors take a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and singularly weighted $k$-nearest neighbor schemes.
Abstract: Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting and random forests. Their performance is consistently robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allows for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and singularly weighted $k$-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems. Moreover, the nearest neighbor schemes exhibit optimal rates under some standard statistical assumptions. Finally, this paper suggests a way to explain the phenomenon of adversarial examples, which are seemingly ubiquitous in modern machine learning, and also discusses some connections to kernel machines and random forests in the interpolated regime.
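The simplicial flavor of interpolation can be illustrated with off-the-shelf Delaunay-based piecewise-linear interpolation, which likewise passes exactly through the training data; this is a generic sketch, not the paper's exact construction.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

# piecewise-linear interpolation over a Delaunay triangulation: inside each simplex
# the prediction is the barycentric (linear) blend of the vertex labels
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))
y = np.sin(4 * X[:, 0]) + np.cos(3 * X[:, 1]) + 0.05 * rng.standard_normal(50)

f = LinearNDInterpolator(X, y)
print(np.allclose(f(X), y))           # interpolates the training data exactly
print(f(np.array([[0.4, 0.6]])))      # prediction inside the convex hull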

Posted Content
TL;DR: In this paper, a context-aware video frame interpolation method is proposed that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame.
Abstract: Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames still remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches.

Journal ArticleDOI
TL;DR: This paper presents a novel single-image super-resolution procedure, which upscales a given low-resolution input image to a high-resolution image while preserving the textural and structural information, and develops a single-image SR algorithm based on the proposed model.
Abstract: This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

Journal ArticleDOI
TL;DR: Wang et al. proposed a general and effective approach to solve the interpolation, prediction, and feature analysis of fine-grained air quality in one model called Deep Air Learning (DAL).
Abstract: The interpolation, prediction, and feature analysis of fine-grained air quality are three important topics in the area of urban air computing. The solutions to these topics can provide extremely useful information to support air pollution control, and consequently generate great societal and technical impacts. Most of the existing work solves the three problems separately by different models. In this paper, we propose a general and effective approach to solve the three problems in one model called Deep Air Learning (DAL). The main idea of DAL lies in embedding feature selection and semi-supervised learning in different layers of the deep learning network. The proposed approach utilizes the information pertaining to the unlabeled spatio-temporal data to improve the performance of the interpolation and the prediction, and performs feature selection and association analysis to reveal the main features relevant to the variation of the air quality. We evaluate our approach with extensive experiments based on real data sources obtained in Beijing, China. Experiments show that DAL is superior to the peer models from the recent literature when solving the topics of interpolation, prediction, and feature analysis of fine-grained air quality.

Journal ArticleDOI
02 May 2018-PLOS ONE
TL;DR: This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from the sampled traffic traces, which is a hard inverse problem.
Abstract: An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end network traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from the sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain accurate end-to-end network traffic in fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
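The spline and averaging steps can be sketched as follows; the fractal interpolation stage is not reproduced here (a linear interpolation stands in for its output) and the weight is an arbitrary assumption.

import numpy as np
from scipy.interpolate import CubicSpline

# coarse (sampled) traffic series at 5-unit granularity, refined to 1-unit granularity
t_coarse = np.arange(0, 50, 5)
traffic_coarse = 100 + 20 * np.sin(t_coarse / 7.0) + np.random.default_rng(0).normal(0, 2, t_coarse.size)
t_fine = np.arange(0, 46)

spline_est = CubicSpline(t_coarse, traffic_coarse)(t_fine)
fractal_est = np.interp(t_fine, t_coarse, traffic_coarse)  # placeholder for the fractal stage

w = 0.6                                              # assumed weight
combined = spline_est ** w * fractal_est ** (1 - w)  # weighted geometric average
print(combined[:5].round(2))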

Posted Content
TL;DR: Wang et al. proposed a fully data-driven deep learning algorithm for k-space interpolation, which can also be easily applied to non-Cartesian k-space trajectories by adding an additional regridding layer.
Abstract: The annihilating filter-based low-rank Hankel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. The success of ALOHA is due to the concise signal representation in the k-space domain thanks to the duality between structured low-rankness in the k-space domain and the image domain sparsity. Inspired by the recent mathematical discovery that links convolutional neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional regridding layer. Extensive numerical experiments show that the proposed deep learning method consistently outperforms the existing image-domain deep learning approaches.
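In the same spirit as the low-rank Hankel interpolation that ALOHA builds on, the 1-D toy below fills in missing samples of a sum-of-exponentials signal by iterative hard thresholding of its Hankel matrix. This is only a hedged illustration of the underlying idea, not ALOHA's algorithm and not the proposed deep network.

import numpy as np
from scipy.linalg import hankel

n, p, r = 64, 20, 2                     # signal length, pencil size, model order
t = np.arange(n)
x = np.exp(2j * np.pi * 0.11 * t) + 0.7 * np.exp(2j * np.pi * 0.31 * t)

rng = np.random.default_rng(0)
mask = rng.random(n) > 0.4              # keep roughly 60% of the "k-space" samples
y = x * mask

def H(v):                               # signal -> (n - p + 1) x p Hankel matrix
    return hankel(v[:n - p + 1], v[n - p:])

def H_avg(M):                           # average anti-diagonals back to a signal
    v = np.zeros(n, dtype=complex)
    c = np.zeros(n)
    for i in range(n - p + 1):
        for j in range(p):
            v[i + j] += M[i, j]
            c[i + j] += 1
    return v / c

z = y.astype(complex).copy()
for _ in range(100):
    U, s, Vh = np.linalg.svd(H(z), full_matrices=False)
    z = H_avg((U[:, :r] * s[:r]) @ Vh[:r])   # rank-r projection of the Hankel matrix
    z[mask] = y[mask]                        # keep the measured samples
print("max error on missing samples:", np.abs(z - x)[~mask].max().round(4))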

Proceedings Article
01 Jan 2018
TL;DR: It is shown that in the limit that the number of parameters $n$ is large, the landscape of the mean-squared error becomes convex and the representation error in the function scales as $O(n^{-1})$.
Abstract: The performance of neural networks on high-dimensional data distributions suggests that it may be possible to parameterize a representation of a given high-dimensional function with controllably small errors, potentially outperforming standard interpolation methods. We demonstrate, both theoretically and numerically, that this is indeed the case. We map the parameters of a neural network to a system of particles relaxing with an interaction potential determined by the loss function. We show that in the limit that the number of parameters $n$ is large, the landscape of the mean-squared error becomes convex and the representation error in the function scales as $O(n^{-1})$. In this limit, we prove a dynamical variant of the universal approximation theorem showing that the optimal representation can be attained by stochastic gradient descent, the algorithm ubiquitously used for parameter optimization in machine learning. In the asymptotic regime, we study the fluctuations around the optimal representation and show that they arise at a scale $O(n^{-1})$. These fluctuations in the landscape identify the natural scale for the noise in stochastic gradient descent. Our results apply to both single and multi-layer neural networks, as well as standard kernel methods like radial basis functions.
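Schematically, and under the assumption that the parametrization takes the usual mean-field form, the particle picture described above writes the network output as an empirical average over the $n$ units,

$f_n(x) = \frac{1}{n}\sum_{i=1}^{n} c_i\,\varphi(x, \theta_i), \qquad \min_{\{c_i,\theta_i\}}\ \mathbb{E}_x\big[(f(x) - f_n(x))^2\big] = O(n^{-1}),$

so that, in the stated large-$n$ limit, the representation error and the fluctuations around the optimum both shrink at the $O(n^{-1})$ scale described in the abstract.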

Journal ArticleDOI
30 Jun 2018
TL;DR: In this paper, the well-known fixed point theorem of Kannan is revisited under the aspect of interpolation, and a new Kannan-type contraction is proposed to maximize the rate of convergence.
Abstract: In this paper, we revisit the well-known fixed point theorem of Kannan under the aspect of interpolation. By using the interpolation notion, we propose a new Kannan-type contraction to maximize the rate of convergence.
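As a hedged rendering of the interpolative idea, the new contraction replaces the sum in the classical Kannan condition with a weighted geometric mean, roughly of the form

$d(Tx, Ty) \le \lambda\,\big[d(x, Tx)\big]^{\alpha}\,\big[d(y, Ty)\big]^{1-\alpha}$ for all $x, y \in X$ that are not fixed points of $T$, with $\lambda \in [0, 1)$ and $\alpha \in (0, 1)$,

whereas the classical Kannan contraction requires $d(Tx, Ty) \le \lambda\,\big(d(x, Tx) + d(y, Ty)\big)$ with $\lambda \in [0, \tfrac{1}{2})$.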

Journal ArticleDOI
TL;DR: In this paper, the uncertainties of three types of independent precipitation products, i.e., satellite-based, ground-based, and model-reanalysis products, over Mainland China are evaluated using the Triple Collocation (TC) method.

Journal ArticleDOI
TL;DR: The experimental results validate the feasibility of the proposed TDOA scheme, and an average positioning accuracy of 9.2 cm is achieved with a sampling rate of 500 MSa/s, an interpolation factor of 100 and a data length of 250 k samples.
Abstract: In this paper, a low-complexity time-difference-of-arrival (TDOA) based indoor visible light positioning (VLP) system using an enhanced practical localization scheme based on cross correlation is proposed and experimentally demonstrated. The proposed TDOA scheme offers two advantages: 1) the use of a virtual local oscillator to replace the real local oscillator for cross correlation at the receiver side, so as to reduce the hardware complexity; 2) the application of cubic spline interpolation on the correlation function to reduce the rigorous requirement on the sampling rate and to enhance the time-resolution of cross correlation. In order to achieve high positioning accuracy with minimum implementation complexity, parameter optimization is first performed in terms of sampling rate, interpolation factor, and data length for correlation. Using the obtained optimal parameters, we demonstrate a low-complexity indoor two-dimensional VLP system using the correlation-based TDOA scheme in a coverage area of 1.2 × 1.2 m² with a height of 2 m. The experimental results validate the feasibility of the proposed TDOA scheme, and an average positioning accuracy of 9.2 cm is achieved with a sampling rate of 500 MSa/s, an interpolation factor of 100 and a data length of 250 k samples.
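A generic sketch of sub-sample delay estimation by spline-interpolating the cross-correlation peak is shown below; the sampling rate, signals, and interpolation window are toy assumptions unrelated to the experimental VLP setup.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import correlate

fs = 100.0                                  # sample rate (toy units)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 1.3 * t) + 0.05 * rng.standard_normal(t.size)
true_delay = 7                              # integer-sample delay for the toy
y = np.roll(x, true_delay)

corr = correlate(y, x, mode="full")
lags = np.arange(-(x.size - 1), x.size)

k = np.argmax(corr)                         # coarse (integer-lag) peak
window = slice(k - 5, k + 6)                # refine around the peak with a cubic spline
spline = CubicSpline(lags[window], corr[window])
fine_lags = np.linspace(lags[k] - 1, lags[k] + 1, 2001)
delay_est = fine_lags[np.argmax(spline(fine_lags))]
print(f"estimated delay: {delay_est:.3f} samples (true: {true_delay})")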

Journal ArticleDOI
TL;DR: In this article, a new generalized numerical scheme is proposed for simulating variable-order fractional differential operators with power-law, exponential-law and Mittag-Leffler kernels.
Abstract: Variable-order differential operators can be employed as a powerful tool for modeling nonlinear fractional differential equations and chaotic systems. In this paper, we propose new generalized numerical schemes for simulating variable-order fractional differential operators with power-law, exponential-law and Mittag-Leffler kernels. The numerical schemes are based on the fundamental theorem of fractional calculus and Lagrange polynomial interpolation. These schemes were applied to simulate a chaotic financial system and a memcapacitor-based circuit chaotic oscillator. Numerical examples are presented to show the applicability and efficiency of this novel method.

Journal ArticleDOI
22 Dec 2018-Symmetry
TL;DR: By using an interpolative approach, the Hardy-Rogers fixed point theorem is established in the class of metric spaces, and the obtained result is supported by some examples.
Abstract: By using an interpolative approach, we recognize the Hardy-Rogers fixed point theorem in the class of metric spaces. The obtained result is supported by some examples. We also give the partial metric case, according to our result.

Journal ArticleDOI
TL;DR: In the course of the study, methods for correcting and analyzing spatial data recorded in a vector format, which is best suited for the spatial analysis of discrete objects, are presented.
Abstract: In the course of the study, we have presented methods for correcting and analyzing spatial data recorded in a vector format. The latter is best suited for the spatial analysis of discrete objects. However, when the spatial variable is represented as a field of scalar or vector values (for example, the spatial distribution of heavy-metal concentrations in soils or the velocity field of groundwater movement), a raster format is a more convenient way of recording the data. This approach is most often used for processes that are characterized by significant anisotropy. A characteristic feature of the inverse distance method is that the interpolated value at a measured point is equal to the measured value.
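A minimal sketch of inverse distance weighting is given below, illustrating the exact-interpolation property noted above; the sample locations and concentrations are hypothetical.

import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    # inverse distance weighting: weighted average with weights 1/d^power;
    # at a measured point the weight diverges, so the measured value is returned exactly
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# toy heavy-metal concentration samples at scattered locations (x, y)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.8]])
conc = np.array([12.0, 7.5, 9.1, 14.2, 10.0])

print(idw(pts, conc, np.array([0.5, 0.8])))   # equals 10.0 at the measured point
print(idw(pts, conc, np.array([0.3, 0.4])))   # smooth estimate elsewhere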

Proceedings ArticleDOI
01 Aug 2018
TL;DR: The adversarial autoencoder (AAE) is introduced to impose a uniform distribution on the feature representations and to apply linear interpolation in the latent space, which has the potential to generate a much broader set of augmentations for image classification.
Abstract: Effective training of deep neural networks requires a large amount of data to avoid underdetermination and poor generalization. Data augmentation alleviates this by using existing data more effectively. However, standard data augmentation produces only limited plausible alternative data by, for example, flipping, distorting, adding noise to, or cropping a patch from the original samples. In this paper, we introduce the adversarial autoencoder (AAE) to impose a uniform distribution on the feature representations and apply linear interpolation in the latent space, which has the potential to generate a much broader set of augmentations for image classification. As a possible “recognition via generation” framework, it has potential for several other classification tasks. Our experiments on the ILSVRC 2012 and CIFAR-10 datasets show that latent space interpolation (LSI) improves the generalization and performance of state-of-the-art deep neural networks.


Journal ArticleDOI
TL;DR: Wang et al. proposed a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness, which first transforms the perspective view of the target frame into the reference view; this not only increases the accuracy of block matching for the road surface but also improves the processing speed.
Abstract: Various 3D reconstruction methods have enabled civil engineers to detect damage on a road surface. To achieve the millimeter accuracy required for road condition assessment, a disparity map with subpixel resolution needs to be used. However, none of the existing stereo matching algorithms are specially suitable for the reconstruction of the road surface. Hence in this paper, we propose a novel dense subpixel disparity estimation algorithm with high computational efficiency and robustness. This is achieved by first transforming the perspective view of the target frame into the reference view, which not only increases the accuracy of the block matching for the road surface but also improves the processing speed. The disparities are then estimated iteratively using our previously published algorithm, where the search range is propagated from three estimated neighboring disparities. Since the search range is obtained from the previous iteration, errors may occur when the propagated search range is not sufficient. Therefore, a correlation maxima verification is performed to rectify this issue, and the subpixel resolution is achieved by conducting a parabola interpolation enhancement. Furthermore, a novel disparity global refinement approach developed from the Markov random fields and fast bilateral stereo is introduced to further improve the accuracy of the estimated disparity map, where disparities are updated iteratively by minimizing the energy function that is related to their interpolated correlation polynomials. The algorithm is implemented in C language with a near real-time performance. The experimental results illustrate that the absolute error of the reconstruction varies from 0.1 to 3 mm.
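The parabola interpolation enhancement step can be sketched with the standard three-point fit around the best integer disparity (shown here for a cost minimum; toy cost curve, not the authors' pipeline).

import numpy as np

def subpixel_parabola(costs, d_best):
    # fit a parabola through the costs at d_best - 1, d_best, d_best + 1
    # and return the disparity of its vertex
    c_m, c_0, c_p = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if abs(denom) < 1e-12:
        return float(d_best)
    return d_best + 0.5 * (c_m - c_p) / denom

# toy matching-cost curve with a true minimum at disparity 12.3
d = np.arange(0, 32)
costs = (d - 12.3) ** 2 + 0.01 * np.random.default_rng(0).standard_normal(d.size)
d_int = int(np.argmin(costs))
print(subpixel_parabola(costs, d_int))   # close to 12.3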

Journal ArticleDOI
TL;DR: In this article, the conditional multivariate normal (MVN) distribution is applied to the problem of ground motion estimation following a significant earthquake, where ground-motion observations are available for a limited set of locations and intensity measures (IMs).
Abstract: Following a significant earthquake, ground-motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground-motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground-motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations. Electronic Supplement: Demonstration Python script for the evaluation of the multivariate normal (MVN) with additional uncertainty.
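A minimal sketch of the conditional MVN update used for this kind of interpolation is given below; the covariance model (an exponential spatial correlation with assumed parameters) and the observed residuals are toy stand-ins, not a real ground-motion model or correlation function.

import numpy as np

def conditional_mvn(mu, cov, obs_idx, obs_vals):
    # condition a multivariate normal on observed components; return the
    # indices, conditional mean, and conditional covariance of the rest
    un_idx = np.setdiff1d(np.arange(len(mu)), obs_idx)
    S11 = cov[np.ix_(un_idx, un_idx)]
    S12 = cov[np.ix_(un_idx, obs_idx)]
    S22 = cov[np.ix_(obs_idx, obs_idx)]
    K = S12 @ np.linalg.inv(S22)                 # kriging-style weights
    mu_c = mu[un_idx] + K @ (obs_vals - mu[obs_idx])
    cov_c = S11 - K @ S12.T
    return un_idx, mu_c, cov_c

# toy setup: log-IM residuals at 5 sites with an exponential spatial correlation
coords = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [3, 2]], dtype=float)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
sigma = 0.6                                      # assumed within-event std (log units)
cov = sigma ** 2 * np.exp(-dist / 2.0)           # assumed correlation length of 2 km
mu = np.zeros(5)                                 # residual means are zero

obs_idx = np.array([0, 2])                       # sites with recordings
obs_vals = np.array([0.35, -0.10])               # observed residuals
idx, mu_c, cov_c = conditional_mvn(mu, cov, obs_idx, obs_vals)
print(idx, np.round(mu_c, 3), np.round(np.sqrt(np.diag(cov_c)), 3))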