
Showing papers by "Courant Institute of Mathematical Sciences published in 2017"


Proceedings Article
17 Jul 2017
TL;DR: This work introduces WGAN, an alternative to traditional GAN training that improves the stability of learning, avoids problems such as mode collapse, and provides meaningful learning curves useful for debugging and hyperparameter searches.
Abstract: We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.
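The core objective above can be illustrated with a minimal numpy sketch (not the authors' code; the critic values are assumed to come from some network f):

```python
import numpy as np

def wgan_critic_loss(critic_real, critic_fake):
    # The WGAN critic maximizes E[f(x_real)] - E[f(x_fake)], an estimate of
    # the Wasserstein-1 distance; written as a loss to minimize, signs flip.
    return critic_fake.mean() - critic_real.mean()

def clip_weights(weights, c=0.01):
    # The original WGAN heuristic: clip each weight to [-c, c] to (crudely)
    # enforce a Lipschitz constraint on the critic.
    return [np.clip(w, -c, c) for w in weights]
```

The generator is then trained to raise the critic's score on generated samples, i.e. to minimize -E[f(x_fake)].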

5,667 citations


Posted Content
TL;DR: This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
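The gradient penalty described above can be sketched in a few lines of numpy; here a toy linear critic f(x) = x @ w is assumed so that its input gradient is known in closed form (in practice it would be obtained by automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(x_real, x_fake, w, lam=10.0):
    # Sample points on straight lines between real and fake samples, then
    # penalize deviation of the critic's input-gradient norm from 1.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    # For the toy linear critic f(x) = x @ w, grad_x f(x_hat) = w everywhere.
    grads = np.broadcast_to(w, x_hat.shape)
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)
```

With a unit-norm w the penalty vanishes, reflecting the 1-Lipschitz target.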

4,133 citations


Proceedings Article
04 Dec 2017
TL;DR: The authors proposed to penalize the norm of the gradient of the critic with respect to its input to improve the training stability of Wasserstein GANs and achieve stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.

3,622 citations



Posted Content
TL;DR: This article uses a combination of known tricks that are rarely used together to train high-quality word vector representations, achieving state-of-the-art performance on a number of NLP tasks.
Abstract: Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.

784 citations


Posted Content
TL;DR: This paper proposed using adversarial training for open-domain dialogue generation, where the generator is trained to generate sequences that are indistinguishable from human-generated dialogue utterances, and the outputs from the discriminator are used as rewards for the generator.
Abstract: In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator, analogous to the human evaluator in the Turing test, to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially trained system generates higher-quality responses than previous baselines.

645 citations


Proceedings ArticleDOI
23 Jan 2017
TL;DR: This work applies adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances, and investigates models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls.
Abstract: We apply adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning problem where we jointly train two systems: a generative model to produce response sequences, and a discriminator, analogous to the human evaluator in the Turing test, to distinguish between the human-generated dialogues and the machine-generated ones. In this generative adversarial network approach, the outputs from the discriminator are used to encourage the system towards more human-like dialogue. Further, we investigate models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially trained system generates higher-quality responses than previous baselines.
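The reward mechanism described in both versions of this paper can be sketched as a REINFORCE-style update, with the discriminator's human-vs-machine probability serving as the reward (a toy sketch, not the authors' implementation; `logprob_grads` stands in for per-sample gradients of the generator's log-likelihood):

```python
import numpy as np

def policy_gradient_signal(logprob_grads, d_scores, baseline=0.5):
    # REINFORCE: weight each sample's log-probability gradient by its
    # advantage, i.e. the discriminator score minus a variance-reducing
    # baseline, then average over the batch.
    advantages = d_scores - baseline
    return np.mean(advantages[:, None] * logprob_grads, axis=0)
```

Responses the discriminator judges more human-like (score above the baseline) push their log-probability up; the rest push it down.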

644 citations


Proceedings Article
26 Dec 2017
TL;DR: The authors use a combination of known tricks that are rarely used together to train high-quality word vector representations, achieving state-of-the-art performance on a number of NLP tasks.
Abstract: Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.

481 citations


Journal ArticleDOI
13 Sep 2017-Neuron
TL;DR: An automated clustering approach and associated software package that have the potential to enable reproducible, automated spike sorting of larger-scale recordings than is currently possible, with accuracy comparable to or exceeding that achieved using manual or semi-manual techniques.

349 citations


Journal ArticleDOI
06 Apr 2017-PLOS ONE
TL;DR: Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability of identifying infection.
Abstract: Objective To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. Methods This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department defined as a patient having an infection related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag of words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. Results A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65–0.69) for the test data set. The AUC for the chief complaint model which also included demographic and vital sign data was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81–0.84) for the test data set. The best performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85–0.87) for the test data set. 
The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84–0.86) for the test data set. Conclusion Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection.
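The AUC comparisons in this study can be reproduced conceptually with a small numpy helper (a generic sketch, not the study's code), using the rank interpretation of the AUC:

```python
import numpy as np

def auc(labels, scores):
    # AUC = probability that a randomly chosen positive case receives a
    # higher score than a randomly chosen negative case (ties count half).
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

A model with no discriminatory power scores 0.5; the jump from 0.67 (vitals only) to 0.86 (vitals plus free text) is the improvement the paper reports.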

220 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that motile structures can form spontaneously from hydrodynamic interactions alone, with no sensing or potential interactions, in a system of colloidal rollers suspended and translating above a floor, using both experiments and large-scale 3D simulations.
Abstract: Collections of rolling colloids are shown to pinch off into motile clusters resembling droplets sliding down a windshield. These stable dynamic structures are formed through a fingering instability that relies on hydrodynamic interactions alone. Condensation of objects into stable clusters occurs naturally in equilibrium and driven systems. It is commonly held that potential interactions, depletion forces, or sensing are the only mechanisms which can create long-lived compact structures. Here we show that persistent motile structures can form spontaneously from hydrodynamic interactions alone, with no sensing or potential interactions. We study this structure formation in a system of colloidal rollers suspended and translating above a floor, using both experiments and large-scale three-dimensional simulations. In this system, clusters originate from a previously unreported fingering instability, where fingers pinch off from an unstable front to form autonomous ‘critters’, whose size is selected by the height of the particles above the floor. These critters are a stable state of the system, move much faster than individual particles, and quickly respond to a changing drive. With speed and direction set by a rotating magnetic field, these active structures offer interesting possibilities for guided transport, flow generation, and mixing at the microscale.

Proceedings Article
21 Apr 2017
TL;DR: In this article, a local-entropy-based objective function, motivated by the local geometry of the energy landscape, is proposed for training deep neural networks; the gradient of the local entropy is computed before each update of the weights.
Abstract: This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or negative eigenvalues. We leverage this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.
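The two nested loops can be sketched for a toy problem (hyperparameters and the running-mean weighting below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_sgd_step(x, grad_f, eta=0.1, gamma=0.03,
                     sgld_steps=20, sgld_eta=0.01, eps=1e-3):
    # Inner loop: Langevin dynamics explores around the current weights x;
    # the running mean mu estimates the gradient of the local entropy.
    xp, mu = x.copy(), x.copy()
    for _ in range(sgld_steps):
        noise = np.sqrt(sgld_eta) * eps * rng.standard_normal(x.shape)
        xp = xp - sgld_eta * (grad_f(xp) + gamma * (xp - x)) + noise
        mu = 0.75 * mu + 0.25 * xp
    # Outer loop: descend along gamma * (x - mu), the local-entropy gradient.
    return x - eta * gamma * (x - mu)
```

On a convex toy loss the step simply moves toward the minimum; the point of the construction is to bias updates toward wide, flat regions of a non-convex landscape.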

Journal ArticleDOI
TL;DR: In this paper, a new treatment of the theory of mean convex flows with surgery is presented, based on the noncollapsing result of Andrews and Huisken-Sinestrari.
Abstract: In the last 15 years, White and Huisken-Sinestrari developed a far-reaching structure theory for the mean curvature flow of mean convex hypersurfaces. Their papers provide a package of estimates and structural results that yield a precise description of singularities and of high-curvature regions in a mean convex flow. In the present paper, we give a new treatment of the theory of mean convex (and k-convex) flows. This includes: (1) an estimate for derivatives of curvatures, (2) a convexity estimate, (3) a cylindrical estimate, (4) a global convergence theorem, (5) a structure theorem for ancient solutions, and (6) a partial regularity theorem. Our new proofs are both more elementary and substantially shorter than the original arguments. Our estimates are local and universal. A key ingredient in our new approach is the new noncollapsing result of Andrews [2]. Some parts are also inspired by the work of Perelman [32,33]. In a forthcoming paper [17], we will give a new construction of mean curvature flow with surgery based on the methods established in the present paper. Note added in May 2015. Since the first version of this paper was posted on arXiv in April 2013, the estimates have been used to construct mean convex flow with surgery in ℝ³ by Brendle and Huisken [5] in September 2013 and in another paper by the authors in April 2014. © 2016 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: In this paper, the Sobolev regularity disturbances to the periodic, plane Couette flow in the 3D incompressible Navier-Stokes equations at high Reynolds number were studied.
Abstract: We study Sobolev regularity disturbances to the periodic, plane Couette flow in the 3D incompressible Navier-Stokes equations at high Reynolds number $\textbf{Re}$. Our goal is to estimate how the stability threshold scales in $\textbf{Re}$: how large the initial perturbation can be while still resulting in a solution that does not transition away from Couette flow. In this work we prove that initial data which satisfies $\| u_{in} \|_{H^\sigma} \leq \delta\textbf{Re}^{-3/2}$ for any $\sigma > 9/2$ and some $\delta = \delta(\sigma) > 0$ depending only on $\sigma$, is global in time, remains within $O(\textbf{Re}^{-1/2})$ of the Couette flow in $L^2$ for all time, and converges to the class of "2.5 dimensional" streamwise-independent solutions referred to as streaks for times $t \gtrsim \textbf{Re}^{1/3}$. Numerical experiments performed by Reddy et al. with "rough" initial data estimated a threshold of $\sim \textbf{Re}^{-31/20}$, which shows very close agreement with our estimate.

Journal ArticleDOI
TL;DR: Combining the first large-scale serial electron tomography of whole mitotic spindles in early C. elegans embryos with live-cell imaging, and quantitatively analysing several models of microtubule growth, the authors conclude that minus-ends of KMTs have selectively detached and depolymerized from the centrosome.
Abstract: The mitotic spindle ensures the faithful segregation of chromosomes. Here we combine the first large-scale serial electron tomography of whole mitotic spindles in early C. elegans embryos with live-cell imaging to reconstruct all microtubules in 3D and identify their plus- and minus-ends. We classify them as kinetochore (KMTs), spindle (SMTs) or astral microtubules (AMTs) according to their positions, and quantify distinct properties of each class. While our light microscopy and mutant studies show that microtubules are nucleated from the centrosomes, we find only a few KMTs directly connected to the centrosomes. Indeed, by quantitatively analysing several models of microtubule growth, we conclude that minus-ends of KMTs have selectively detached and depolymerized from the centrosome. In toto, our results show that the connection between centrosomes and chromosomes is mediated by an anchoring into the entire spindle network and that any direct connections through KMTs are few and likely very transient.

Journal ArticleDOI
TL;DR: It is demonstrated that a single pulsed macrolide antibiotic treatment (PAT) course early in life is sufficient to lead to durable alterations to the murine intestinal microbiota, ileal gene expression, specific intestinal T-cell populations, and secretory IgA expression.
Abstract: Broad-spectrum antibiotics are frequently prescribed to children. Early childhood represents a dynamic period for the intestinal microbial ecosystem, which is readily shaped by environmental cues; antibiotic-induced disruption of this sensitive community may have long-lasting host consequences. Here we demonstrate that a single pulsed macrolide antibiotic treatment (PAT) course early in life is sufficient to lead to durable alterations to the murine intestinal microbiota, ileal gene expression, specific intestinal T-cell populations, and secretory IgA expression. A PAT-perturbed microbial community is necessary for host effects and sufficient to transfer delayed secretory IgA expression. Additionally, early-life antibiotic exposure has lasting and transferable effects on microbial community network topology. Our results indicate that a single early-life macrolide course can alter the microbiota and modulate host immune phenotypes that persist long after exposure has ceased. High or multiple doses of macrolide antibiotics, when given early in life, can perturb the metabolic and immunological development of lab mice. Here, Ruiz et al. show that even a single macrolide course, given early in life, leads to long-lasting changes in the gut microbiota and immune system of mice.

Journal ArticleDOI
TL;DR: The proposed algorithm is a sequential quadratic optimization method that employs Broyden-Fletcher-Goldfarb-Shanno quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy.
Abstract: We propose an algorithm for solving nonsmooth, nonconvex, constrained optimization problems as well as a new set of visualization tools for comparing the performance of optimization algorithms. Our algorithm is a sequential quadratic optimization method that employs Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy. While our method has no convergence guarantees, we have found it to perform very well in practice on challenging test problems in controller design involving both locally Lipschitz and non-locally-Lipschitz objective and constraint functions with constraints that are typically active at local minimizers. In order to empirically validate and compare our method with available alternatives, on a new test set of 200 problems of varying sizes, we employ new visualization tools which we call relative minimization profiles. Such profiles are designed to simultaneously assess the relative performance of several algorithms with respect to objective quality, feasibility, and speed of progress, highlighting the trade-offs between these measures when comparing algorithm performance.
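The exact penalty function at the heart of the method can be illustrated with a tiny sketch (the generic l1 penalty form; the steering strategy that adapts mu is omitted):

```python
def exact_penalty(f, constraints, mu):
    # phi(x) = f(x) + mu * total violation of the constraints c_i(x) <= 0.
    # For mu large enough, minimizers of phi coincide with constrained
    # minimizers -- without smoothing away the nonsmooth structure.
    def phi(x):
        return f(x) + mu * sum(max(0.0, c(x)) for c in constraints)
    return phi
```

On feasible points phi agrees with f; violations are charged linearly, which is what makes the penalty "exact".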

Proceedings Article
06 Aug 2017
TL;DR: The results demonstrate that the AdaNet algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved by neural networks found by standard approaches.
Abstract: We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset and on the Criteo dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved by neural networks found by standard approaches.

Journal ArticleDOI
TL;DR: A new method for sampling stochastic displacements in Brownian Dynamics (BD) simulations of colloidal scale particles, which circumvents the super-linear scaling exhibited by all known iterative sampling methods applied directly to the RPY tensor and scales linearly with the number of particles.
Abstract: We present a new method for sampling stochastic displacements in Brownian Dynamics (BD) simulations of colloidal scale particles. The method relies on a new formulation for Ewald summation of the Rotne-Prager-Yamakawa (RPY) tensor, which guarantees that the real-space and wave-space contributions to the tensor are independently symmetric and positive-definite for all possible particle configurations. Brownian displacements are drawn from a superposition of two independent samples: a wave-space (far-field or long-ranged) contribution, computed using techniques from fluctuating hydrodynamics and non-uniform fast Fourier transforms; and a real-space (near-field or short-ranged) correction, computed using a Krylov subspace method. The combined computational complexity of drawing these two independent samples scales linearly with the number of particles. The proposed method circumvents the super-linear scaling exhibited by all known iterative sampling methods applied directly to the RPY tensor, which results from the power law growth of the condition number of the tensor with the number of particles. For geometrically dense microstructures (fractal dimension equal three), the performance is independent of volume fraction, while for tenuous microstructures (fractal dimension less than three), such as gels and polymer solutions, the performance improves with decreasing volume fraction. This is in stark contrast with other related linear-scaling methods such as the force coupling method and the fluctuating immersed boundary method, for which performance degrades with decreasing volume fraction. Calculations for hard sphere dispersions and colloidal gels are illustrated and used to explore the role of microstructure on performance of the algorithm.
In practice, the logarithmic part of the predicted scaling is not observed and the algorithm scales linearly for up to 4×10⁶ particles, obtaining speed-ups of over an order of magnitude over existing iterative methods, and making the cost of computing Brownian displacements comparable to the cost of computing deterministic displacements in BD simulations. A high-performance implementation employing non-uniform fast Fourier transforms implemented on graphics processing units and integrated with the software package HOOMD-blue is used for benchmarking.
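The superposition at the core of the sampler relies on a simple fact about Gaussians, sketched here with dense matrices (a conceptual illustration only; the paper's point is to draw each piece in linear time without ever forming the tensor):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_displacement(M_wave, M_real):
    # If M = M_wave + M_real with both parts positive (semi-)definite,
    # summing independent Gaussian draws with covariances M_wave and M_real
    # yields a single draw with covariance M.
    n = M_wave.shape[0]
    jitter = 1e-12 * np.eye(n)  # numerical safeguard for the factorization
    d_wave = np.linalg.cholesky(M_wave + jitter) @ rng.standard_normal(n)
    d_real = np.linalg.cholesky(M_real + jitter) @ rng.standard_normal(n)
    return d_wave + d_real
```

This is why the independent symmetry and positive-definiteness of the real-space and wave-space contributions is the crucial property of the new Ewald formulation.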

Journal ArticleDOI
TL;DR: In this paper, a rich polymorphism of coumarin grown from the melt was identified and their crystal structures were solved using a combination of computational crystal structure prediction algorithms and X-ray powder diffraction.
Abstract: Coumarin, a simple, commodity chemical isolated from beans in 1820, has, to date, only yielded one solid state structure. Here, we report a rich polymorphism of coumarin grown from the melt. Four new metastable forms were identified and their crystal structures were solved using a combination of computational crystal structure prediction algorithms and X-ray powder diffraction. With five crystal structures, coumarin has become one of the few rigid molecules showing extensive polymorphism at ambient conditions. We demonstrate the crucial role of advanced electronic structure calculations including many-body dispersion effects for accurate ranking of the stability of coumarin polymorphs and the need to account for anharmonic vibrational contributions to their free energy. As such, coumarin is a model system for studying weak intermolecular interactions, crystallization mechanisms, and kinetic effects.

Journal ArticleDOI
TL;DR: This work presents the first technique to include many-body hydrodynamic interactions (HIs), and the resulting fluid flows, in cellular assemblies of flexible fibers; it uses a pseudo-spectral representation of fiber positions and implicit time-stepping to resolve large fiber deformations.

Proceedings Article
01 Jan 2017
TL;DR: In this paper, the authors show that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method, and apply these results to the computation of the asymptotic performance of single-layer random feature methods on a memorization task.
Abstract: Neural network configurations with random weights play an important role in the analysis of deep learning. They define the initial loss landscape and are closely related to kernel and random feature methods. Despite the fact that these networks are built out of random matrices, the vast and powerful machinery of random matrix theory has so far found limited success in studying them. A main obstacle in this direction is that neural networks are nonlinear, which prevents the straightforward utilization of many of the existing mathematical results. In this work, we open the door for direct applications of random matrix theory to deep learning by demonstrating that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method. The test case for our study is the Gram matrix $Y^TY$, $Y=f(WX)$, where $W$ is a random weight matrix, $X$ is a random data matrix, and $f$ is a pointwise nonlinear activation function. We derive an explicit representation for the trace of the resolvent of this matrix, which defines its limiting spectral distribution. We apply these results to the computation of the asymptotic performance of single-layer random feature methods on a memorization task and to the analysis of the eigenvalues of the data covariance matrix as it propagates through a neural network. As a byproduct of our analysis, we identify an intriguing new class of activation functions with favorable properties.
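The test-case Gram matrix is easy to generate empirically (a small numpy sketch with illustrative dimensions, useful for comparing a histogram of eigenvalues against the limiting spectral distribution derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gram_eigenvalues(n0=100, n1=100, m=200, f=np.tanh):
    # Y = f(W X) with Gaussian weight matrix W and data matrix X;
    # return the eigenvalues of the (normalized) Gram matrix Y^T Y.
    W = rng.standard_normal((n1, n0)) / np.sqrt(n0)
    X = rng.standard_normal((n0, m))
    Y = f(W @ X)
    return np.linalg.eigvalsh(Y.T @ Y / n1)
```

Since Y^T Y is positive semi-definite, all eigenvalues are nonnegative; their distribution depends on the aspect ratios n0/m and n1/m and on the nonlinearity f.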

Journal ArticleDOI
TL;DR: Building on work of Chatterjee and Dembo, the authors consider the problem of estimating the probability that the number of triangles in the Erdős–Rényi random graph with edge density p is at least twice its mean.
Abstract: What is the probability that the number of triangles in G_{n,p}, the Erdős–Rényi random graph with edge density p, is at least twice its mean? Writing it as exp[−r_{n,p}], already the order of the rate function r_{n,p} was a longstanding open problem when p = o(1), finally settled in 2012 by Chatterjee and by DeMarco and Kahn, who independently showed that r_{n,p} ≍ n²p² log(1/p) for p ≳ (log n)/n; the exact asymptotics of r_{n,p} remained unknown. The following variational problem can be related to this large deviation question at p ≳ (log n)/n: for δ > 0 fixed, what is the minimum asymptotic p-relative entropy of a weighted graph on n vertices with triangle density at least (1 + δ)p³? A beautiful large deviation framework of Chatterjee and Varadhan (2011) reduces upper tails for triangles to a limiting version of this problem for fixed p. A very recent breakthrough of Chatterjee and Dembo extended its validity to n^{−α} ≤ p ≤ 1 for an explicit α > 0, and plausibly it holds in all of the above sparse regime.

Journal ArticleDOI
TL;DR: In this article, the inviscid limit of the Navier-Stokes equations with free boundary was studied and the existence of solutions on a uniform time interval was proved by using a suitable functional framework based on Sobolev conormal spaces.
Abstract: We study the inviscid limit of the free boundary Navier–Stokes equations. We prove the existence of solutions on a uniform time interval by using a suitable functional framework based on Sobolev conormal spaces. This allows us to use a strong compactness argument to justify the inviscid limit. Our approach does not rely on the justification of asymptotic expansions. In particular, we get a new existence result for the Euler equations with free surface from the one for Navier–Stokes.

Journal ArticleDOI
TL;DR: In this paper, the authors prove global regularity for the full water-wave system in 3D for small data, under the influence of both gravity and surface tension, using a combination of energy estimates and matching dispersive estimates.
Abstract: In this paper we prove global regularity for the full water-wave system in three dimensions for small data, under the influence of both gravity and surface tension. This problem presents essential difficulties which were absent in all of the earlier global regularity results for other water-wave models. To construct global solutions, we use a combination of energy estimates and matching dispersive estimates. There is a significant new difficulty in proving energy estimates in our problem, namely the combination of slow pointwise decay of solutions (no better than ${\lvert t \rvert}^{- 5/6}$) and the presence of a large, codimension-$1$, set of quadratic time-resonances. To deal with such a situation, we propose here a new mechanism, which exploits a non-degeneracy property of the time-resonant hypersurfaces and some special structure of the quadratic part of the non-linearity, connected to the conserved energy of the system. The dispersive estimates rely on analysis of the Duhamel formula in the Fourier space. The main contributions come from the set of space-time resonances, which is a large set of dimension $1$. To control the corresponding bilinear interactions, we use harmonic analysis techniques, such as orthogonality arguments in the Fourier space and atomic decompositions of functions. Most importantly, we construct and use a refined norm which is well adapted to the geometry of the problem.

Journal ArticleDOI
TL;DR: It is shown that both longitudinal and transverse velocity increments scale on locally averaged dissipation rate, just as postulated by Kolmogorov's refined similarity hypothesis, and that, in isotropic turbulence, a single independent scaling adequately describes fluid turbulence in the inertial range.
Abstract: Using the largest database of isotropic turbulence available to date, generated by the direct numerical simulation (DNS) of the Navier-Stokes equations on an 8192^{3} periodic box, we show that the longitudinal and transverse velocity increments scale identically in the inertial range. By examining the DNS data at several Reynolds numbers, we infer that the contradictory results of the past on the inertial-range universality are artifacts of low Reynolds number and residual anisotropy. We further show that both longitudinal and transverse velocity increments scale on locally averaged dissipation rate, just as postulated by Kolmogorov's refined similarity hypothesis, and that, in isotropic turbulence, a single independent scaling adequately describes fluid turbulence in the inertial range.
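The scaling statements above concern velocity structure functions, which on a 1D periodic sample can be computed with a short sketch (the generic definition, not the DNS analysis code):

```python
import numpy as np

def velocity_increment(u, r):
    # delta_r u(x) = u(x + r) - u(x), with r in grid points, periodic wrap.
    return np.roll(u, -r) - u

def structure_function(u, r, p=2):
    # p-th order structure function: the p-th moment of |increments|.
    return np.mean(np.abs(velocity_increment(u, r)) ** p)
```

In the inertial range, Kolmogorov's refined similarity hypothesis predicts that these moments, conditioned on the locally averaged dissipation rate, behave identically for longitudinal and transverse increments, which is what the paper confirms.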

Journal ArticleDOI
TL;DR: BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced, based on a new type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains.
Abstract: A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a new type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced. Under the assumption that the subdomains are all built from elements of a coarse triangulation of the given domain, and that the material parameters are constant in each subdomain, a bound is obtained for the condition number of the preconditioned linear system which is independent of the values and the jumps of these parameters across the interface between the subdomains. Numerical experiments, using the PETSc library, are also presented which support the theory and show the effectiveness of the algorithms even for problems not covered by the theory. Included are also experiments with Brezzi-Douglas-Marini finite element approximations.

Journal ArticleDOI
TL;DR: In this paper, the authors established several quantitative results about singular Ricci flows, including estimates on the curvature and volume, and the set of singular times, including the singular curvature of the Ricci flow.
Abstract: We establish several quantitative results about singular Ricci flows, including estimates on the curvature and volume, and the set of singular times.

Journal ArticleDOI
TL;DR: These findings suggest a mechanism by which spiking activity during the slow oscillation acts to maintain network statistics that promote a skewed distribution of neuronal firing rates, and perturbation of that activity by hippocampal replay acts to integrate new memory traces into the existing cortical network.

Journal ArticleDOI
TL;DR: In this article, a uniform recovery guarantee is established for the local sparsity in levels class, based on a variant of the standard restricted isometry property for sparse in levels vectors, known as the restricted isometry property in levels.