
Showing papers by "Courant Institute of Mathematical Sciences published in 2013"


Posted Content
TL;DR: This integrated framework for using Convolutional Networks for classification, localization and detection is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 and obtained very competitive results for the detection and classification tasks.
Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.

902 citations


Posted Content
TL;DR: In this article, the Fourier domain is used to accelerate the training and inference of convolutional networks on a GPU architecture, which can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations.
Abstract: Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.
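The core trick is the convolution theorem: a (circular) convolution becomes a pointwise product after a Fourier transform, and the transform of each feature map can be computed once and reused against many kernels. A minimal 1-D NumPy sketch of the principle (our own illustration, not the paper's GPU implementation):

```python
import numpy as np

def conv_via_fft(x, k):
    """Circular convolution as a pointwise product in the Fourier domain.
    In a ConvNet, np.fft.fft(x) would be computed once per feature map
    and reused across every kernel that consumes that map."""
    n = len(x)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n)))

def conv_direct(x, k):
    """Direct O(n * len(k)) circular convolution, for comparison."""
    n = len(x)
    out = np.zeros(n)
    for i in range(n):
        for j in range(len(k)):
            out[i] += x[(i - j) % n] * k[j]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
k = rng.standard_normal(5)
assert np.allclose(conv_via_fft(x, k), conv_direct(x, k))
```

The FFT route costs O(n log n) per transform regardless of kernel size, which is where the speedup over direct convolution comes from for large feature maps.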

463 citations


Posted Content
TL;DR: This work addresses multi-class segmentation of indoor scenes with RGB-D inputs by applying a multiscale convolutional network to learn features directly from the images and the depth information.
Abstract: This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art results on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.

406 citations


Proceedings Article
16 Jun 2013
TL;DR: This paper proposes a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time; the method relies on local gradient variations across samples, making it suitable for non-stationary problems, and the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search.
Abstract: The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.
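A toy sketch in the spirit of the idea (not the paper's exact update rules, which also involve curvature estimates): keep per-parameter running averages of the gradient and the squared gradient, and scale each step by the estimated "signal fraction" of the gradient, so rates shrink automatically when gradients are dominated by sampling noise. All names and constants here are illustrative.

```python
import numpy as np

def adaptive_sgd(grad, theta0, base_rate=0.5, decay=0.9, steps=200, seed=0):
    """Per-parameter adaptive SGD: the step size is scaled by
    g_bar**2 / g2_bar, an estimate of how much of the squared gradient
    is signal rather than sample-to-sample noise."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    g_bar = np.zeros_like(theta)    # running mean of gradients
    g2_bar = np.ones_like(theta)    # running mean of squared gradients
    for _ in range(steps):
        g = grad(theta, rng)
        g_bar = decay * g_bar + (1 - decay) * g
        g2_bar = decay * g2_bar + (1 - decay) * g * g
        rate = base_rate * g_bar**2 / (g2_bar + 1e-12)
        theta -= rate * g
    return theta

# Noisy gradient of f(theta) = 0.5 * ||theta||^2; the true gradient is theta.
noisy_grad = lambda theta, rng: theta + 0.1 * rng.standard_normal(theta.shape)
theta = adaptive_sgd(noisy_grad, [2.0, -3.0])
assert np.linalg.norm(theta) < 1.0
```

Near the optimum the gradient is mostly noise, the signal fraction collapses, and the effective rate decays on its own, which is the behaviour the paper automates without manual tuning.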

395 citations


Journal ArticleDOI
TL;DR: A new methodology for density estimation that builds on the one developed by Tabak and Vanden‐Eijnden, normalizes the data points through the composition of simple maps and determines the parameters of each map through the maximization of a local quadratic approximation to the log‐likelihood.
Abstract: A new methodology for density estimation is proposed. The methodology, which builds on the one developed by Tabak and Vanden-Eijnden, normalizes the data points through the composition of simple maps. The parameters of each map are determined through the maximization of a local quadratic approximation to the log-likelihood. Various candidates for the elementary maps of each step are proposed; criteria for choosing among them include robustness, computational simplicity, and good behavior in high-dimensional settings. A good choice is that of localized radial expansions, which depend on a single parameter: all the complexity of arbitrary, possibly convoluted probability densities can be built through the composition of such simple maps. © 2012 Wiley Periodicals, Inc.
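The mechanics of the approach can be illustrated in one dimension (a toy of our own with illustrative parameters, not the paper's algorithm): compose simple invertible maps, accumulate the log-Jacobian at each stage, and evaluate a standard normal at the final image of the data.

```python
import numpy as np

def radial_map(x, c, a, s):
    """One elementary map: a localized radial expansion around center c,
    f(x) = x + a*(x - c)*exp(-(x - c)^2 / (2 s^2)), with its derivative
    in closed form so the change-of-variables term is cheap."""
    r = x - c
    w = np.exp(-r**2 / (2 * s**2))
    return x + a * r * w, 1 + a * w * (1 - r**2 / s**2)

def log_density(x, maps):
    """log p(x) for a standard normal pulled back through composed maps:
    log p(x) = log N(f_K(...f_1(x))) + sum_k log|f_k'|."""
    logdet = np.zeros_like(x)
    y = x
    for (c, a, s) in maps:
        y, dy = radial_map(y, c, a, s)
        logdet += np.log(np.abs(dy))
    return -0.5 * y**2 - 0.5 * np.log(2 * np.pi) + logdet

x = np.linspace(-3, 3, 7)
lp = log_density(x, [(0.0, 0.2, 1.0), (1.0, -0.1, 0.5)])
assert lp.shape == x.shape and np.all(np.isfinite(lp))
```

Each stage only needs its own parameters and Jacobian, which is what makes fitting one simple map at a time tractable.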

330 citations


Journal ArticleDOI
22 May 2013-PLOS ONE
TL;DR: This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams.
Abstract: The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org.

326 citations


Journal ArticleDOI
TL;DR: This study yields detailed information about the bottom of the energy landscape, including the absolute minimum and the other local minima, and describes an interesting layered structure of the low critical values for the Hamiltonians of these models.
Abstract: We give an asymptotic evaluation of the complexity of spherical p-spin spin-glass models via random matrix theory. This study enables us to obtain detailed information about the bottom of the energy landscape, including the absolute minimum (the ground state) and the other local minima, and to describe an interesting layered structure of the low critical values for the Hamiltonians of these models. We also show that our approach allows us to compute the related TAP-complexity and extend the results known in the physics literature. As an independent tool, we prove an LDP for the k-th largest eigenvalue of the GOE, extending the results of (BDG01). How many critical values of given index and below a given level does a typical random Morse function have on a high-dimensional manifold? Our work addresses this question in a very special case. We look at certain natural random Gaussian functions on the N-dimensional sphere known as p-spin spherical spin glass models. We cannot yet answer the question above about the typical number, but we can study thoroughly the mean number, which we show is exponentially large in N. We introduce a new identity, based on the classical Kac-Rice formula, relating random matrix theory and the problem of counting these critical values. Using this identity and tools from random matrix theory, we give an asymptotic evaluation of the complexity of these spherical spin-glass models. The complexity mentioned here is defined as the mean number of critical points of given index whose value is below (or above) a given level. This includes the important question of counting the mean number of local minima below a given level, and in particular the question of finding the ground state energy (the minimal value of the Hamiltonian). We show that this question is directly related to the study of the edge of the spectrum of the Gaussian Orthogonal Ensemble (GOE).
The question of computing the complexity of mean-field spin glass models has recently been thoroughly studied in the physics literature (see for example (CLR03) and the references therein), mainly for a different measure of the complexity, i.e. the mean number of solutions to the Thouless-Anderson-Palmer equations, or TAP-complexity. Our approach to the complexity enables us to recover known results in the physics literature about TAP-complexity, to compute the ground state energy (when p is even), and to describe an interesting layered structure of the low energy levels of the Hamiltonians of these models, which might prove useful for the study of the metastability of Langevin dynamics for these models (in longer time scales than those studied in (BDG01)). The paper is organised as follows. In Section 2, we give our main results. In Section 3, we prove two main formulas (Theorems 2.1 and 2.2), relating random matrix theory (specifically the GOE) and spherical spin glasses. These formulas are direct consequences of the Kac-Rice formula (we learned the version needed here in the book (AT07); for another modern account see (AW09)). The main new ingredient is the fact that, for spherical spin-glass models, the Hessian of the Hamiltonian at a critical point, conditioned on the value of the Hamiltonian, is a symmetric Gaussian random matrix with independent entries (up to symmetry) plus a diagonal matrix. This implies, in particular, that it is possible to
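For orientation, the Kac-Rice identity invoked above has, schematically, the following form (regularity conditions omitted; see (AT07) for a precise statement):

```latex
% Mean number of critical points of a smooth Gaussian field H on a
% manifold M with value below a level u (schematic form):
\mathbb{E}\,\#\bigl\{x \in M : \nabla H(x) = 0,\; H(x) \le u\bigr\}
  = \int_M \mathbb{E}\Bigl[\bigl|\det \nabla^2 H(x)\bigr|\,
      \mathbf{1}_{\{H(x)\le u\}} \,\Big|\, \nabla H(x) = 0\Bigr]\,
    \varphi_{\nabla H(x)}(0)\, \mathrm{d}x
```

where \(\varphi_{\nabla H(x)}\) denotes the density of the gradient at the point \(x\). Conditioning the Hessian on criticality is what produces the GOE-plus-diagonal structure noted above.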

296 citations


Journal ArticleDOI
TL;DR: In this article, a basic kinetic model for a suspension of self-propelled rod-like particles is presented and its stability and nonlinear dynamics are discussed, along with extensions of the model to analyze the effective rheology of active suspensions in external flows, the effect of steric interactions in concentrated systems, and the dynamics of chemotactically responsive suspensions in chemical fields.

222 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation and found that competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation.
Abstract: The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N = 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.

218 citations


Journal ArticleDOI
TL;DR: Evidence is provided that the auditory system summarizes the temporal details of sounds using time-averaged statistics which, for different examples of the same texture, converge to the same values with increasing duration; once such sounds are of moderate length, the brain's representation is limited to these time-averaged statistics.
Abstract: Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
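The statistical-convergence point is easy to demonstrate numerically (a toy stand-in using simple moments of synthetic noise, not the auditory envelope statistics measured in the paper): time-averaged statistics of two different excerpts of the same process agree more and more closely as the excerpts lengthen, so long excerpts become indistinguishable at the level of those statistics.

```python
import numpy as np

def texture_stats(x):
    """Time-averaged summary statistics: mean, std, and third central
    moment (a toy stand-in for sound-texture statistics)."""
    return np.array([x.mean(), x.std(), ((x - x.mean()) ** 3).mean()])

rng = np.random.default_rng(0)
excerpt = lambda n: rng.standard_normal(n)  # fresh excerpts of one 'texture'

# The gap between the statistics of two independent excerpts shrinks
# roughly like 1/sqrt(duration).
short_gap = np.linalg.norm(texture_stats(excerpt(100)) - texture_stats(excerpt(100)))
long_gap = np.linalg.norm(texture_stats(excerpt(100_000)) - texture_stats(excerpt(100_000)))
assert long_gap < 0.1
```

This mirrors the paradox in the paper: a representation built from such statistics gets better at telling textures apart, and worse at telling long examples of the same texture apart.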

215 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic, high-order approach that works for any singularity (including hypersingular kernels), based only on the assumption that the field induced by the integral operator is locally smooth when restricted to either the interior or the exterior.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the crucial role of the two-dimensional Ising model in stimulating the development of the theory of Toeplitz determinants and their applications.
Abstract: We review some history and some recent results concerning Toeplitz determinants and their applications. We discuss, in particular, the crucial role of the two-dimensional Ising model in stimulating the development of the theory of Toeplitz determinants.

Journal ArticleDOI
TL;DR: In this paper, the authors obtained spectra in the wavelength range λ = 995-1769 nm of all known planets orbiting the star HR 8799 using the suite of instrumentation known as Project 1640 on the Palomar 5 m Hale Telescope.
Abstract: We obtained spectra in the wavelength range λ = 995-1769 nm of all four known planets orbiting the star HR 8799. Using the suite of instrumentation known as Project 1640 on the Palomar 5 m Hale Telescope, we acquired data at two epochs. This allowed for multiple imaging detections of the companions and multiple extractions of low-resolution (R ~ 35) spectra. Data reduction employed two different methods of speckle suppression and spectrum extraction, both yielding results that agree. The spectra do not directly correspond to those of any known objects, although similarities with L and T dwarfs are present, as well as some characteristics similar to planets such as Saturn. We tentatively identify the presence of CH_4 along with NH_3 and/or C_2H_2, and possibly CO_2 or HCN in varying amounts in each component of the system. Other studies suggested red colors for these faint companions, and our data confirm those observations. Cloudy models, based on previous photometric observations, may provide the best explanation for the new data presented here. Notable in our data is that these presumably co-eval objects of similar luminosity have significantly different spectra; the diversity of planets may be greater than previously thought. The techniques and methods employed in this paper represent a new capability to observe and rapidly characterize exoplanetary systems in a routine manner over a broad range of planet masses and separations. These are the first simultaneous spectroscopic observations of multiple planets in a planetary system other than our own.

Journal ArticleDOI
TL;DR: In this article, uniform regularity estimates for a family of elliptic operators arising in the theory of homogenization, with rapidly oscillating periodic coefficients, were studied, and sharp $W^{1,p}$ estimates, Lipschitz estimates, and nontangential maximal function estimates, which are uniform in the parameter $\varepsilon$, were established.
Abstract: The main purpose of this work is to study uniform regularity estimates for a family of elliptic operators $\{\mathcal{L}_\varepsilon, \varepsilon>0\}$, arising in the theory of homogenization, with rapidly oscillating periodic coefficients. We establish sharp $W^{1,p}$ estimates, Lipschitz estimates, and nontangential maximal function estimates, which are uniform in the parameter $\varepsilon$, on solutions with Neumann boundary conditions in $C^{1,\alpha}$ domains.

Journal ArticleDOI
TL;DR: Two methods for incorporating structure priors into GRN inference are developed, both of which extend the previously described Inferelator framework, enabling the use of prior information.
Abstract: Motivation: Inferring global regulatory networks (GRNs) from genome-wide data is a computational challenge central to the field of systems biology. Although the primary data currently used to infer GRNs consist of gene expression and proteomics measurements, there is a growing abundance of alternate data types that can reveal regulatory interactions, e.g. ChIP-Chip, literature-derived interactions, protein–protein interactions. GRN inference requires the development of integrative methods capable of using these alternate data as priors on the GRN structure. Each source of structure priors has its unique biases and inherent potential errors; thus, GRN methods using these data must be robust to noisy inputs. Results: We developed two methods for incorporating structure priors into GRN inference. Both methods [Modified Elastic Net (MEN) and Bayesian Best Subset Regression (BBSR)] extend the previously described Inferelator framework, enabling the use of prior information. We test our methods on one synthetic and two bacterial datasets, and show that both MEN and BBSR infer accurate GRNs even when the structure prior used has significant amounts of error (>90% erroneous interactions). We find that BBSR outperforms MEN at inferring GRNs from expression data and noisy structure priors. Availability and implementation: Code, datasets and networks presented in this article are available at http://bonneaulab.bio.nyu.edu/software.html. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This work proposes a new subgraph isomorphism algorithm which applies a search strategy to significantly reduce the search space without using any complex pruning rules or domain reduction procedures.
Abstract: Graphs can represent biological networks at the molecular, protein, or species level. An important query is to find all matches of a pattern graph to a target graph. Accomplishing this is inherently difficult (NP-complete) and the efficiency of heuristic algorithms for the problem may depend upon the input graphs. The common aim of existing algorithms is to eliminate unsuccessful mappings as early as and as inexpensively as possible. We propose a new subgraph isomorphism algorithm which applies a search strategy to significantly reduce the search space without using any complex pruning rules or domain reduction procedures. We compare our method with the most recent and efficient subgraph isomorphism algorithms (VFlib, LAD, and our C++ implementation of FocusSearch, which was originally distributed in Modula2) on synthetic data, molecular data, and interaction networks. We show a significant reduction in the running time of our approach compared with these other excellent methods and show that our algorithm scales well as memory demands increase. Subgraph isomorphism algorithms are intensively used by biochemical tools. Our analysis gives a comprehensive comparison of different software approaches to subgraph isomorphism, highlighting their weaknesses and strengths. This will help researchers make a rational choice among methods depending on their application. We also distribute an open-source package including our system and our own C++ implementation of FocusSearch together with all the used datasets ( http://ferrolab.dmi.unict.it/ri.html ). In future work, our findings may be extended to approximate subgraph isomorphism algorithms.
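For concreteness, the brute-force baseline that all of these tools improve upon can be written in a few lines (a naive backtracking search of our own, with none of the ordering or pruning strategies compared in the paper):

```python
def subgraph_isomorphisms(pattern, target):
    """Enumerate injective vertex mappings of `pattern` into `target`
    (both adjacency dicts: vertex -> set of neighbours) that preserve
    every pattern edge. Naive backtracking; real tools add search
    ordering and aggressive pruning."""
    order = sorted(pattern)

    def extend(mapping):
        if len(mapping) == len(order):
            yield dict(mapping)
            return
        u = order[len(mapping)]
        for v in target:
            if v in mapping.values():
                continue  # keep the mapping injective
            # every already-mapped neighbour of u must land on a neighbour of v
            if all(mapping[w] in target[v] for w in pattern[u] if w in mapping):
                mapping[u] = v
                yield from extend(mapping)
                del mapping[u]

    yield from extend({})

# A triangle embeds into the complete graph K4 in 4 * 3 * 2 = 24 ways.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k4 = {a: {b for b in range(4) if b != a} for a in range(4)}
assert len(list(subgraph_isomorphisms(triangle, k4))) == 24
```

The exponential growth of this search tree is exactly why the vertex-ordering strategy studied in the paper matters so much in practice.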

Journal ArticleDOI
TL;DR: In this paper, a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems.
Abstract: A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data.

Posted Content
TL;DR: In this article, the authors cast the problem of selecting the reserve price to optimize revenue as a learning problem and present a full theoretical analysis dealing with the complex properties of the corresponding loss function.
Abstract: Second-price auctions with reserve play a critical role for modern search engines and popular online sites, since the revenue of these companies often directly depends on the outcome of such auctions. The choice of the reserve price is the main mechanism through which the auction revenue can be influenced in these electronic markets. We cast the problem of selecting the reserve price to optimize revenue as a learning problem and present a full theoretical analysis dealing with the complex properties of the corresponding loss function. We further give novel algorithms for solving this problem and report the results of several experiments on both synthetic and real data demonstrating their effectiveness.
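The shape of the underlying objective is easy to see empirically (our own toy Monte Carlo, unrelated to the paper's algorithms): for a given reserve, revenue is zero if no bid clears it and otherwise the larger of the reserve and the second-highest bid; sweeping a grid of reserves recovers the classical optimum for uniform bidders.

```python
import numpy as np

def revenue(r, bids):
    """Per-auction seller revenue of a second-price auction with reserve r:
    0 if the top bid is below r, else max(second-highest bid, r)."""
    b = np.sort(bids, axis=1)
    top, second = b[:, -1], b[:, -2]
    return np.where(top < r, 0.0, np.maximum(second, r))

rng = np.random.default_rng(1)
bids = rng.uniform(0.0, 1.0, size=(200_000, 2))  # two i.i.d. U[0,1] bidders
grid = np.linspace(0.0, 1.0, 101)
avg = np.array([revenue(r, bids).mean() for r in grid])
best = grid[int(np.argmax(avg))]

# Myerson's optimal reserve for U[0,1] bidders is 0.5, with revenue 5/12.
assert abs(best - 0.5) < 0.1
assert abs(avg.max() - 5 / 12) < 0.01
```

The piecewise definition above is also why the loss is discontinuous and non-convex in r, the difficulty the paper's analysis addresses.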

Proceedings Article
16 Jan 2013
TL;DR: In this article, a multiscale convolutional network is applied to learn features directly from the images and the depth information, achieving state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%.
Abstract: This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art results on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.

Journal ArticleDOI
TL;DR: In this article, a dry general circulation model is used to investigate how coupling between the stratospheric polar vortex and the extratropical tropospheric circulation depends on the latitude of the tropospheric jet.
Abstract: A dry general circulation model is used to investigate how coupling between the stratospheric polar vortex and the extratropical tropospheric circulation depends on the latitude of the tropospheric jet. The tropospheric response to an identical stratospheric vortex configuration is shown to be strongest for a jet centered near 40° and weaker for jets near either 30° or 50° by more than a factor of 3. Stratosphere-focused mechanisms based on stratospheric potential vorticity inversion, eddy phase speed, and planetary wave reflection, as well as arguments based on tropospheric eddy heat flux and zonal length scale, appear to be incapable of explaining the differences in the magnitude of the jet shift. In contrast, arguments based purely on tropospheric variability involving the strength of eddy–zonal mean flow feedbacks and jet persistence, and related changes in the synoptic eddy momentum flux, appear to explain this effect. The dependence of coupling between the stratospheric polar vortex and the ...

Journal ArticleDOI
TL;DR: A high-frequency asymptotics approach within the Lagrangian framework shows that some exact equatorially trapped three-dimensional waves are linearly unstable when their steepness exceeds a specific threshold.
Abstract: A high-frequency asymptotics approach within the Lagrangian framework shows that some exact equatorially trapped three-dimensional waves are linearly unstable when their steepness exceeds a specific threshold.

Proceedings Article
05 Dec 2013
TL;DR: Two new learning kernel algorithms are devised: one based on a convex optimization problem for which the authors give an efficient solution using existing learning kernel techniques, and another that can be formulated as a DC-programming problem for which the authors describe a solution in detail.
Abstract: We use the notion of local Rademacher complexity to design new algorithms for learning kernels. Our algorithms thereby benefit from the sharper learning bounds based on that notion which, under certain general conditions, guarantee a faster convergence rate. We devise two new learning kernel algorithms: one based on a convex optimization problem for which we give an efficient solution using existing learning kernel techniques, and another one that can be formulated as a DC-programming problem for which we describe a solution in detail. We also report the results of experiments with both algorithms in both binary and multi-class classification tasks.

Proceedings Article
18 Dec 2013
TL;DR: In this article, a recursive convolutional network whose weights are tied between layers is employed to assess the independent contributions of three of these linked variables: the number of layers, feature maps, and parameters.
Abstract: A key challenge in designing convolutional network models is sizing them appropriately. Many factors are involved in these decisions, including number of layers, feature maps, kernel sizes, etc. Complicating this further is the fact that each of these influence not only the numbers and dimensions of the activation units, but also the total number of parameters. In this paper we focus on assessing the independent contributions of three of these linked variables: The numbers of layers, feature maps, and parameters. To accomplish this, we employ a recursive convolutional network whose weights are tied between layers; this allows us to vary each of the three factors in a controlled setting. We find that while increasing the numbers of layers and parameters each have clear benefit, the number of feature maps (and hence dimensionality of the representation) appears ancillary, and finds most of its benefit through the introduction of more weights. Our results (i) empirically confirm the notion that adding layers alone increases computational power, within the context of convolutional layers, and (ii) suggest that precise sizing of convolutional feature map dimensions is itself of little concern; more attention should be paid to the number of parameters in these layers instead.
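The controlled setting described above can be sketched with a toy tied-weight "recursive" layer (our own illustration, not the paper's model): applying the same kernel at every layer lets depth grow while the parameter count stays fixed.

```python
import numpy as np

def recursive_conv(x, kernel, depth):
    """Apply the SAME 1-D convolution at every layer (tied weights),
    with a ReLU in between; depth varies, parameters do not."""
    for _ in range(depth):
        x = np.maximum(np.convolve(x, kernel, mode="same"), 0.0)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
kernel = np.array([0.25, 0.5, 0.25])  # 3 parameters, whatever the depth
shallow = recursive_conv(x, kernel, depth=1)
deep = recursive_conv(x, kernel, depth=8)
assert shallow.shape == deep.shape == (32,)
```

Because depth changes while the weight count does not, any performance difference between `shallow` and `deep` isolates the contribution of layers from that of parameters, which is the point of the paper's experimental design.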

Journal ArticleDOI
TL;DR: In this paper, a framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed, based on modified quasilinear Gaussian (ROMQG) algorithms.
Abstract: A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.

Journal ArticleDOI
TL;DR: In this paper, the role of steric vs hydrodynamic interactions in these dense systems is addressed using a continuum theory and numerical simulations, in which a conservation equation for the distribution function of particle configurations is coupled to the Stokes equations for the fluid motion.
Abstract: Suspensions of active particles, such as motile microorganisms and artificial microswimmers, are known to undergo a transition to complex large-scale dynamics at high enough concentrations. While a number of models have demonstrated that hydrodynamic interactions can in some cases explain these dynamics, collective motion in experiments is typically observed at such high volume fractions that steric interactions between nearby swimmers are significant and cannot be neglected. This raises the question of the respective roles of steric vs hydrodynamic interactions in these dense systems, which we address in this paper using a continuum theory and numerical simulations. The model we propose is based on our previous kinetic theory for dilute suspensions, in which a conservation equation for the distribution function of particle configurations is coupled to the Stokes equations for the fluid motion [D. Saintillan and M. J. Shelley, “Instabilities, pattern formation, and mixing in active suspensions,” Phys. Flu...

Journal ArticleDOI
TL;DR: In this article, the authors show that a thermal forcing applied to a narrow region around the equator produces El Niño-like contraction, while a forcing with wider meridional extent produces global warming-like expansion.
Abstract: This study seeks a deeper understanding of the causes of Hadley Cell (HC) expansion, as projected under global warming, and HC contraction, as observed under El Niño. Using an idealized general circulation model, the authors show that a thermal forcing applied to a narrow region around the equator produces “El Niño–like” HC contraction, while a forcing with wider meridional extent produces “global warming–like” HC expansion. These circulation responses are sensitive primarily to the thermal forcing’s meridional structure and are less sensitive to its vertical structure. If the thermal forcing is confined to the midlatitudes, the amount of HC expansion is more than three times that of a forcing of comparable amplitude that is spread over the tropics. This finding may be relevant to recently observed trends of rapid tropical widening. The shift of the HC edge is explained using a very simple model in which the transformed Eulerian mean (TEM) circulation acts to diffuse heat meridionally.
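The diffusive picture invoked in the abstract can be caricatured numerically. The sketch below is not the authors' model; the grid, diffusivity, and forcing widths are all illustrative. The point it demonstrates is generic: if the TEM circulation acts like meridional diffusion, the response is the heating convolved with a Gaussian heat kernel, and since variances add under convolution, a meridionally wider heating yields a meridionally wider temperature response:

```python
import numpy as np

y = np.linspace(-90.0, 90.0, 721)  # latitude grid (degrees)
dy = y[1] - y[0]

def heating(width):
    """Gaussian heating of unit integral with the given meridional width (degrees)."""
    q = np.exp(-0.5 * (y / width) ** 2)
    return q / (q.sum() * dy)

def diffused(q, kernel_variance=200.0):
    """Response of dT/dt = K d2T/dy2 + Q: convolve Q with a heat kernel of variance 2*K*t."""
    kernel = np.exp(-0.5 * y ** 2 / kernel_variance)
    kernel /= kernel.sum() * dy
    return np.convolve(q, kernel, mode="same") * dy

def rms_width(t):
    """Root-mean-square meridional extent of a nonnegative response centered at the equator."""
    return np.sqrt((y ** 2 * t).sum() / t.sum())

narrow = diffused(heating(5.0))   # equatorially confined heating ("El Nino-like")
wide = diffused(heating(30.0))    # meridionally broad heating ("global warming-like")
```

Because the variances of the forcing and the kernel add, the narrow case has rms width near sqrt(5² + 200) ≈ 15°, while the broad case extends much further poleward, consistent with the contraction-versus-expansion contrast described above.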

Journal ArticleDOI
TL;DR: In this article, it is shown that, for the repulsive Euler-Poisson system in two dimensions, small smooth perturbations of a constant background exist for all time and remain smooth (never develop shocks).
Abstract: We consider the (repulsive) Euler-Poisson system for the electrons in two dimensions and prove that small smooth perturbations of a constant background exist for all time and remain smooth (never develop shocks). This extends to 2D the work of Guo [6].
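For orientation, here is a sketch of the system in one common normalization (sign conventions and the choice of pressure law p(n) vary across the literature), for electron density n, velocity v, and electrostatic potential φ against a fixed neutralizing ion background:

```latex
\begin{aligned}
  \partial_t n + \nabla\cdot(n v) &= 0, \\
  \partial_t v + (v\cdot\nabla)v + \frac{\nabla p(n)}{n} &= \nabla\phi, \\
  \Delta\phi &= n - 1.
\end{aligned}
```

Linearizing about the constant state (n, v) = (1, 0), the perturbation ρ = n − 1 satisfies a Klein–Gordon equation ∂²_t ρ − p'(1)Δρ + ρ = 0: taking the time derivative of the mass equation and the divergence of the velocity equation gives ∂²_t ρ = p'(1)Δρ − Δφ = p'(1)Δρ − ρ. The extra "+ρ" term, contributed by the electric field, produces dispersive decay that the pure compressible Euler equations lack, which is the mechanism behind global smooth solutions despite the slower dispersion in two dimensions.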

Journal ArticleDOI
TL;DR: A detailed phenotypic characterization of complex root traits among 96 accessions of the model Arabidopsis thaliana in two nitrogen environments revealed a surprising level of independence in the environmental control of individual traits, a highly tunable form of plasticity.
Abstract: Plant development is remarkably plastic, but how precisely can the plant customize its form to specific environments? When the plant adjusts its development to different environments, related traits can change in a coordinated fashion, such that two traits co-vary across many genotypes. Alternatively, traits can vary independently, such that a change in one trait has little predictive value for the change in a second trait. To characterize such “tunability” in developmental plasticity, we carried out a detailed phenotypic characterization of complex root traits among 96 accessions of the model Arabidopsis thaliana in two nitrogen environments. The results revealed a surprising level of independence in the environmental control of individual traits, a highly tunable form of plasticity. We mapped the genetic architecture of plasticity using genome-wide association studies and further used gene expression analysis to narrow down gene candidates in the mapped regions. Mutants in genes implicated by the association and expression analyses showed precise defects in the predicted traits in the predicted environment, corroborating the independent control of plasticity traits. The overall results suggest that plants harbor a pool of genetic variability that controls traits in specific environments, providing an opportunity to tune crop plants to a given environment.

Journal ArticleDOI
TL;DR: The simplest algorithm introduced here shows excellent performance on a difficult test suite of prototype problems for geophysical turbulence with waves, jets, and vortices, with a speedup of several orders of magnitude compared with direct simulation.
Abstract: Efficient computation of geophysical turbulence, such as occurs in the atmosphere and ocean, is a formidable challenge for the following reasons: the complex combination of waves, jets, and vortices; significant energetic backscatter from unresolved small scales to resolved large scales; a lack of dynamical scale separation between large and small scales; and small-scale instabilities, conditional on the large scales, which do not saturate. Nevertheless, efficient methods are needed to allow large ensemble simulations of sufficient size to provide meaningful quantifications of uncertainty in future predictions and past reanalyses through data assimilation and filtering. Here, a class of efficient stochastic superparameterization algorithms is introduced. In contrast to conventional superparameterization, the method here (i) does not require the simulation of nonlinear eddy dynamics on periodic embedded domains, (ii) includes a better representation of unresolved small-scale instabilities, and (iii) allows efficient representation of a much wider range of unresolved scales. The simplest algorithm implemented here radically improves efficiency by representing small-scale eddies at and below the limit of computational resolution by a suitable one-dimensional stochastic model of random-direction plane waves. In contrast to heterogeneous multiscale methods, the methods developed here do not require strong scale separation or conditional equilibration of local statistics. The simplest algorithm introduced here shows excellent performance on a difficult test suite of prototype problems for geophysical turbulence with waves, jets, and vortices, with a speedup of several orders of magnitude compared with direct simulation.
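To make the flavor of such a stochastic closure concrete, here is a deliberately minimal caricature, not the published algorithm: unresolved eddies at a grid point are represented by plane waves with random directions whose amplitudes follow a mean-zero Ornstein-Uhlenbeck process, so the sub-grid eddy energy relaxes to a known stationary level that a closure can be calibrated against. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# OU amplitude model: da = -gamma*a dt + sigma dW, stationary variance sigma^2/(2*gamma)
gamma, sigma = 1.0, np.sqrt(2.0)        # chosen so the stationary variance is 1
dt, n_steps, n_waves = 0.01, 4000, 512  # ensemble of random-direction plane waves

# Each wave carries a random direction; directions would enter when reconstructing
# the spatial eddy field, but only the amplitude statistics matter for the energy.
theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)

a = np.zeros(n_waves)                   # wave amplitudes, started from rest
var_samples = []
for step in range(n_steps):
    # Euler-Maruyama update of the OU process for all waves at once
    a = a - gamma * a * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_waves)
    if step >= n_steps // 2:            # discard the spin-up transient
        var_samples.append(np.mean(a ** 2))
eddy_energy = float(np.mean(var_samples))  # should settle near sigma^2/(2*gamma) = 1
```

The appeal of this kind of surrogate is exactly what the abstract emphasizes: the stochastic waves supply backscatter and variability at a cost that is trivial next to simulating nonlinear eddy dynamics on embedded domains.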

Journal ArticleDOI
TL;DR: This paper reviews the interdisciplinary contributions of the last decade, made through the modus operandi of applied mathematics, to these important scientific problems, and presents novel physical phenomena, new multiscale equations, novel PDEs, and numerical algorithms with the goal of attracting mathematicians and physicists to the tropics.
Abstract: Clouds in the tropics can organize the circulation on planetary scales and profoundly impact long range seasonal forecasting and climate on the entire globe, yet contemporary operational computer models are often deficient in representing these phenomena. On the other hand, contemporary observations reveal remarkably complex coherent waves and vortices in the tropics interacting across a bewildering range of scales from kilometers to ten thousand kilometers. This paper reviews the interdisciplinary contributions over the last decade through the modus operandi of applied mathematics to these important scientific problems. Novel physical phenomena, new multiscale equations, novel PDEs, and numerical algorithms are presented here with the goal of attracting mathematicians and physicists to this exciting research area.