
Showing papers by "French Institute for Research in Computer Science and Automation" published in 2014


Journal ArticleDOI
TL;DR: The SDN architecture and the OpenFlow standard in particular are presented, current alternatives for implementation and testing of SDN-based protocols and services are discussed, current and future SDN applications are examined, and promising research directions based on the SDN paradigm are explored.
Abstract: The idea of programmable networks has recently regained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a "radical new idea in networking", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm.

2,013 citations


Proceedings ArticleDOI
24 Feb 2014
TL;DR: This study designs an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance and energy, and shows that it is possible to design an accelerator with a high throughput, capable of performing 452 GOP/s in a small footprint.
Abstract: Machine-Learning tasks are becoming pervasive in a broad range of domains, and in a broad range of systems (from embedded systems to data centers). At the same time, a small set of machine-learning algorithms (especially Convolutional and Deep Neural Networks, i.e., CNNs and DNNs) are proving to be state-of-the-art across many applications. As architectures evolve towards heterogeneous multi-cores composed of a mix of cores and accelerators, a machine-learning accelerator can achieve the rare combination of efficiency (due to the small number of target algorithms) and broad application scope. Until now, most machine-learning accelerator designs have focused on efficiently implementing the computational part of the algorithms. However, recent state-of-the-art CNNs and DNNs are characterized by their large size. In this study, we design an accelerator for large-scale CNNs and DNNs, with a special emphasis on the impact of memory on accelerator design, performance and energy. We show that it is possible to design an accelerator with a high throughput, capable of performing 452 GOP/s (key NN operations such as synaptic weight multiplications and neuron output additions) in a small footprint of 3.02 mm² and 485 mW; compared to a 128-bit 2GHz SIMD processor, the accelerator is 117.87x faster, and it can reduce the total energy by 21.08x. The accelerator characteristics are obtained after layout at 65 nm. Such a high throughput in a small footprint can open up the usage of state-of-the-art machine-learning algorithms in a broad set of systems and for a broad set of applications.
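To make the operation count behind the GOP/s figure concrete, here is a plain tiled loop nest for a fully connected layer: each inner step is one synaptic-weight multiplication plus one output accumulation, i.e. exactly the "key NN operations" counted above. The tile size and function names are illustrative assumptions, not the accelerator's actual datapath.

```python
# Tiled multiply-accumulate loop for a fully connected layer. Illustrative only;
# tile size and names are assumptions, not the accelerator's datapath.
import numpy as np

def fc_layer_tiled(inputs, weights, tile=16):
    n_out, n_in = weights.shape
    out = np.zeros(n_out)
    for o0 in range(0, n_out, tile):                 # tile over output neurons
        for i0 in range(0, n_in, tile):              # tile over input neurons
            for o in range(o0, min(o0 + tile, n_out)):
                for i in range(i0, min(i0 + tile, n_in)):
                    out[o] += weights[o, i] * inputs[i]   # one multiply, one add
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, W = rng.standard_normal(64), rng.standard_normal((32, 64))
    print(np.allclose(fc_layer_tiled(x, W), W @ x))  # 32*64 = 2048 multiply-accumulates
```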

1,582 citations


Proceedings ArticleDOI
13 Dec 2014
TL;DR: This article introduces a custom multi-chip machine-learning architecture, showing that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system.
Abstract: Many companies are deploying services, either for consumers or industry, which are largely based on machine-learning algorithms for sophisticated processing of large amounts of data. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be both computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity/area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system. We implement the node down to the place and route at 28 nm, containing a combination of custom storage and computational units, with industry-grade interconnects.

1,486 citations


Journal ArticleDOI
TL;DR: This paper points out the tradeoff between model completeness and real-time constraints, and the fact that the choice of a risk assessment method is influenced by the selected motion model.
Abstract: With the objective to improve road safety, the automotive industry is moving toward more “intelligent” vehicles. One of the major challenges is to detect dangerous situations and react accordingly in order to avoid or mitigate accidents. This requires predicting the likely evolution of the current traffic situation, and assessing how dangerous that future situation might be. This paper is a survey of existing methods for motion prediction and risk assessment for intelligent vehicles. The proposed classification is based on the semantics used to define motion and risk. We point out the tradeoff between model completeness and real-time constraints, and the fact that the choice of a risk assessment method is influenced by the selected motion model.

964 citations


Journal ArticleDOI
TL;DR: Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
Abstract: Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.

819 citations


Posted Content
TL;DR: This work introduces a new optimisation method called SAGA, which improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser.
Abstract: In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
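A minimal sketch of the SAGA update on a toy l2-regularized least-squares objective may help fix ideas: keep a table of the last gradient seen for each example, and correct each new stochastic gradient with that table. The toy objective, variable names and step-size choice are our assumptions, not the authors' reference code.

```python
# Minimal SAGA sketch on the toy objective
#   (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + (lam/2)*||x||^2.
# Names and step size are assumptions, not the authors' code.
import numpy as np

def saga(A, b, lam=0.1, n_epochs=50, seed=0):
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    grads = (A @ x - b)[:, None] * A + lam * x       # table of last-seen gradients
    grad_mean = grads.mean(axis=0)
    L = np.max(np.sum(A * A, axis=1)) + lam          # per-example smoothness bound
    step = 1.0 / (3.0 * L)
    for _ in range(n_epochs * n):
        j = rng.integers(n)
        g_new = (A[j] @ x - b[j]) * A[j] + lam * x   # fresh gradient of example j
        x = x - step * (g_new - grads[j] + grad_mean)  # SAGA direction
        grad_mean += (g_new - grads[j]) / n          # keep the running mean in sync
        grads[j] = g_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    # distance to the generating vector (nonzero because of the l2 penalty)
    print(np.linalg.norm(saga(A, b) - x_true))
```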

628 citations


Journal ArticleDOI
TL;DR: This work surveys image inpainting, the restoration of missing or damaged image areas, and its applications, from object removal in editing to disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras.
Abstract: Image inpainting refers to the process of restoring missing or damaged areas in an image. This field of research has been very active over recent years, boosted by numerous applications: restoring images from scratches or text overlays, loss concealment in a context of impaired image transmission, object removal in a context of editing, or disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras. Although earlier work dealing with disocclusion has been published in [1], the term inpainting first appeared in [2] by analogy with a process used in art restoration.

518 citations


Posted Content
TL;DR: It is shown that for many problems related to optimal transport, the set of linear constraints can be split in an intersection of a few simple constraints, for which the projections can be computed in closed form.
Abstract: This article details a general numerical framework to approximate solutions to linear programs related to optimal transport. The general idea is to introduce an entropic regularization of the initial linear program. This regularized problem corresponds to a Kullback-Leibler Bregman divergence projection of a vector (representing some initial joint distribution) on the polytope of constraints. We show that for many problems related to optimal transport, the set of linear constraints can be split in an intersection of a few simple constraints, for which the projections can be computed in closed form. This allows us to make use of iterative Bregman projections (when there are only equality constraints) or more generally Bregman-Dykstra iterations (when inequality constraints are involved). We illustrate the usefulness of this approach to several variational problems related to optimal transport: barycenters for the optimal transport metric, tomographic reconstruction, multi-marginal optimal transport and in particular its application to Brenier's relaxed solutions of incompressible Euler equations, partial unbalanced optimal transport and optimal transport with capacity constraints.
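In the simplest two-marginal case, the alternating closed-form KL (Bregman) projections coincide with the Sinkhorn algorithm; the sketch below illustrates that special case. Variable names and parameter values are assumptions, not the authors' implementation.

```python
# Entropic OT with two marginal equality constraints: alternating closed-form
# KL projections onto each marginal, i.e. the Sinkhorn iterations.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=500):
    K = np.exp(-C / eps)                  # Gibbs kernel of the regularized problem
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                  # KL projection onto {P 1 = mu}
        v = nu / (K.T @ u)                # KL projection onto {P^T 1 = nu}
    return u[:, None] * K * v[None, :]    # approximate transport plan

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 50)
    C = (x[:, None] - x[None, :]) ** 2                    # squared-distance cost
    mu = np.exp(-((x - 0.3) ** 2) / 0.01); mu /= mu.sum()
    nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()
    P = sinkhorn(mu, nu, C)
    print(P.sum(), (P * C).sum())         # total mass close to 1, regularized cost
```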

506 citations


Journal ArticleDOI
TL;DR: This paper proposes a complete solution for solving multiple least-squares quadratic problems with both equality and inequality constraints ordered into a strict hierarchy; the generic solver is used to resolve the redundancy of humanoid robots while generating complex movements in constrained environments.
Abstract: Hierarchical least-square optimization is often used in robotics to invert a direct function when multiple incompatible objectives are involved. Typical examples are inverse kinematics or dynamics. The objectives can be given as equalities to be satisfied (e.g., a point-to-point task) or as areas of satisfaction (e.g., the joint range). This paper proposes a complete solution to solve multiple least-square quadratic problems of both equality and inequality constraints ordered into a strict hierarchy. Our method is able to solve a hierarchy of only equalities 10 times faster than the iterative-projection hierarchical solvers and can consider inequalities at any level while running at the typical control frequency on whole-body size problems. This generic solver is used to resolve the redundancy of humanoid robots while generating complex movements in constrained environments.
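For the equality-only case, the lexicographic idea can be sketched with plain null-space projections: solve the top-priority problem, then optimize the next level only inside the remaining null space. This toy numpy sketch is not the paper's active-set solver (which also handles inequalities and is far faster).

```python
# Equality-only lexicographic least squares via null-space projection:
# each level is optimized only inside the freedom left by the levels above.
import numpy as np

def hierarchical_lsq(levels):
    """levels: list of (A, b) pairs ordered from highest to lowest priority."""
    d = levels[0][0].shape[1]
    x = np.zeros(d)
    Z = np.eye(d)                                  # basis of the remaining freedom
    for A, b in levels:
        AZ = A @ Z
        z, *_ = np.linalg.lstsq(AZ, b - A @ x, rcond=None)
        x = x + Z @ z                              # best correction for this level
        _, s, Vt = np.linalg.svd(AZ)
        rank = int((s > 1e-10 * max(s.max(), 1.0)).sum()) if s.size else 0
        Z = Z @ Vt[rank:].T                        # shrink the null space
        if Z.shape[1] == 0:
            break
    return x

if __name__ == "__main__":
    A1, b1 = np.array([[1.0, 0.0, 0.0]]), np.array([1.0])   # priority 1: x[0] = 1
    A2, b2 = np.eye(3), np.zeros(3)                          # priority 2: stay at 0
    print(hierarchical_lsq([(A1, b1), (A2, b2)]))            # -> [1. 0. 0.]
```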

492 citations


Book ChapterDOI
06 Sep 2014
TL;DR: In large video collections with clusters of typical categories, such as “birthday party” or “flash-mob”, category-specific video summarization can produce higher quality video summaries than unsupervised approaches that are blind to the video category.
Abstract: In large video collections with clusters of typical categories, such as “birthday party” or “flash-mob”, category-specific video summarization can produce higher quality video summaries than unsupervised approaches that are blind to the video category.

430 citations


Posted Content
TL;DR: In this article, a self-contained view of sparse modeling for visual recognition and image processing is presented, where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
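As a concrete instance of the sparse-coding step mentioned above, here is a small ISTA (iterative soft-thresholding) routine that codes a signal against a fixed dictionary by solving a lasso problem. Names, step size and toy data are assumptions; the monograph itself covers this and dictionary learning in much greater depth.

```python
# Sparse coding of one signal against a fixed dictionary by ISTA on the lasso
# problem  min_a 0.5*||x - D a||^2 + lam*||a||_1.
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L              # gradient step on the smooth part
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
    a_true = np.zeros(256); a_true[[3, 100, 200]] = [1.0, -2.0, 0.5]
    x = D @ a_true + 0.01 * rng.standard_normal(64)
    a_hat = ista_sparse_code(D, x)
    print(np.count_nonzero(np.abs(a_hat) > 1e-3))  # number of active atoms found
```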

Proceedings ArticleDOI
TL;DR: In this article, the authors considered the problem of an optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes, and formulated an optimal randomized content placement policy to maximize the user's hit probability.
Abstract: In this work we consider the problem of an optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for the typical user requesting some particular content, whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and-noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy, to maximize the user's hit probability. The result dictates that it is not always optimal to follow the standard policy "cache the most popular content, everywhere". In fact, our numerical results regarding three different coverage scenarios show that the optimal policy significantly increases the chances of a hit in the high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough.
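A small numeric sketch of the quantity being optimized: if content j is cached independently at each station with probability b_j and the user is covered by exactly m stations with probability cov[m], the hit probability is sum_j pop_j * sum_m cov[m] * (1 - (1 - b_j)^m). The notation and toy numbers below are our assumptions, not the paper's code.

```python
# Hit probability of a randomized placement: content j is cached independently
# at each base station with probability b[j]; cov[m] is the probability that the
# typical user is covered by exactly m stations. Toy numbers, assumed notation.
import numpy as np

def hit_probability(pop, b, cov):
    m = np.arange(len(cov))
    miss = (1.0 - b)[:, None] ** m[None, :]        # P(no covering station holds j)
    return float(pop @ ((1.0 - miss) @ cov))

if __name__ == "__main__":
    J, gamma, cache_size = 20, 0.8, 3
    pop = 1.0 / np.arange(1, J + 1) ** gamma       # Zipf popularity law
    pop /= pop.sum()
    cov = np.array([0.05, 0.45, 0.35, 0.15])       # coverage by 0, 1, 2, 3 stations
    mpc = np.zeros(J); mpc[:cache_size] = 1.0      # "most popular content everywhere"
    rnd = np.full(J, cache_size / J)               # randomized, same expected occupancy
    print(hit_probability(pop, mpc, cov), hit_probability(pop, rnd, cov))
```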

Book
19 Dec 2014
TL;DR: The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing, focusing on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Abstract: In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection - that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.

Journal ArticleDOI
TL;DR: This paper presents an efficient technique for spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother; focusing on sparse Laplacian matrices consisting of a data term and a prior term, it approximates the solution of the memory- and computation-intensive large linear system by solving a sequence of 1D subsystems.
Abstract: This paper presents an efficient technique for performing a spatially inhomogeneous edge-preserving image smoothing, called fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of a quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
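The 1D building block of the separable scheme can be sketched directly: an edge-aware weighted-least-squares smoother along one scanline, solved with a linear-time tridiagonal (Thomas) solver. The weight function and parameter values below are illustrative assumptions, not the paper's exact formulation.

```python
# 1D edge-aware weighted least squares: solve (I + lam * L_w) u = f with L_w a
# weighted graph Laplacian, using the linear-time Thomas algorithm.
import numpy as np

def smooth_1d(f, lam=20.0, sigma=0.1):
    n = len(f)
    w = np.exp(-np.abs(np.diff(f)) / sigma)        # small weight across strong edges
    lower = upper = -lam * w
    diag = 1.0 + lam * (np.r_[w, 0.0] + np.r_[0.0, w])
    c, d = np.zeros(n - 1), np.zeros(n)            # Thomas forward sweep
    c[0], d[0] = upper[0] / diag[0], f[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        d[i] = (f[i] - lower[i - 1] * d[i - 1]) / m
    u = np.zeros(n)                                # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = np.r_[np.zeros(50), np.ones(50)] + 0.05 * rng.standard_normal(100)
    print(np.round(smooth_1d(f)[[25, 75]], 2))     # noise smoothed, step edge preserved
```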

Journal ArticleDOI
19 Nov 2014
TL;DR: A novel approach to interactive character skinning is presented, which is robust to extreme character movements, handles skin contacts and produces the effect of skin elasticity (sliding), and includes new composition operators enabling blending effects and local self-contact between implicit surfaces.
Abstract: We present a novel approach to interactive character skinning, which is robust to extreme character movements, handles skin contacts and produces the effect of skin elasticity (sliding). Our approach builds on the idea of implicit skinning in which the character is approximated by a 3D scalar field and mesh-vertices are appropriately re-projected. Instead of being bound by an initial skinning solution used to initialize the shape at each time step, we use the skin mesh to directly track iso-surfaces of the field over time. Technical problems are two-fold: firstly, all contact surfaces generated between skin parts should be captured as iso-surfaces of the implicit field; secondly, the tracking method should capture elastic skin effects when the joints bend, and as the character returns to its rest shape, so the skin must follow. Our solutions include: new composition operators enabling blending effects and local self-contact between implicit surfaces, as well as a tangential relaxation scheme derived from the as-rigid-as-possible energy to solve the tracking problem.
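To illustrate what a composition operator on implicit fields looks like, here is a classic smooth-union blend of two scalar fields; the paper's operators are considerably more elaborate (they control blending locally and handle self-contact), so treat this only as a conceptual sketch with assumed names.

```python
# Conceptual sketch: a log-sum-exp smooth union of two sphere fields
# (inside is positive). Not the paper's composition operators.
import numpy as np

def sphere_field(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda p: radius - np.linalg.norm(np.asarray(p, dtype=float) - c)

def smooth_union(f, g, k=0.2):
    return lambda p: k * np.logaddexp(f(p) / k, g(p) / k)   # smooth max of the fields

if __name__ == "__main__":
    f = sphere_field((0.0, 0.0, 0.0), 1.0)
    g = sphere_field((1.5, 0.0, 0.0), 1.0)
    blend = smooth_union(f, g)
    print(blend((0.75, 0.0, 0.0)) > 0.0)   # the point between the spheres lies inside
```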

Journal ArticleDOI
TL;DR: This paper presents a realistic synthetic dataset, covering 24 hours of car traffic in a 400 km² region around the city of Köln, in Germany, and describes the generation process and outlines how the dataset improves the traces currently employed for the simulative evaluation of vehicular networks.
Abstract: The surge in vehicular network research has led, over the last few years, to the proposal of countless network solutions specifically designed for vehicular environments. A vast majority of such solutions has been evaluated by means of simulation, since experimental and analytical approaches are often impractical and intractable, respectively. The reliability of the simulative evaluation is thus paramount to the performance analysis of vehicular networks, and the first distinctive feature that has to be properly accounted for is the mobility of vehicles, i.e., network nodes. Notwithstanding the improvements that vehicular mobility modeling has undergone over the last decade, no vehicular mobility dataset is publicly available today that captures both the macroscopic and microscopic dynamics of road traffic over a large urban region. In this paper, we present a realistic synthetic dataset, covering 24 hours of car traffic in a 400 km² region around the city of Köln, in Germany. We describe the generation process and outline how the dataset improves the traces currently employed for the simulative evaluation of vehicular networks. We also show the potential impact that such a comprehensive mobility dataset has on the network protocol performance analysis, demonstrating how incomplete representations of vehicular mobility may result in over-optimistic network connectivity and protocol performance.

Journal ArticleDOI
TL;DR: The technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping are described, with an overview of how all the modules work and how they have been integrated into the final system.
Abstract: Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.

Journal ArticleDOI
TL;DR: It is shown that optimizing tractography parameters, stopping and seeding strategies can reduce the biases in position, shape, size and length of the streamline distribution, a critical step towards producing tractography results for quantitative structural connectivity analysis.

Proceedings ArticleDOI
24 Sep 2014
TL;DR: In this paper, the feasibility, advantages, and challenges of an ICN-based approach in the Internet of Things are explored, and several interoperable CCN enhancements are then proposed and evaluated.
Abstract: This paper explores the feasibility, advantages, and challenges of an ICN-based approach in the Internet of Things. We report on the first NDN experiments in a life-size IoT deployment, spread over tens of rooms on several floors of a building. Based on the insights gained with these experiments, the paper analyses the shortcomings of CCN applied to IoT. Several interoperable CCN enhancements are then proposed and evaluated. We significantly decrease control traffic (i.e., interest messages) and leverage data path and caching to match IoT requirements in terms of energy and bandwidth constraints. Our optimizations increase content availability in case of IoT nodes with intermittent activity. This paper also provides the first experimental comparison of CCN with the common IoT standards 6LoWPAN/RPL/UDP.

Posted Content
TL;DR: This paper proposes a new method that achieves this goal with only image-level labels of whether the objects are present or not, and combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation.
Abstract: Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.

Journal ArticleDOI
TL;DR: It is shown that for one-vs-rest, learning through cross-validation the optimal degree of imbalance between the positive and the negative samples can have a significant impact and early stopping can be used as an effective regularization strategy when training with stochastic gradient algorithms.
Abstract: We benchmark several SVM objective functions for large-scale image classification. We consider one-versus-rest, multiclass, ranking, and weighted approximate ranking SVMs. A comparison of online and batch methods for optimizing the objectives shows that online methods perform as well as batch methods in terms of classification accuracy, but with a significant gain in training speed. Using stochastic gradient descent, we can scale the training to millions of images and thousands of classes. Our experimental evaluation shows that ranking-based algorithms do not outperform the one-versus-rest strategy when a large number of training examples are used. Furthermore, the gap in accuracy between the different algorithms shrinks as the dimension of the features increases. We also show that learning through cross-validation the optimal rebalancing of positive and negative examples can result in a significant improvement for the one-versus-rest strategy. Finally, early stopping can be used as an effective regularization strategy when training with online algorithms. Following these "good practices," we were able to improve the state of the art on a large subset of 10K classes and 9M images of ImageNet from 16.7 percent Top-1 accuracy to 19.1 percent.
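A toy illustration of the "good practices" above: a one-versus-rest linear SVM trained with SGD, with a tunable weight on positive examples and early stopping on a validation split. Hyper-parameter names and the synthetic data are assumptions, not the benchmark's code.

```python
# Toy one-versus-rest linear SVM with SGD (Pegasos-style step sizes), a positive
# rebalancing weight, and early stopping on a held-out split.
import numpy as np

def train_ovr_sgd(X, y, n_classes, pos_weight=1.0, lam=1e-4,
                  n_epochs=30, val_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.permutation(n)
    val, tr = idx[:int(val_frac * n)], idx[int(val_frac * n):]
    W = np.zeros((n_classes, X.shape[1]))
    best_W, best_acc, t = W.copy(), -1.0, 0
    for _ in range(n_epochs):
        for i in rng.permutation(tr):
            t += 1
            eta = 1.0 / (lam * t)
            for c in range(n_classes):
                yc = 1.0 if y[i] == c else -1.0
                weight = pos_weight if yc > 0 else 1.0        # rebalancing knob
                hinge_active = yc * (W[c] @ X[i]) < 1.0
                grad = lam * W[c] - (weight * yc * X[i] if hinge_active else 0.0)
                W[c] -= eta * grad
        acc = (np.argmax(X[val] @ W.T, axis=1) == y[val]).mean()
        if acc > best_acc:                                    # early stopping
            best_acc, best_W = acc, W.copy()
    return best_W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)
    W = train_ovr_sgd(X, y, n_classes=3, pos_weight=2.0)
    print((np.argmax(X @ W.T, axis=1) == y).mean())           # training accuracy
```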

Journal ArticleDOI
TL;DR: In this article, the authors propose an axiomatic generic framework for weak memory modeling, which allows the user to specify the model of his choice in a concise way, and the tool becomes a simulator for that model.
Abstract: We propose an axiomatic generic framework for modelling weak memory. We show how to instantiate this framework for Sequential Consistency (SC), Total Store Order (TSO), C++ restricted to release-acquire atomics, and Power. For Power, we compare our model to a preceding operational model in which we found a flaw. To do so, we define an operational model that we show equivalent to our axiomatic model. We also propose a model for ARM. Our testing on this architecture revealed a behaviour later acknowledged as a bug by ARM, and more recently, 31 additional anomalies. We offer a new simulation tool, called herd, which allows the user to specify the model of his choice in a concise way. Given a specification of a model, the tool becomes a simulator for that model. The tool relies on an axiomatic description; this choice allows us to outperform all previous simulation tools. Additionally, we confirm that verification time is vastly improved, in the case of bounded model checking. Finally, we put our models in perspective, in the light of empirical data obtained by analysing the C and C++ code of a Debian Linux distribution. We present our new analysis tool, called mole, which explores a piece of code to find the weak memory idioms that it uses.
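To give a flavor of the questions such tools answer, here is a toy check of the classic store-buffering litmus test under Sequential Consistency: enumerating all interleavings shows that the relaxed outcome r0 = r1 = 0 never appears under SC, although it is observable on TSO, Power and ARM. This is only an illustration, not the herd tool.

```python
# Store-buffering (SB) litmus test under Sequential Consistency: enumerate all
# interleavings of the two threads against a single global memory.
from itertools import permutations

THREAD0 = [("store", "x", 1), ("load", "y", "r0")]
THREAD1 = [("store", "y", 1), ("load", "x", "r1")]

def interleavings(a, b):
    total = len(a) + len(b)
    for pos in permutations(range(total), len(a)):
        if list(pos) != sorted(pos):
            continue                      # keep each thread's program order
        merged, ai, bi = [], 0, 0
        for slot in range(total):
            if ai < len(a) and slot == pos[ai]:
                merged.append(a[ai]); ai += 1
            else:
                merged.append(b[bi]); bi += 1
        yield merged

outcomes = set()
for trace in interleavings(THREAD0, THREAD1):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in trace:
        if op == "store":
            mem[loc] = arg                # SC: one global memory, no store buffers
        else:
            regs[arg] = mem[loc]
    outcomes.add((regs["r0"], regs["r1"]))

print(sorted(outcomes))                   # (0, 0) does not appear under SC
```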

Journal ArticleDOI
TL;DR: The first model-based clustering algorithm for multivariate functional data is proposed; it is based on the assumption of normality of the principal component scores and is able to take into account the dependence among curves.

Journal ArticleDOI
22 May 2014
TL;DR: A growing body of literature evidences the capabilities, but also the limitations and challenges, of affect detection from neurophysiological activity; possible applications of aBCI are outlined within a general taxonomy of brain-computer interface approaches.
Abstract: Affective states, moods and emotions, are an integral part of human nature: they shape our thoughts, govern the behavior of the individual, and influence our interpersonal relationships. The last decades have seen a growing interest in the automatic detection of such states from voice, facial expression, and physiological signals, primarily with the goal of enhancing human-computer interaction with an affective component. With the advent of brain-computer interface research, the idea of affective brain-computer interfaces (aBCI), enabling affect detection from brain signals, arose. In this article, we set out to survey the field of neurophysiology-based affect detection. We outline possible applications of aBCI in a general taxonomy of brain-computer interface approaches and introduce the core concepts of affect and their neurophysiological fundamentals. We show that there is a growing body of literature that evidences the capabilities, but also the limitations and challenges of affect detection from neurophysiological activity.

Journal ArticleDOI
TL;DR: Two approaches for applying group-level PCA are presented; both give a close approximation to the output of PCA applied to the full concatenation of all individual datasets, while having very low memory requirements regardless of the number of datasets being combined.
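A sketch in the spirit of streaming, low-memory group PCA: subjects are folded in one at a time while only a reduced temporal basis is kept, approximating the PCA of the full concatenation. Whether this matches either of the paper's two approaches exactly is an assumption; dimensions and names are illustrative.

```python
# Streaming group-PCA sketch: keep a reduced temporal basis while concatenating
# one subject at a time, approximating PCA of the full temporal concatenation.
import numpy as np

def incremental_group_pca(subjects, n_keep=100):
    W = None
    for ts in subjects:                          # ts: (timepoints, voxels)
        stack = ts if W is None else np.vstack([W, ts])
        _, s, Vt = np.linalg.svd(stack, full_matrices=False)
        k = min(n_keep, len(s))
        W = s[:k, None] * Vt[:k]                 # reduced running representation
    return W                                     # (n_keep, voxels) group basis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = rng.standard_normal((5, 500))         # shared spatial structure
    subjects = [rng.standard_normal((120, 5)) @ maps
                + 0.1 * rng.standard_normal((120, 500)) for _ in range(8)]
    print(incremental_group_pca(subjects, n_keep=20).shape)   # (20, 500)
```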

Journal ArticleDOI
TL;DR: In this article, the regret of the decision maker is the difference between her realized loss and the minimal loss she would have achieved by picking, in hindsight, the best possible action.
Abstract: We address online linear optimization problems when the possible actions of the decision maker are represented by binary vectors. The regret of the decision maker is the difference between her realized loss and the minimal loss she would have achieved by picking, in hindsight, the best possible action. Our goal is to understand the magnitude of the best possible minimax regret. We study the problem under three different assumptions for the feedback the decision maker receives: full information, and the partial information models of the so-called "semi-bandit" and "bandit" problems. In the full information case we show that the standard exponentially weighted average forecaster is a provably suboptimal strategy. For the semi-bandit model, by combining the Mirror Descent algorithm and the INF (Implicitly Normalized Forecaster) strategy, we are able to prove the first optimal bounds. Finally, in the bandit case we discuss existing results in light of a new lower bound, and suggest a conjecture on the optimal regret in that case.
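For reference, the baseline mentioned above, the exponentially weighted average (Hedge) forecaster in the full-information setting, can be sketched in a few lines. The learning rate and toy losses are assumptions; the point of the paper is precisely that this baseline is suboptimal for binary-vector action sets.

```python
# Exponentially weighted average (Hedge) forecaster with full information:
# sample an action from the weights, then reweight all actions by their losses.
import numpy as np

def hedge_regret(loss_matrix, eta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()
        total += loss_matrix[t, rng.choice(K, p=p)]
        w *= np.exp(-eta * loss_matrix[t])        # full information: all losses observed
    return total - loss_matrix.sum(axis=0).min()  # regret vs. best action in hindsight

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    losses = rng.uniform(size=(1000, 5))
    losses[:, 2] -= 0.2                           # make action 2 the best in hindsight
    print(hedge_regret(np.clip(losses, 0.0, 1.0)))
```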

Journal ArticleDOI
16 Jan 2014-Nature
TL;DR: It is demonstrated that AHP6-based fields establish patterns of cytokinin signalling in the meristem that contribute to the robustness of phyllotaxis by imposing a temporal sequence on organ initiation.
Abstract: How biological systems generate reproducible patterns with high precision is a central question in science [1]. The shoot apical meristem (SAM), a specialized tissue producing plant aerial organs, is a developmental system of choice to address this question. Organs are periodically initiated at the SAM at specific spatial positions and this spatiotemporal pattern defines phyllotaxis. Accumulation of the plant hormone auxin triggers organ initiation [2-5], whereas auxin depletion around organs generates inhibitory fields that are thought to be sufficient to maintain these patterns and their dynamics [4, 6-13]. Here we show that another type of hormone-based inhibitory fields, generated directly downstream of auxin by intercellular movement of the cytokinin signalling inhibitor ARABIDOPSIS HISTIDINE PHOSPHOTRANSFER PROTEIN 6 (AHP6) [14], is involved in regulating phyllotactic patterns. We demonstrate that AHP6-based fields establish patterns of cytokinin signalling in the meristem that contribute to the robustness of phyllotaxis by imposing a temporal sequence on organ initiation. Our findings indicate that not one but two distinct hormone-based fields may be required for achieving temporal precision during formation of reiterative structures at the SAM, thus indicating an original mechanism for providing robustness to a dynamic developmental system.

Journal ArticleDOI
TL;DR: A numerical method for the solution of the elliptic Monge-Ampère Partial Differential Equation, with boundary conditions corresponding to the Optimal Transportation problem, is presented, leading to a fast solver comparable to solving the Laplace equation on the same grid several times.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the properties of different geometric filtered complexes (such as Vietoris-Rips, Čech and witness complexes) built on top of totally bounded metric spaces and provided simple and natural proofs of the stability of such complexes with respect to the Gromov-Hausdorff distance.
Abstract: In this paper we study the properties of the homology of different geometric filtered complexes (such as Vietoris–Rips, Čech and witness complexes) built on top of totally bounded metric spaces. Using recent developments in the theory of topological persistence, we provide simple and natural proofs of the stability of the persistent homology of such complexes with respect to the Gromov–Hausdorff distance. We also exhibit a few noteworthy properties of the homology of the Rips and Čech complexes built on top of compact spaces.
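For readers who want the flavor of such a stability statement, a schematic form of the bound for Rips filtrations reads as follows (our rendering; the paper states the precise hypotheses and constants):

```latex
% Schematic stability of persistent homology of Rips filtrations with respect
% to the Gromov-Hausdorff distance (see the paper for the exact hypotheses).
\[
  d_{\mathrm{b}}\bigl(\operatorname{dgm} H_*(\mathrm{Rips}(X)),\,
                      \operatorname{dgm} H_*(\mathrm{Rips}(Y))\bigr)
  \;\le\; 2\, d_{\mathrm{GH}}(X, Y),
\]
% where d_b is the bottleneck distance between persistence diagrams and d_GH the
% Gromov-Hausdorff distance between the totally bounded metric spaces X and Y.
```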