
Showing papers by "École normale supérieure de Cachan" published in 2016


Journal ArticleDOI
TL;DR: It is recommended that the typical maculopapular cutaneous lesions (urticaria pigmentosa) should be subdivided into 2 variants, namely a monomorphic variant with small maculopapular lesions, which is typically seen in adult patients, and a polymorphic variant with larger lesions of variable size and shape, which is typically seen in pediatric patients; this delineation has important prognostic implications.
Abstract: Cutaneous lesions in patients with mastocytosis are highly heterogeneous and encompass localized and disseminated forms. Although a classification and criteria for cutaneous mastocytosis (CM) have been proposed, there remains a need to better define subforms of cutaneous manifestations in patients with mastocytosis. To address this unmet need, an international task force involving experts from different organizations (including the European Competence Network on Mastocytosis; the American Academy of Allergy, Asthma & Immunology; and the European Academy of Allergology and Clinical Immunology) met several times between 2010 and 2014 to discuss the classification and criteria for diagnosis of cutaneous manifestations in patients with mastocytosis. This article provides the major outcomes of these meetings and a proposal for a revised definition and criteria. In particular, we recommend that the typical maculopapular cutaneous lesions (urticaria pigmentosa) should be subdivided into 2 variants, namely a monomorphic variant with small maculopapular lesions, which is typically seen in adult patients, and a polymorphic variant with larger lesions of variable size and shape, which is typically seen in pediatric patients. Clinical observations suggest that the monomorphic variant, if it develops in children, often persists into adulthood, whereas the polymorphic variant may resolve around puberty. This delineation might have important prognostic implications, and its implementation in diagnostic algorithms and future mastocytosis classifications is recommended. Refinements are also suggested for the diagnostic criteria of CM, removal of telangiectasia macularis eruptiva perstans from the current classification of CM, and removal of the adjunct solitary from the term solitary mastocytoma.

259 citations


Proceedings ArticleDOI
04 Aug 2016
TL;DR: A huge leap forward in action detection performance is achieved, with gains of 20% and 11% in mAP reported on the UCF-101 and J-HMDB-21 datasets, respectively, compared with the state of the art.
Abstract: In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. While in the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap, in the second pass, temporal trimming is performed by ensuring label consistency for all constituting detection boxes. We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time.
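As an illustration of the first-pass linking step described in the abstract, the sketch below builds one action path by dynamic programming, maximizing class scores plus spatial overlap between detection boxes in consecutive frames. It is not the authors' code; the function names, the per-frame maximization, and the overlap weight are illustrative assumptions.

```python
# Illustrative sketch of the first-pass linking step: per-frame detection boxes
# are linked over time by dynamic programming, maximizing class score plus
# spatial overlap (IoU) between consecutive boxes. Names and the overlap
# weight are assumptions, not the paper's implementation.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_action_path(frames, overlap_weight=1.0):
    """frames: list (over time) of lists of (box, class_score).
    Returns one box index per frame, maximizing the accumulated
    class score plus weighted IoU between consecutive boxes."""
    best = [[score for _, score in frames[0]]]     # best[t][i]: best path value ending at box i
    back = [[-1] * len(frames[0])]                 # back-pointers for path recovery
    for t in range(1, len(frames)):
        cur_best, cur_back = [], []
        for box, score in frames[t]:
            cands = [best[t - 1][j] + overlap_weight * iou(prev_box, box)
                     for j, (prev_box, _) in enumerate(frames[t - 1])]
            j = max(range(len(cands)), key=cands.__getitem__)
            cur_best.append(score + cands[j])
            cur_back.append(j)
        best.append(cur_best)
        back.append(cur_back)
    # backtrack the highest-scoring path
    i = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [i]
    for t in range(len(frames) - 1, 0, -1):
        i = back[t][i]
        path.append(i)
    return list(reversed(path))
```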

223 citations


Journal ArticleDOI
TL;DR: An algorithm and a tool, bcalm 2, are presented for the compaction of de Bruijn graphs; bcalm 2 is a parallel algorithm that distributes the input based on a minimizer hashing technique, allowing for a good balance of memory usage throughout its execution.
Abstract: Motivation: As the quantity of data per sequencing experiment increases, the challenges of fragment assembly are becoming increasingly computational. The de Bruijn graph is a widely used data structure in fragment assembly algorithms, used to represent the information from a set of reads. Compaction is an important data reduction step in most de Bruijn graph based algorithms where long simple paths are compacted into single vertices. Compaction has recently become the bottleneck in assembly pipelines, and improving its running time and memory usage is an important problem. Results: We present an algorithm and a tool bcalm 2 for the compaction of de Bruijn graphs. bcalm 2 is a parallel algorithm that distributes the input based on a minimizer hashing technique, allowing for good balance of memory usage throughout its execution. For human sequencing data, bcalm 2 reduces the computational burden of compacting the de Bruijn graph to roughly an hour and 3 GB of memory. We also applied bcalm 2 to the 22 Gbp loblolly pine and 20 Gbp white spruce sequencing datasets. Compacted graphs were constructed from raw reads in less than 2 days and 40 GB of memory on a single machine. Hence, bcalm 2 is at least an order of magnitude more efficient than other available methods. Availability and Implementation: Source code of bcalm 2 is freely available at: https://github.com/GATB/bcalm Contact: rayan.chikhi@univ-lille1.fr
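A minimal sketch of the minimizer idea the abstract refers to (illustrative only; bcalm 2's actual partitioning scheme, minimizer ordering, and parameters differ): each k-mer is routed to a bucket keyed by its smallest m-mer, so that heavily overlapping k-mers tend to land in the same partition and can be compacted locally.

```python
# Illustrative minimizer-based partitioning (not bcalm 2's code): each k-mer is
# assigned to the bucket of its lexicographically smallest m-mer, so k-mers that
# overlap heavily usually share a bucket and can be compacted within it.

def minimizer(kmer, m=4):
    """Lexicographically smallest m-mer of a k-mer (one common minimizer choice)."""
    return min(kmer[i:i + m] for i in range(len(kmer) - m + 1))

def partition_kmers(reads, k=31, m=4):
    """Group all k-mers of the input reads by their minimizer."""
    buckets = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            buckets.setdefault(minimizer(kmer, m), []).append(kmer)
    return buckets

# Example: overlapping k-mers from one read typically share a minimizer.
buckets = partition_kmers(["ACGTACGTACGTACGTACGTACGTACGTACGTACGT"], k=31, m=4)
```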

187 citations


Proceedings ArticleDOI
24 Oct 2016
TL;DR: This work develops a precise, scalable, and fully automated methodology to verify the probing security of masked algorithms, and generate them from unprotected descriptions of the algorithm.
Abstract: Differential power analysis (DPA) is a side-channel attack in which an adversary retrieves cryptographic material by measuring and analyzing the power consumption of the device on which the cryptographic algorithm under attack executes. An effective countermeasure against DPA is to mask secrets by probabilistically encoding them over a set of shares, and to run masked algorithms that compute on these encodings. Masked algorithms are often expected to provide, at least, a certain level of probing security. Leveraging the deep connections between probabilistic information flow and probing security, we develop a precise, scalable, and fully automated methodology to verify the probing security of masked algorithms, and generate them from unprotected descriptions of the algorithm. Our methodology relies on several contributions of independent interest, including a stronger notion of probing security that supports compositional reasoning, and a type system for enforcing an expressive class of probing policies. Finally, we validate our methodology on examples that go significantly beyond the state-of-the-art.
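For context, the masking countermeasure described above encodes a secret over several shares so that observing fewer than all of them reveals nothing about the secret. Below is a minimal sketch of Boolean (XOR) sharing of a byte; it only illustrates the encoding, not the paper's verification methodology, and the function names are illustrative.

```python
import secrets

def mask(secret_byte, n_shares=3):
    """Encode a byte as n XOR-shares: any n-1 of them are uniformly random
    and statistically independent of the secret."""
    shares = [secrets.randbits(8) for _ in range(n_shares - 1)]
    last = secret_byte
    for s in shares:
        last ^= s          # last share is chosen so the XOR of all shares is the secret
    return shares + [last]

def unmask(shares):
    """Recombine all shares to recover the secret."""
    out = 0
    for s in shares:
        out ^= s
    return out

assert unmask(mask(0x2A)) == 0x2A
```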

176 citations


Journal ArticleDOI
TL;DR: In this article, a high-fidelity preparation of a collective atomic excitation in a single correlated subradiant eigenmode in a lattice is proposed, and a simple phenomenological model is shown to capture the qualitative features of the dynamics and sharp transmission resonances.
Abstract: We show how strong light-mediated resonant dipole-dipole interactions between atoms can be utilized in the control and storage of light. The method is based on a high-fidelity preparation of a collective atomic excitation in a single correlated subradiant eigenmode in a lattice. We demonstrate how a simple phenomenological model captures the qualitative features of the dynamics and sharp transmission resonances that may find applications in sensing.

146 citations


Journal ArticleDOI
TL;DR: In this article, a new control strategy for high voltage direct current (HVDC) transmission based on the synchronverter concept is presented, where the sending-end rectifier controls emulate a synchronous motor (SM), and the receiving-end inverter emulates a synchronous generator (SG).
Abstract: This paper presents a new control strategy for high voltage direct current (HVDC) transmission based on the synchronverter concept: the sending-end rectifier controls emulate a synchronous motor (SM), and the receiving-end inverter emulates a synchronous generator (SG). The two converters connected by a DC line form what is called a synchronverter HVDC (SHVDC). The structure of the SHVDC is first analyzed. It is shown that the droop and voltage regulations included in the SHVDC structure are necessary and sufficient to properly define the behavior of the SHVDC. The standard parameters of the SG cannot be directly used for this structure. A specific tuning method for these parameters is proposed in order to satisfy the usual HVDC control requirements. The new tuning method is compared with standard vector control in terms of local performance and fault critical clearing time (CCT) in the neighboring zone of the link. The test network is a 4-machine power system with parallel HVDC/AC transmission. The results indicate the contribution of the proposed controller to enhancing the stability margin of the neighboring AC zone of the link.
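For context, the synchronverter concept referenced above makes the converter controller reproduce the swing dynamics of a synchronous machine. A generic form from the synchronverter literature is sketched below; the notation is illustrative and not necessarily the paper's.

```latex
% Generic synchronverter (virtual synchronous machine) swing equation:
% J        virtual inertia
% T_m      virtual mechanical torque set by the power command
% T_e      electromagnetic torque computed from the measured currents
% D_p      frequency-droop coefficient,  \omega_{ref}  reference frequency
J\,\dot{\omega} \;=\; T_m \;-\; T_e \;-\; D_p\,\bigl(\omega - \omega_{\mathrm{ref}}\bigr),
\qquad
\theta \;=\; \int \omega \,\mathrm{d}t .
```

The droop term D_p is the kind of regulation the abstract argues is needed, together with voltage regulation, to well define the SHVDC behavior.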

135 citations


Journal ArticleDOI
TL;DR: The classical Hopf formulas are used to solve initial value problems for HJ PDEs without grids or numerical approximations, yielding methods that appear to be polynomial in the dimension.
Abstract: It is well known that time-dependent Hamilton–Jacobi–Isaacs partial differential equations (HJ PDEs) play an important role in analyzing continuous dynamic games and control theory problems. An important tool for such problems when they involve geometric motion is the level set method (Osher and Sethian in J Comput Phys 79(1):12–49, 1988). This was first used for reachability problems in Mitchell et al. (IEEE Trans Autom Control 50(171):947–957, 2005) and Mitchell and Tomlin (J Sci Comput 19(1–3):323–346, 2003). The cost of these algorithms and, in fact, all PDE numerical approximations is exponential in the space dimension and time. In Darbon (SIAM J Imaging Sci 8(4):2268–2293, 2015), some connections between HJ PDE and convex optimization in many dimensions are presented. In this work, we propose and test methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations. Rather we use the classical Hopf formulas for solving initial value problems for HJ PDE (Hopf in J Math Mech 14:951–973, 1965). We have noticed that if the Hamiltonian is convex and positively homogeneous of degree one (which the latter is for all geometrically based level set motion and control and differential game problems) that very fast methods exist to solve the resulting optimization problem. This is very much related to fast methods for solving problems in compressive sensing, based on $$\ell _1$$ optimization (Goldstein and Osher in SIAM J Imaging Sci 2(2):323–343, 2009; Yin et al. in SIAM J Imaging Sci 1(1):143–168, 2008). We seem to obtain methods which are polynomial in the dimension. Our algorithm is very fast, requires very low memory and is totally parallelizable. We can evaluate the solution and its gradient in very high dimensions at $$10^{-4}$$ – $$10^{-8}$$ s per evaluation on a laptop. We carefully explain how to compute numerically the optimal control from the numerical solution of the associated initial valued HJ PDE for a class of optimal control problems. We show that our algorithms compute all the quantities we need to obtain easily the controller. In addition, as a step often needed in this procedure, we have developed a new and equally fast way to find, in very high dimensions, the closest point y lying in the union of a finite number of compact convex sets $$\Omega $$ to any point x exterior to the $$\Omega $$ . We can also compute the distance to these sets much faster than Dijkstra type “fast methods,” e.g., Dijkstra (Numer Math 1:269–271, 1959). The term “curse of dimensionality” was coined by Bellman (Adaptive control processes, a guided tour. Princeton University Press, Princeton, 1961; Dynamic programming. Princeton University Press, Princeton, 1957), when considering problems in dynamic optimization.
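For reference, the classical Hopf formula used in this work expresses the solution of the initial value problem as a finite-dimensional optimization, which is what removes the need for grids. In standard notation, with convex initial data $J$ and Fenchel-Legendre transform $J^*$:

```latex
% Initial value problem and the classical Hopf formula (standard statement):
\partial_t u + H(\nabla_x u) = 0, \qquad u(x,0) = J(x),
\\[4pt]
u(x,t) \;=\; \sup_{p \in \mathbb{R}^n}
\Bigl\{ \langle x, p\rangle \;-\; t\,H(p) \;-\; J^{*}(p) \Bigr\}.
```

Each evaluation of $u(x,t)$ is thus a single unconstrained optimization over $p \in \mathbb{R}^n$, which the paper solves with fast optimization methods related to those used in compressive sensing.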

116 citations


Journal ArticleDOI
TL;DR: The device analysis suggests that the main concern is the moderate redox stability of the complexes under high applied driving currents, leading to devices with moderate stabilities and pointing to a proof of concept for further development.
Abstract: This study presents the influence of various substituents on the photophysical features of heteroleptic copper(I) complexes bearing both N-heterocyclic carbene (NHC) and dipyridylamine (dpa = dipyridylamine skeleton corresponding to ligand L1) ligands. The luminescent properties have been compared to our recently reported archetypal blue-emitting [Cu(IPr)(dpa)][PF6] complex. The choice of the substituents on both ligands has been guided to explore the effect of electron donor/acceptor and "push-pull" substitution on the emission wavelengths and photoluminescence quantum yields. A selection of the best candidates in terms of their photophysical features was used to develop the first blue light-emitting electrochemical cells (LECs) based on copper(I) complexes. The device analysis suggests that the main concern is the moderate redox stability of the complexes under high applied driving currents, leading to devices with moderate stabilities and pointing to a proof of concept for further development. Nevertheless, under low applied driving currents the blue emission is stable, showing performance levels competitive with those reported for blue LECs based on iridium(III) complexes. Overall, this work provides valuable guidelines to tackle the design of enhanced NHC copper complexes for lighting applications in the near future.

106 citations


Journal ArticleDOI
TL;DR: In this paper, a 14-year high-resolution wave and wind hindcast was carried out for Ireland using WAVEWATCH III on an unstructured grid with resolution ranging between 10 km offshore and 225 m in the nearshore, forced by the downscaled HARMONIE 10 m winds and ERA-Interim wave spectra.

97 citations


Journal ArticleDOI
TL;DR: By exploiting the cavity feeding effect on the phonon wings, the emission of the nanotube was locked to the cavity resonance frequency, allowing the emission frequency to be tuned over a 4 THz band while keeping an almost perfect antibunching.
Abstract: The narrow emission of a single carbon nanotube at low temperature is coupled to the optical mode of a fiber microcavity using the built-in spatial and spectral matching brought by this flexible geometry. A thorough cw and time-resolved investigation of the very same emitter both in free space and in cavity shows an efficient funneling of the emission into the cavity mode together with a strong emission enhancement corresponding to a Purcell factor of up to 5. At the same time, the emitted photons retain a strong sub-Poissonian statistics. By exploiting the cavity feeding effect on the phonon wings, we locked the emission of the nanotube at the cavity resonance frequency, which allowed us to tune the frequency over a 4 THz band while keeping an almost perfect antibunching. By choosing the nanotube diameter appropriately, this study paves the way to the development of carbon-based tunable single-photon sources in the telecom bands.
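For reference, the Purcell factor quoted above (up to 5) is conventionally defined as the cavity-induced enhancement of the spontaneous emission rate; in the standard textbook form (generic expression, not this paper's specific parameters):

```latex
% Standard Purcell factor for an emitter resonant with a cavity mode:
% Q  quality factor,  V  mode volume,  \lambda  emission wavelength,  n  refractive index.
F_P \;=\; \frac{\Gamma_{\mathrm{cav}}}{\Gamma_{\mathrm{free}}}
      \;=\; \frac{3}{4\pi^{2}}\,\Bigl(\frac{\lambda}{n}\Bigr)^{3}\,\frac{Q}{V}.
```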

94 citations


Journal ArticleDOI
TL;DR: A scanning quantum probe microscope is reported which solves both issues by employing a nanospin ensemble hosted in a nanodiamond, providing up to an order of magnitude gain in acquisition time while preserving sub-100 nm spatial resolution both for the quantum sensor and topographic images.
Abstract: Quantum sensors based on solid-state spins provide tremendous opportunities in a wide range of fields from basic physics and chemistry to biomedical imaging. However, integrating them into a scanning probe microscope to enable practical, nanoscale quantum imaging is a highly challenging task. Recently, the use of single spins in diamond in conjunction with atomic force microscopy techniques has allowed significant progress toward this goal, but generalization of this approach has so far been impeded by long acquisition times or by the absence of simultaneous topographic information. Here, we report on a scanning quantum probe microscope which solves both issues by employing a nanospin ensemble hosted in a nanodiamond. This approach provides up to an order of magnitude gain in acquisition time while preserving sub-100 nm spatial resolution both for the quantum sensor and topographic images. We demonstrate two applications of this microscope. We first image nanoscale clusters of maghemite particles through ...

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the physics of the slamming process on bottom-hinged Oscillating Wave Surge Converters (OWSCs) using a high-speed camera and numerical results.

Posted Content
TL;DR: This work proposes and tests methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations and develops a new and equally fast way to find the closest point y lying in the union of a finite number of compact convex sets.
Abstract: It is well known that time-dependent Hamilton-Jacobi-Isaacs partial differential equations (HJ PDE) play an important role in analyzing continuous dynamic games and control theory problems. An important tool for such problems when they involve geometric motion is the level set method. This was first used for reachability problems. The cost of these algorithms, and, in fact, of all PDE numerical approximations, is exponential in the space dimension and time. In this work we propose and test methods for solving a large class of the HJ PDE relevant to optimal control problems without the use of grids or numerical approximations. Rather we use the classical Hopf formulas for solving initial value problems for HJ PDE. We have noticed that if the Hamiltonian is convex and positively homogeneous of degree one, very fast methods exist to solve the resulting optimization problem. This is very much related to fast methods for solving problems in compressive sensing, based on $\ell_1$ optimization. We seem to obtain methods which are polynomial in the dimension. Our algorithm is very fast, requires very low memory and is totally parallelizable. We can evaluate the solution and its gradient in very high dimensions at $10^{-4}$ to $10^{-8}$ seconds per evaluation on a laptop. We carefully explain how to compute numerically the optimal control from the numerical solution of the associated initial valued HJ-PDE for a class of optimal control problems. We show that our algorithms compute all the quantities we need to obtain easily the controller. The term "curse of dimensionality" was coined by Richard Bellman in 1957 when considering problems in dynamic optimization.

Journal Article
TL;DR: This paper considers the problems of supervised classification and regression in the case where attributes and labels are functions: each data point is represented by a set of functions, and the label is also a function.
Abstract: In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: each data point is represented by a set of functions, and the label is also a function. We focus on the use of reproducing kernel Hilbert space theory to learn from such functional data. Basic concepts and properties of kernel-based learning are extended to include the estimation of function-valued functions. In this setting, the representer theorem is restated, a set of rigorously defined infinite-dimensional operator-valued kernels that can be valuably applied when the data are functions is described, and a learning algorithm for nonlinear functional data analysis is introduced. The methodology is illustrated through speech and audio signal processing experiments.
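A minimal statement of the restated representer theorem in this function-valued setting, in the generic form used in the operator-valued kernel literature (notation illustrative): the minimizer of a regularized empirical risk over the reproducing kernel Hilbert space of an operator-valued kernel K admits a finite expansion over the training inputs.

```latex
% \hat{f} minimizes  \sum_i L(y_i, f(x_i)) + \lambda \|f\|^2  over the RKHS of K;
% each K(x, x_i) is an operator acting on the output (function) space \mathcal{Y}.
\hat{f}(x) \;=\; \sum_{i=1}^{n} K(x, x_i)\, c_i ,
\qquad c_i \in \mathcal{Y},
```

so learning reduces to estimating the finitely many coefficients $c_i$, which here are themselves functions.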

Journal ArticleDOI
TL;DR: 207Pb NMR appears to be a very promising tool for the characterisation of local order in mixed halogen hybrid perovskite lattices.
Abstract: We report on 207Pb, 79Br, 14N, 1H, 13C and 2H NMR experiments for studying the local order and dynamics in hybrid perovskite lattices. 207Pb NMR experiments conducted at room temperature on a series of MAPbX3 compounds (MA = CH3NH3+; X = Cl, Br and I) showed that the isotropic 207Pb NMR shift is strongly dependent on the nature of the halogen ions. Therefore 207Pb NMR appears to be a very promising tool for the characterisation of local order in mixed halogen hybrid perovskites. 207Pb NMR on MAPbBr2I served as a proof of concept. Proton, 13C and 14N NMR experiments confirmed the results previously reported in the literature. Low temperature deuterium NMR measurements, down to 25 K, were carried out to investigate the structural phase transitions of MAPbBr3. Spectral lineshapes allow following the successive phase transitions of MAPbBr3. Finally, quadrupolar NMR lineshapes recorded in the orthorhombic phase were compared with simulated spectra, using DFT calculated electric field gradients (EFG). Computed data do not take into account any temperature effect. Thus, the discrepancy between the calculated and experimental EFG evidences the fact that MA cations are still subject to significant dynamics, even at 25 K.

Journal ArticleDOI
TL;DR: This study encourages coaches to monitor s-IgA routinely, particularly during HI training periods, to take precautions to avoid upper respiratory tract infection in highly trained soccer players.
Abstract: Owen, AL, Wong, DP, Dunlop, G, Groussard, C, Kebsi, W, Dellal, A, Morgans, R, and Zouhal, H. High-intensity training and salivary immunoglobulin A responses in professional top-level soccer players: Effect of training intensity. J Strength Cond Res 30(9): 2460-2469, 2016-This study aimed (a) to test the hypothesis that salivary immunoglobulin A (s-IgA) would vary with training intensity sessions (low-intensity [LI] vs. high-intensity sessions [HI]) during a traditional training program divided into 4 training periods and (b) to identify key variables (e.g., GPS data, rating of perceived exertion [RPE], and training duration), which could affect s-IgA. Saliva samples of 10 elite professional soccer players were collected (a) before the investigation started to establish the baseline level and (b) before and after each of the 4 training sessions (LI vs. HI). Training intensity was monitored as internal (through heart rate responses and RPE) and external (through GPS) loads. High-intensity sessions were associated with higher external load (GPS) and with higher RPE. Baseline and pretraining s-IgA did not differ between the 4 training sessions both for HI and LI. Post-training s-IgA were not different (in absolute value and in percentage of change) between HI and LI sessions at the first 3 periods. However, at the fourth period, s-IgA concentration for the HI session was significantly lower (p ≤ 0.05) than for the LI session. The percentage change between s-IgA post-training and s-IgA baseline concentrations differed significantly (p ≤ 0.05) between HI and LI training sessions. Significant correlations between s-IgA and training intensity were also noted. High-intensity soccer training sessions might cause a significant decrease in s-IgA values during the postexercise window as compared with LI sessions. This study encourages coaches to monitor s-IgA routinely, particularly during HI training periods, to take precautions to avoid upper respiratory tract infection in highly trained soccer players.

Journal ArticleDOI
TL;DR: In this paper, an integrated extended finite element method (XFEM) and cohesive element (CE) method is developed for three-dimensional (3D) delamination migration in multi-directional composite laminates, and the results are validated against an experiment performed on a double-cantilever beam (DCB).
Abstract: Progressive damage and failure in composites are generally complex and involve multiple interacting failure modes. Depending on factors such as lay-up sequence, loading and specimen configurations, failure may be dominated by extensive matrix crack-delamination interactions, which are very difficult to model accurately. The present study further develops an integrated extended finite element method (XFEM) and cohesive element (CE) method for three-dimensional (3D) delamination migration in multi-directional composite laminates, and validates the results with experiment performed on a double-cantilever beam (DCB). The plies are modeled by using XFEM brick elements, while the interfaces are modeled using CEs. The interaction between matrix crack and delamination is achieved by enriching the nodes of cohesive element. The mechanisms of matrix fracture and delamination migration are explained and discussed. Matrix crack initiation and propagation can be predicted and delamination migration is also observed in the results. The algorithm provides for the prediction of matrix crack angles through the ply thickness. The proposed method provides a platform for the realistic simulation of progressive failure of composite laminates.

Journal ArticleDOI
TL;DR: This work reports the first example of a self-organized monolayer (SOM) obtained using diazonium electroreduction, an original way to obtain well-controlled and stable functionalized surfaces for potential applications related to the photophysical properties of the grafted chromophore.
Abstract: A new heteroleptic polypyridyl Ru(II) complex was synthesized and deposited on a surface by the diazonium electroreduction process. It yields the covalent grafting of a monolayer. The functionalized surface was characterized by XPS, electrochemistry, AFM, and STM. A precise organization of the molecules within the monolayer is observed with parallel linear stripes separated by a distance of 3.8 nm corresponding to the lateral size of the molecule. Such organization suggests a strong cooperative effect during the deposition process. This strategy is an original way to obtain well-controlled and stable functionalized surfaces for potential applications related to the photophysical properties of the grafted chromophore. As an exciting result, it is the first example of a self-organized monolayer (SOM) obtained using diazonium electroreduction.

Journal ArticleDOI
TL;DR: The biology and pathogenesis of advanced systemic mastocytosis is reviewed, with a special focus on novel molecular findings as well as current and evolving therapeutic options.
Abstract: Systemic mastocytosis is a heterogeneous disease characterized by the accumulation of neoplastic mast cells in the bone marrow and other organs/tissues. Mutations in KIT, most frequently KIT D816V, are detected in over 80% of all systemic mastocytosis patients. While most systemic mastocytosis patients suffer from an indolent disease variant, some present with more aggressive variants, collectively called "advanced systemic mastocytosis", which include aggressive systemic mastocytosis, systemic mastocytosis with an associated hematologic clonal non-mast cell lineage disease, and mast cell leukemia. Whereas patients with indolent systemic mastocytosis have a near normal life expectancy, patients with advanced systemic mastocytosis have a reduced life expectancy. Although cladribine and interferon-alpha are of benefit in a group of patients with advanced systemic mastocytosis, no curative therapy is available for these patients except possible allogeneic hematopoietic stem cell transplantation. Recent studies have also revealed additional somatic defects (apart from mutations in KIT) in a majority of patients with advanced systemic mastocytosis. These include TET2, SRSF2, ASXL1, RUNX1, JAK2, and/or RAS mutations, which may adversely impact prognosis and survival, in particular in systemic mastocytosis with an associated hematological neoplasm. In addition, several other signaling molecules involved in the abnormal proliferation of mast cells in systemic mastocytosis have been identified. These advances have led to a better understanding of the biology of advanced systemic mastocytosis and to the development of new targeted treatment concepts. Herein, we review the biology and pathogenesis of advanced systemic mastocytosis, with a special focus on novel molecular findings as well as current and evolving therapeutic options.

Journal ArticleDOI
TL;DR: In this article, the power factor correction scheme for a single-phase on-board charger of electric vehicles is studied, where an inductor-capacitor (LC) input filter is employed.
Abstract: This paper aims to study the power factor (PF) correction scheme for a single-phase on-board charger of electric vehicles. The topology is based on a unidirectional current source active rectifier (CSAR) consisting of four insulated-gate bipolar transistors in series with four diodes followed by a boost converter. Buck-type rectifiers inject low-order input current harmonics into the ac mains. Thus, an inductor-capacitor (LC) input filter is employed. The capacitor's reactive energy results in a leading grid current. In order to achieve a unity displacement power factor, a phase shift control is implemented. However, the LC filter is prone to series and parallel resonances coming from the grid disturbances and the converter harmonics, respectively. Therefore, the phase shift control strategy combined with the topology of the CSAR results in a periodic resonance of the input filter. This phenomenon is studied in detail. In order to reduce the grid current's distortion level, an active damping control with resonance frequency tracking that achieves a good PF while meeting the IEC's international standards on harmonic current emissions is presented. An experimental test bench is developed to validate the theoretical and simulation findings. Compliance with the standards is achieved and system limitations are discussed.
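For context, the series and parallel resonances mentioned above are those of the input LC filter; its undamped resonance frequency takes the usual form (generic expression, not the paper's component values):

```latex
% Undamped resonance frequency of the LC input filter:
f_{\mathrm{res}} \;=\; \frac{1}{2\pi\sqrt{L\,C}} ,
```

so grid or converter harmonics near $f_{\mathrm{res}}$ are amplified unless damped, which is what the proposed active damping with resonance-frequency tracking addresses.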

Book ChapterDOI
17 Jul 2016
TL;DR: In this paper, an approach is proposed to find a martingale, an expression on the program variables whose expectation remains invariant, and then apply the optional stopping theorem in order to infer properties at termination time.
Abstract: When analyzing probabilistic computations, a powerful approach is to first find a martingale—an expression on the program variables whose expectation remains invariant—and then apply the optional stopping theorem in order to infer properties at termination time. One of the main challenges, then, is to systematically find martingales.
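A textbook instance of this recipe (not taken from the chapter itself): for a symmetric ±1 random walk $X_t$ started at $x_0$ and stopped at the first time $\tau$ it hits $0$ or $N$, the walk itself is a martingale, and the optional stopping theorem yields the termination probability directly.

```latex
% Gambler's ruin via optional stopping: X_t is a martingale, \tau = first hit of {0, N}.
\mathbb{E}[X_\tau] = \mathbb{E}[X_0] = x_0
\;\Longrightarrow\;
N \cdot \Pr[X_\tau = N] + 0 \cdot \Pr[X_\tau = 0] = x_0
\;\Longrightarrow\;
\Pr[X_\tau = N] = \frac{x_0}{N}.
```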

Journal ArticleDOI
13 Jan 2016-PLOS ONE
TL;DR: The analysis reveals that three major US markets and two European markets did not exhibit critical slowing down prior to major financial crashes over the last century, and shows that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets.
Abstract: Complex systems inspired analysis suggests a hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that approach to tipping points is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real world scenarios. Our study offers rising variability as a precursor of financial meltdowns albeit with the limitation that it may signal false alarms.
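A minimal sketch of the two indicators discussed above, computed on a rolling window of an index series: lag-1 autocorrelation, which should rise under critical slowing down, and variance, which the study reports as rising before crashes. The window length and the detrending by mean removal are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def rolling_indicators(series, window=500):
    """Rolling lag-1 autocorrelation and variance of a 1-D series.
    Rising autocorrelation would indicate critical slowing down; the study
    finds rising variance (but not autocorrelation) before crashes."""
    series = np.asarray(series, dtype=float)
    ac1, var = [], []
    for end in range(window, len(series) + 1):
        w = series[end - window:end]
        w = w - w.mean()                      # simple detrending by mean removal
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(ac1), np.array(var)

# Example on synthetic data whose noise strength slowly increases:
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(scale=np.linspace(1.0, 3.0, 5000)))
autocorr, variance = rolling_indicators(x, window=500)
```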

Journal ArticleDOI
TL;DR: In this paper, a methodology involving multiple optimization strategies is presented to arrive at the solution to the complex problem of optimization of the layouts of arrays of wave energy converters (WECs).

Journal ArticleDOI
TL;DR: In this article, the inhomogeneous Landau equation on the torus is investigated in the cases of hard, Maxwellian and moderately soft potentials; exponential decay estimates are proved for the linearized semigroup, and this decay is then used to construct solutions in a close-to-equilibrium setting.
Abstract: This work deals with the inhomogeneous Landau equation on the torus in the cases of hard, Maxwellian and moderately soft potentials. We first investigate the linearized equation and we prove exponential decay estimates for the associated semigroup. We then turn to the nonlinear equation and we use the linearized semigroup decay in order to construct solutions in a close-to-equilibrium setting. Finally, we prove exponential stability for such a solution, with a rate as close as we want to the optimal rate given by the semigroup decay.
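Schematically, the exponential semigroup decay described above has the generic form below; the notation is illustrative, since the paper works in specific weighted Sobolev spaces and the constants depend on the potential.

```latex
% S_L(t): semigroup of the Landau equation linearized around the Maxwellian equilibrium;
% \Pi: projection onto the conserved quantities (null space of the linearized operator).
\bigl\| S_{\mathcal{L}}(t)\, f_0 \bigr\|_{\mathcal{X}}
\;\le\; C\, e^{-\lambda t}\, \bigl\| f_0 \bigr\|_{\mathcal{X}}
\qquad \text{for all } f_0 \text{ with } \Pi f_0 = 0, \;\; \lambda > 0 .
```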

Journal ArticleDOI
01 Aug 2016-Carbon
TL;DR: In this article, polypyrrole (PPy)-graphene sheet nanocomposites have been synthesized after making reduced graphene react with tetrazine derivatives through an inverse electron demand Diels-Alder reaction.

Journal ArticleDOI
TL;DR: In this paper, solutions of the Vlasov-HMF model starting near a spatially homogeneous state satisfying a linearized stability criterion (Penrose criterion) are shown to exhibit a scattering behavior to a modified state, which implies a nonlinear Landau damping effect.
Abstract: We consider the Vlasov-HMF (Hamiltonian Mean-Field) model. We consider solutions starting in a small Sobolev neighborhood of a spatially homogeneous state satisfying a linearized stability criterion (Penrose criterion). We prove that these solutions exhibit a scattering behavior to a modified state, which implies a nonlinear Landau damping effect with polynomial rate of damping.
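For reference, the Vlasov-HMF model studied here is the mean-field transport equation on the torus with a cosine interaction kernel; in a standard form (sign and normalization conventions may differ from the paper's):

```latex
% f = f(t, x, v), with x on the torus and v real; \phi[f] is the mean-field potential.
\partial_t f + v\,\partial_x f - \partial_x \phi[f](t,x)\,\partial_v f = 0,
\qquad
\phi[f](t,x) = -\int_{\mathbb{T}} \int_{\mathbb{R}} \cos(x - y)\, f(t, y, v)\, \mathrm{d}v\, \mathrm{d}y .
```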

Journal ArticleDOI
10 Feb 2016-ACS Nano
TL;DR: On-surface synthesis unites the promises of molecular materials and of self-assembly with the sturdiness of covalently bonded structures, an ideal scenario for future applications; the synthesis of functional extended nanowires by self-assembly is reported.
Abstract: The tunable properties of molecular materials place them among the favorites for a variety of future generation devices. In addition, to maintain the current trend of miniaturization of those devices, a departure from the present top-down production methods may soon be required and self-assembly appears among the most promising alternatives. On-surface synthesis unites the promises of molecular materials and of self-assembly, with the sturdiness of covalently bonded structures: an ideal scenario for future applications. Following this idea, we report the synthesis of functional extended nanowires by self-assembly. In particular, the products correspond to one-dimensional organic semiconductors. The uniaxial alignment provided by our substrate templates allows us to access with exquisite detail their electronic properties, including the full valence band dispersion, by combining local probes with spatial averaging techniques. We show how, by selectively doping the molecular precursors, the product’s energy...

Journal ArticleDOI
TL;DR: It is found that the red-emitting BODIPY fluorophores are sensitive to environmental temperature rather than to viscosity, thus suggesting a new prototype for a ‘molecular thermometer’.
Abstract: Viscosity variations in the microscopic world are of paramount importance for diffusion and reactions. In the last decade a new class of fluorescent probes for measuring viscosity has emerged termed ‘molecular rotors’, which allows quantitative mapping of viscosity in microscopically heterogeneous environments. Here we attempt to tune the absorption and emission of one such ‘molecular rotor’ based on the BODIPY fluorescent core into the red region of the spectrum, to allow better compatibility with the ‘tissue optical window’ and imaging of cells and tissues. We consequently find that our red-emitting BODIPY fluorophores are sensitive to environmental temperature rather than to viscosity, thus suggesting a new prototype for a ‘molecular thermometer’.

Journal ArticleDOI
TL;DR: In this paper, a new ion source concept (Cybele source) which is based on a magnetized plasma column is presented. But the main challenge of this new injector concept is the achievement of a very high power photon flux which could be provided by 3 MW Fabry-Perot optical cavities implanted along the 1 MeV D− beam in the neutralizer stage.
Abstract: In parallel to the developments dedicated to the ITER neutral beam (NB) system, CEA-IRFM with laboratories in France and Switzerland are studying the feasibility of a new generation of NB system able to provide heating and current drive for the future DEMOnstration fusion reactor. For the steady-state scenario, the NB system will have to provide a high NB power level with a high wall-plug efficiency (η∼60%). Neutralization of the energetic negative ions by photodetachment (so-called photoneutralization), if feasible, appears to be the ideal solution to meet these performances, in the sense that it could offer a high beam neutralization rate (>80%) and a wall-plug efficiency higher than 60%. The main challenge of this new injector concept is the achievement of a very high power photon flux which could be provided by 3 MW Fabry-Perot optical cavities implanted along the 1 MeV D− beam in the neutralizer stage. The beamline topology is tall and narrow to provide laminar ion beam sheets, which will be entirely illuminated by the intra-cavity photon beams propagating along the vertical axis. The paper describes the present R&D (experiments and modelling) addressing the development of a new ion source concept (Cybele source) which is based on a magnetized plasma column. Parametric studies of the source are performed using Langmuir probes in order to characterize and compare the plasma parameters in the source column with different plasma generators, such as filamented cathodes, a radio-frequency driver and a helicon antenna specifically developed at SPC-EPFL satisfying the requirements for the Cybele (axial magnetic field of 10 mT, source operating pressure: 0.3 Pa in hydrogen or deuterium). The paper compares the performances of the three plasma generators. It is shown that the helicon plasma generator is a very promising candidate to provide an intense and uniform negative ion beam sheet.

Journal ArticleDOI
TL;DR: In this paper, a theoretical underpinning is provided to the recent experimental observation of the analog Hawking effect in a BEC system, while also raising some relevant issues that need to be addressed to make the observation entirely reliable.
Abstract: This paper provides a theoretical underpinning to the recent experimental observation of the analog Hawking effect in a BEC system, while also raising some relevant issues that need to be addressed to make the observation entirely reliable. Additionally, the paper unveils a new effect through the density two-point function, which will certainly trigger new work in the active field of analog gravity.