Showing papers by "École Normale Supérieure published in 2013"


Journal ArticleDOI
TL;DR: The effects of subunit composition on NMDAR properties, synaptic plasticity and cellular mechanisms implicated in neuropsychiatric disorders are reviewed; understanding the rules and roles of NMDAR diversity could provide new therapeutic strategies against dysfunctions of glutamatergic transmission.
Abstract: NMDA receptors (NMDARs) are glutamate-gated ion channels and are crucial for neuronal communication. NMDARs form tetrameric complexes that consist of several homologous subunits. The subunit composition of NMDARs is plastic, resulting in a large number of receptor subtypes. As each receptor subtype has distinct biophysical, pharmacological and signalling properties, there is great interest in determining whether individual subtypes carry out specific functions in the CNS in both normal and pathological conditions. Here, we review the effects of subunit composition on NMDAR properties, synaptic plasticity and cellular mechanisms implicated in neuropsychiatric disorders. Understanding the rules and roles of NMDAR diversity could provide new therapeutic strategies against dysfunctions of glutamatergic transmission.

1,918 citations


Journal ArticleDOI
TL;DR: The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification.
Abstract: A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.

1,337 citations
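The cascade described above — wavelet convolutions, a pointwise modulus, then averaging — can be sketched in a few lines of 1-D code. The Gabor-style filter bank below is a toy stand-in for the paper's wavelets (the centre frequencies and two-layer depth are illustrative choices, not the authors' implementation):

```python
import numpy as np

def gabor_bank(n, n_scales=4):
    """Band-pass Gabor-like filters in the Fourier domain (illustrative)."""
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(n_scales):
        xi = 0.25 / 2**j                  # centre frequency halves per octave
        sigma = xi / 2.0
        bank.append(np.exp(-((freqs - xi) ** 2) / (2 * sigma**2)))
    return bank

def scattering(x, n_scales=4):
    """Two-layer scattering: cascade |x * psi|, then global averaging."""
    bank = gabor_bank(len(x), n_scales)
    X = np.fft.fft(x)
    coeffs = [x.mean()]                   # zeroth order: plain average
    for j1, psi1 in enumerate(bank):
        u1 = np.abs(np.fft.ifft(X * psi1))   # wavelet modulus, layer 1
        coeffs.append(u1.mean())             # first-order coefficient
        U1 = np.fft.fft(u1)
        for psi2 in bank[j1 + 1:]:           # layer 2 uses coarser scales
            u2 = np.abs(np.fft.ifft(U1 * psi2))
            coeffs.append(u2.mean())         # second-order coefficient
    return np.array(coeffs)

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
s1 = scattering(x)
s2 = scattering(np.roll(x, 100))
```

Because each path ends in a global average, the coefficients are unchanged by circular shifts of the input — the translation invariance the abstract refers to — while the second-order paths retain high-frequency information that the plain average discards.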


Posted Content
TL;DR: This chapter of the Handbook of Income Distribution summarizes the literature on equality of opportunity, provides evidence from surveys and experiments on popular conceptions of equality, and reviews the empirical literature on inequality of opportunity to date.
Abstract: This forthcoming chapter in the Handbook of Income Distribution (eds., A. Atkinson and F. Bourguignon) summarizes the literature on equality of opportunity. We begin by reviewing the philosophical debate concerning equality since Rawls (sections 1 and 2), present economic algorithms for computing policies which equalize opportunities, or, more generally, ways of ordering social policies with respect to their efficacy in opportunity equalization (sections 3, 4 and 5), apply the approach to the conceptualization of economic development (section 6), discuss dynamic issues (section 7), give a preamble to a discussion of empirical work (section 8), provide evidence of population views from surveys and experiments concerning conceptions of equality (section 9), and discuss measurement issues, summarizing the empirical literature on inequality of opportunity to date (section 10). We conclude with mention of some critiques of the equal-opportunity approach, and some predictions (section 11).

1,182 citations


Journal ArticleDOI
TL;DR: The ring-LWE distribution is shown to be pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms.
Abstract: The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives).We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.

1,114 citations
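To make the ring-LWE object concrete, here is a toy sample generator over R_q = Z_q[x]/(x^n + 1), where multiplication is a negacyclic convolution. The parameters (n = 8, q = 97, small bounded noise) are purely illustrative and nowhere near cryptographic size:

```python
import numpy as np

def negacyclic_mul(a, b, q):
    """Multiply in Z_q[x]/(x^n + 1): ordinary polynomial product,
    then fold the high coefficients back with a sign flip (x^n = -1)."""
    n = len(a)
    full = np.convolve(a, b)          # degree up to 2n - 2
    res = full[:n].copy()
    res[: n - 1] -= full[n:]          # x^(n+k) contributes -x^k
    return res % q

def rlwe_sample(s, q, rng, noise=2):
    """One ring-LWE pair (a, b = a*s + e) with small bounded noise e."""
    n = len(s)
    a = rng.integers(0, q, n)
    e = rng.integers(-noise, noise + 1, n)
    b = (negacyclic_mul(a, s, q) + e) % q
    return a, b, e

rng = np.random.default_rng(1)
n, q = 8, 97                          # toy parameters, not secure
s = rng.integers(0, q, n)             # secret ring element
a, b, e = rlwe_sample(s, q, rng)
# The holder of s strips a*s from b and recovers the noise exactly:
recovered = (b - negacyclic_mul(a, s, q)) % q
```

Without s, the pair (a, b) is pseudorandom under the ring-LWE assumption; with s, the small error term is recovered exactly, which is what the cryptographic applications exploit.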


Journal ArticleDOI
TL;DR: The technical part of these Guidelines and Recommendations provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based.
Abstract: The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.

1,020 citations


Journal ArticleDOI
TL;DR: The clinical part of these Guidelines and Recommendations produced under the auspices of the European Federation of Societies for Ultrasound in Medicine and Biology EFSUMB assesses the clinically used applications of all forms of elastography, stressing the evidence from meta-analyses and giving practical advice for their uses and interpretation.
Abstract: The clinical part of these Guidelines and Recommendations, produced under the auspices of the European Federation of Societies for Ultrasound in Medicine and Biology (EFSUMB), assesses the clinically used applications of all forms of elastography, stressing the evidence from meta-analyses and giving practical advice for their uses and interpretation. Diffuse liver disease forms the largest section, reflecting the wide experience with transient and shear wave elastography. Sections then follow on the breast, thyroid, gastro-intestinal tract, endoscopic elastography, the prostate and the musculo-skeletal system, using strain and shear wave elastography as appropriate. The document is intended to form a reference and to guide clinical users in a practical way.

830 citations


Journal ArticleDOI
TL;DR: The engineering of an organic electrochemical transistor embedded in an ultrathin organic film designed to record electrophysiological signals on the surface of the brain with superior signal-to-noise ratio is demonstrated.
Abstract: In vivo electrophysiological recordings of neuronal circuits are necessary for diagnostic purposes and for brain-machine interfaces. Organic electronic devices constitute a promising candidate because of their mechanical flexibility and biocompatibility. Here we demonstrate the engineering of an organic electrochemical transistor embedded in an ultrathin organic film designed to record electrophysiological signals on the surface of the brain. The device, tested in vivo on epileptiform discharges, displayed superior signal-to-noise ratio due to local amplification compared with surface electrodes. The organic transistor was able to record on the surface low-amplitude brain activities, which were poorly resolved with surface electrodes. This study introduces a new class of biocompatible, highly flexible devices for recording brain activity with superior signal-to-noise ratio that hold great promise for medical applications.

761 citations


Posted Content
TL;DR: In this paper, the stochastic average gradient (SAG) method was proposed to optimize the sum of a finite number of smooth convex functions, which achieves a faster convergence rate than black-box SG methods.
Abstract: We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.

744 citations
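The memory mechanism described above can be sketched directly: keep a table with the last gradient seen for each term, replace one entry per iteration, and step along the table average. The toy problem below is a sum of one-dimensional quadratics whose minimizer is the mean of the centres; the step size and iteration count are illustrative, not the tuned values from the paper:

```python
import numpy as np

def sag(grad_i, n, x0, step, iters, rng):
    """Stochastic average gradient on f(x) = (1/n) * sum_i f_i(x)."""
    x = float(x0)
    table = np.zeros(n)                  # last gradient seen for each term
    total = 0.0                          # running sum of the table
    for _ in range(iters):
        i = rng.integers(n)              # sample one term
        g = grad_i(i, x)
        total += g - table[i]            # swap in the fresh gradient of term i
        table[i] = g
        x -= step * total / n            # step along the averaged gradient
    return x

rng = np.random.default_rng(2)
c = rng.standard_normal(50)
grad = lambda i, x: x - c[i]             # f_i(x) = (x - c_i)^2 / 2
x_star = sag(grad, len(c), x0=0.0, step=0.01, iters=20000, rng=rng)
```

Unlike plain SG, the averaged gradient vanishes exactly at the optimum (the stored gradients cancel), which is why no decaying step size is needed and linear convergence is possible for strongly convex sums.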


Journal ArticleDOI
TL;DR: A way of encoding sparse data using a “nonbacktracking” matrix is presented, and it is shown that the corresponding spectral algorithm performs optimally for some popular generative models, including the stochastic block model.
Abstract: Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here, we present a class of spectral algorithms based on a nonbacktracking walk on the directed edges of the graph. The spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all of the way down to the theoretical limit. We also show the spectrum of the nonbacktracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.

702 citations
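The nonbacktracking operator itself is easy to write down: it acts on directed edges, and an entry is 1 exactly when one edge can follow another without reversing. A small sketch on the complete graph K4, where every vertex has degree d = 3, so each row has d - 1 = 2 ones and the Perron eigenvalue equals d - 1 = 2:

```python
import numpy as np
from itertools import combinations

def nonbacktracking(edges):
    """Build B on directed edges: B[(u->v),(v->w)] = 1 iff w != u,
    i.e. the walk continues through v without backtracking to u."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {e: k for k, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (u, v), k in idx.items():
        for (v2, w), l in idx.items():
            if v2 == v and w != u:
                B[k, l] = 1.0
    return B, directed

edges = list(combinations(range(4), 2))   # K4: 6 edges, 12 directed edges
B, directed = nonbacktracking(edges)
eigs = np.linalg.eigvals(B)
```

In the community-detection setting, the eigenvectors of B associated with the real eigenvalues outside the bulk are aggregated per vertex to label communities; the point of the construction is that, unlike the adjacency matrix, high-degree vertices do not pollute the bulk spectrum in sparse graphs.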


Proceedings ArticleDOI
13 May 2013
TL;DR: This work proposes a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices, rather than edges, across partitions; the decomposition is based on a streaming algorithm.
Abstract: Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models have been proposed to study such graphs, their analysis is still difficult due to the scale and nature of the data. We propose a framework for large-scale graph decomposition and inference. To resolve the scale, our framework is distributed so that the data are partitioned over a shared-nothing set of machines. We propose a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices rather than edges across partitions. Our decomposition is based on a streaming algorithm. It is network-aware as it adapts to the network topology of the underlying computational hardware. We use local copies of the variables and an efficient asynchronous communication protocol to synchronize the replicated values in order to perform most of the computation without having to incur the cost of network communication. On a graph of 200 million vertices and 10 billion edges, derived from an email communication network, our algorithm retains convergence properties while allowing for almost linear scalability in the number of computers.

655 citations
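One simple instance of the idea of cutting vertices rather than edges is a greedy streaming partitioner: each arriving edge goes to the machine that already holds its endpoints, with ties broken by load, so that vertex replicas (the quantity driving network communication) stay low. This is an illustration of the principle, not the paper's algorithm:

```python
from collections import defaultdict

def stream_partition(edges, k):
    """Greedily assign each edge to one of k parts, preferring parts
    that already hold the edge's endpoints, then the least-loaded part."""
    placed = defaultdict(set)      # vertex -> set of parts holding a copy
    load = [0] * k
    assignment = []
    for u, v in edges:
        best = min(
            range(k),
            key=lambda p: (-((p in placed[u]) + (p in placed[v])),  # locality first
                           load[p]),                                 # then balance
        )
        placed[u].add(best)
        placed[v].add(best)
        load[best] += 1
        assignment.append(best)
    # replication factor: average number of machine copies per vertex
    rep = sum(len(s) for s in placed.values()) / len(placed)
    return assignment, rep

assignment, rep = stream_partition([(0, 1), (2, 3), (4, 5), (6, 7)], k=2)
```

On this toy stream of disjoint edges the partitioner alternates machines for balance while never replicating a vertex (replication factor 1.0); on real power-law graphs the interesting trade-off is between load balance and the replication of high-degree vertices.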


Journal ArticleDOI
TL;DR: Cappellari et al. as mentioned in this paper constructed detailed axisymmetric dynamical models (Jeans Anisotropic MGE), which allow for orbital anisotropy, include a dark matter halo and reproduce in detail both the galaxy images and the high-quality integral-field stellar kinematics out to about 1 R_e, the projected half-light radius.
Abstract: We study the volume-limited and nearly mass-selected (stellar mass M* ≳ 6 × 10^9 M_⊙) ATLAS3D sample of 260 early-type galaxies (ETGs, ellipticals Es and lenticulars S0s). We construct detailed axisymmetric dynamical models (JAM: Jeans Anisotropic MGE), which allow for orbital anisotropy, include a dark matter halo and reproduce in detail both the galaxy images and the high-quality integral-field stellar kinematics out to about 1 R_e, the projected half-light radius. We derive accurate total mass-to-light ratios (M/L)_e and dark matter fractions f_DM within a sphere of radius R_e centred on the galaxies. We also measure the stellar (M/L)_stars and derive a median dark matter fraction f_DM = 13 per cent in our sample. We infer masses M_JAM ≡ L × (M/L)_e ≈ 2 × M_1/2, where M_1/2 is the total mass within a sphere enclosing half of the galaxy light. We find that the thin two-dimensional subset spanned by galaxies in the (M_JAM, σ_e, R_e^maj) coordinate system, which we call the Mass Plane (MP), has an observed rms scatter of 19 per cent, which implies an intrinsic one of 11 per cent. Here R_e^maj is the major axis of an isophote enclosing half of the observed galaxy light, while σ_e is measured within that isophote. The MP satisfies the scalar virial relation M_JAM ∝ σ_e^2 R_e^maj within our tight errors. This shows that the larger scatter in the Fundamental Plane (FP) (L, σ_e, R_e) is due to stellar population effects [including trends in the stellar initial mass function (IMF)]. It confirms that the FP deviation from the virial exponents is due to a genuine (M/L)_e variation. However, the details of how both R_e and σ_e are determined are critical in defining the precise deviation from the virial exponents. The main uncertainty in masses or M/L estimates using the scalar virial relation is in the measurement of R_e.
This problem is already relevant for nearby galaxies and may cause significant biases in virial mass and size determinations at high redshift. Dynamical models can eliminate these problems. We revisit the (M/L)_e–σ_e relation, which describes most of the deviations between the MP and the FP. The best-fitting relation is (M/L)_e ∝ σ_e^0.72 (r band). It provides an upper limit to any systematic increase of the IMF mass normalization with σ_e. The correlation is shallower and has smaller scatter for slow-rotating systems or for galaxies in Virgo. For the latter, when using the best distance estimates, we observe a scatter in (M/L)_e of 11 per cent, and infer an intrinsic one of 8 per cent. We perform an accurate empirical study of the link between σ_e and the galaxies' circular velocity V_circ within 1 R_e (where stars dominate) and find the relation max(V_circ) ≈ 1.76 × σ_e, which has an observed scatter of 7 per cent. The accurate parameters described in this paper are used in the companion Paper XX (Cappellari et al.) of this series to explore the variation of global galaxy properties, including the IMF, on the projections of the MP.

Journal ArticleDOI
TL;DR: In this article, the authors derived accurate total mass-to-light ratios (M/L)_JAM ≈ (M/L)(r = R_e) within a sphere of radius r = R_e centred on the galaxy, as well as stellar (M/L)_stars (with the dark matter removed), for the volume-limited and nearly mass-selected (stellar mass M* ≳ 6 × 10^9 M_⊙) ATLAS3D sample of 260 early-type galaxies (ETGs).
Abstract: In the companion Paper XV of this series, we derive accurate total mass-to-light ratios (M/L)_JAM ≈ (M/L)(r = R_e) within a sphere of radius r = R_e centred on the galaxy, as well as stellar (M/L)_stars (with the dark matter removed), for the volume-limited and nearly mass-selected (stellar mass M* ≳ 6 × 10^9 M_⊙) ATLAS3D sample of 260 early-type galaxies (ETGs, ellipticals Es and lenticulars S0s). Here, we use those parameters to study the two orthogonal projections (M_JAM, σ_e) and (M_JAM, R_e^maj) of the thin Mass Plane (MP) (M_JAM, σ_e, R_e^maj) which describes the distribution of the galaxy population, where M_JAM = L × (M/L)_JAM ≈ M*. The distribution of galaxy properties on both projections of the MP is characterized by: (i) the same zone of exclusion (ZOE), which can be transformed from one projection to the other using the scalar virial equation. The ZOE is roughly described by two power laws, joined by a break at a characteristic mass M_JAM ≈ 3 × 10^10 M_⊙, which corresponds to the minimum R_e and maximum stellar density.
This results in a break in the mean M_JAM–σ_e relation with trends M_JAM ∝ σ_e^2.3 and M_JAM ∝ σ_e^4.7 at small and large σ_e, respectively; (ii) a characteristic mass M_JAM ≈ 2 × 10^11 M_⊙ which separates a population dominated by flat fast rotators with discs and spiral galaxies at lower masses, from one dominated by quite round slow rotators at larger masses; (iii) below that mass, the distribution of ETGs' properties on the two projections of the MP tends to be constant along lines of roughly constant σ_e, or equivalently along lines with R_e^maj ∝ M_JAM (or even better parallel to the ZOE: R_e^maj ∝ M_JAM^0.75); (iv) it forms a continuous and parallel sequence with the distribution of spiral galaxies; (v) at even lower masses, the distribution of fast-rotator ETGs and late spirals naturally extends to that of dwarf ETGs (Sph) and dwarf irregulars (Im), respectively. We use dynamical models to analyse our kinematic maps. We show that σ_e traces the bulge fraction, which appears to be the main driver for the observed trends in the dynamical (M/L)_JAM and in indicators of the (M/L)_pop of the stellar population like Hβ and colour, as well as in the molecular gas fraction. A similar variation along contours of σ_e is also observed for the mass normalization of the stellar initial mass function (IMF), which was recently shown to vary systematically within the ETGs' population. Our preferred relation has the form log10[(M/L)_stars/(M/L)_Salp] = a + b × log10(σ_e/130 km s^-1) with a = -0.12 ± 0.01 and b = 0.35 ± 0.06.
Unless there are major flaws in all stellar population models, this trend implies a transition of the mean IMF from Kroupa to Salpeter in the interval log10(σ_e/km s^-1) ≈ 1.9–2.5 (or σ_e ≈ 90–290 km s^-1), with a smooth variation in between, consistent with what was shown in Cappellari et al. The observed distribution of galaxy properties on the MP provides a clean and novel view for a number of previously reported trends, which constitute special two-dimensional projections of the more general four-dimensional parameter trends on the MP. We interpret it as due to a combination of two main effects: (i) an increase of the bulge fraction, which increases σ_e, decreases R_e, and greatly enhances the likelihood for a galaxy to have its star formation quenched, and (ii) dry merging, increasing galaxy mass and R_e by moving galaxies along lines of roughly constant σ_e (or steeper), while leaving the population nearly unchanged.

Journal ArticleDOI
01 Apr 2013-Gut
TL;DR: Altered BA transformation in the gut lumen can erase the anti-inflammatory effects of some BA species on gut epithelial cells and could participate in the chronic inflammation loop of IBD.
Abstract: Objective Gut microbiota metabolises bile acids (BA). As dysbiosis has been reported in inflammatory bowel diseases (IBD), we aim to investigate the impact of IBD-associated dysbiosis on BA metabolism and its influence on the epithelial cell inflammation response. Design Faecal and serum BA rates, expressed as a proportion of total BA, were assessed by high-performance liquid chromatography tandem mass spectrometry in colonic IBD patients (42) and healthy subjects (29). The faecal microbiota composition was assessed by quantitative real-time PCR. Using BA profiles and microbiota composition, cluster formation between groups was generated by ranking models. The faecal BA profiles in germ-free and conventional mice were compared. Direct enzymatic activities of BA biotransformation were measured in faeces. The impact of BA on the inflammatory response was investigated in vitro using Caco-2 cells stimulated by IL-1β. Results IBD-associated dysbiosis was characterised by a decrease in the ratio between Faecalibacterium prausnitzii and Escherichia coli. Faecal conjugated BA rates were significantly higher in active IBD, whereas secondary BA rates were significantly lower. Interestingly, active IBD patients exhibited higher levels of faecal 3-OH-sulphated BA. The deconjugation, transformation and desulphation activities of the microbiota were impaired in IBD patients. In vitro, secondary BA exerted anti-inflammatory effects, but sulphation of secondary BAs abolished their anti-inflammatory properties. Conclusions Impaired microbiota enzymatic activity observed in IBD-associated dysbiosis leads to modifications in the luminal BA pool composition. Altered BA transformation in the gut lumen can erase the anti-inflammatory effects of some BA species on gut epithelial cells and could participate in the chronic inflammation loop of IBD.

Journal ArticleDOI
TL;DR: This work presents organic electrochemical transistors with a transconductance in the mS range, outperforming transistors from both traditional and emerging semiconductors.
Abstract: The development of transistors with high gain is essential for applications ranging from switching elements and drivers to transducers for chemical and biological sensing. Organic transistors have become well-established based on their distinct advantages, including ease of fabrication, synthetic freedom for chemical functionalization, and the ability to take on unique form factors. These devices, however, are largely viewed as belonging to the low-end of the performance spectrum. Here we present organic electrochemical transistors with a transconductance in the mS range, outperforming transistors from both traditional and emerging semiconductors. The transconductance of these devices remains fairly constant from DC up to a frequency of the order of 1 kHz, a value determined by the process of ion transport between the electrolyte and the channel. These devices, which continue to work even after being crumpled, are predicted to be highly relevant as transducers in biosensing applications.

Journal ArticleDOI
TL;DR: The poroelastic model is directly validated to explain cellular rheology at physiologically relevant timescales using microindentation tests in conjunction with mechanical, chemical and genetic treatments, and shows that water redistribution through the solid phase of the cytoplasm (cytoskeleton and macromolecular crowders) plays a fundamental role in setting cellular rheology.
Abstract: The cytoplasm is the largest part of the cell by volume and hence its rheology sets the rate at which cellular shape changes can occur. Recent experimental evidence suggests that cytoplasmic rheology can be described by a poroelastic model, in which the cytoplasm is treated as a biphasic material consisting of a porous elastic solid meshwork (cytoskeleton, organelles, macromolecules) bathed in an interstitial fluid (cytosol). In this picture, the rate of cellular deformation is limited by the rate at which intracellular water can redistribute within the cytoplasm. However, direct supporting evidence for the model is lacking. Here we directly validate the poroelastic model to explain cellular rheology at physiologically relevant timescales using microindentation tests in conjunction with mechanical, chemical and genetic treatments. Our results show that water redistribution through the solid phase of the cytoplasm (cytoskeleton and macromolecular crowders) plays a fundamental role in setting cellular rheology.

Book ChapterDOI
18 Aug 2013
TL;DR: In this article, a lattice-based digital signature scheme was proposed that represents an improvement, both in theory and in practice, over today's most efficient lattice primitives.
Abstract: Our main result is a construction of a lattice-based digital signature scheme that represents an improvement, both in theory and in practice, over today’s most efficient lattice schemes. The novel scheme is obtained as a result of a modification of the rejection sampling algorithm that is at the heart of Lyubashevsky’s signature scheme (Eurocrypt, 2012) and several other lattice primitives. Our new rejection sampling algorithm which samples from a bimodal Gaussian distribution, combined with a modified scheme instantiation, ends up reducing the standard deviation of the resulting signatures by a factor that is asymptotically square root in the security parameter. The implementations of our signature scheme for security levels of 128, 160, and 192 bits compare very favorably to existing schemes such as RSA and ECDSA in terms of efficiency. In addition, the new scheme has shorter signature and public key sizes than all previously proposed lattice signature schemes.
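The role of the bimodal Gaussian can be illustrated in one dimension with continuous samples (the actual scheme works with discrete Gaussians over lattice vectors, so this is only a sketch of the mechanism). The candidate z = y ± v is drawn from a two-mode distribution, and accepting with probability 1/cosh(z·v/σ²) turns the accepted output into an exact centred Gaussian, independent of the secret-dependent shift v:

```python
import numpy as np

def bimodal_reject_sample(v, sigma, n, rng):
    """Draw y ~ N(0, sigma), flip a coin b, form z = y + (-1)^b * v.
    The candidate density is proportional to
    exp(-(z^2 + v^2)/(2 sigma^2)) * cosh(z*v/sigma^2),
    so accepting with probability 1/cosh(z*v/sigma^2) leaves an
    accepted density proportional to exp(-z^2/(2 sigma^2)):
    a centred Gaussian that leaks nothing about v."""
    out = []
    while len(out) < n:
        y = rng.normal(0.0, sigma)
        b = rng.integers(2)
        z = y + (1.0 if b else -1.0) * v
        if rng.random() < 1.0 / np.cosh(z * v / sigma**2):
            out.append(z)
    return np.array(out)

samples = bimodal_reject_sample(v=1.5, sigma=1.0, n=20000,
                                rng=np.random.default_rng(4))
```

The acceptance rate works out to exp(-v²/2σ²), a constant independent of z, which is the source of the efficiency gain: the output standard deviation no longer has to be huge relative to the shift to keep rejections rare.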

Journal ArticleDOI
TL;DR: The findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.
Abstract: Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.
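The kind of change-level risk model the abstract describes can be sketched as a logistic regression over per-change metrics. The two features and the labelling rule below are hypothetical stand-ins for the paper's factors and data, used only to show the shape of such a model:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression:
    risk(change) = sigmoid(w . x + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Hypothetical change metrics: [lines added (scaled), developer experience].
# Toy labelling rule for the synthetic data: large changes by
# inexperienced developers are the defect-inducing ones.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = ((X[:, 0] > 0.6) & (X[:, 1] < 0.4)).astype(float)

w, b = train_logreg(X, y)
risk = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((risk > 0.5) == y).mean()
```

Ranking changes by the predicted risk score, rather than thresholding it, is what enables the effort-aware use described in the abstract: reviewers inspect changes from the top of the ranking until their review budget is spent.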

Proceedings ArticleDOI
23 Jun 2013
TL;DR: An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification and state-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.
Abstract: An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.

Book
08 May 2013
TL;DR: In this paper, the authors provide a comprehensive and self-contained study of the theory of the water waves equations, a research area that has been very active in recent years, and propose a simple and robust framework for studying its asymptotic models.
Abstract: This monograph provides a comprehensive and self-contained study on the theory of water waves equations, a research area that has been very active in recent years. The vast literature devoted to the study of water waves offers numerous asymptotic models. Which model provides the best description of waves such as tsunamis or tidal waves? How can water waves equations be transformed into simpler asymptotic models for applications in, for example, coastal oceanography? This book proposes a simple and robust framework for studying these questions. The book should be of interest to graduate students and researchers looking for an introduction to water waves equations or for simple asymptotic models to describe the propagation of waves. Researchers working on the mathematical analysis of nonlinear dispersive equations may also find inspiration in the many (and sometimes new) models derived here, as well as precise information on their physical relevance.

Journal ArticleDOI
TL;DR: The experiments performed with this “photon box” at École Normale Supérieure (ENS) belong to the domain of quantum optics called “cavity quantum electrodynamics”, and have led to the demonstration of basic steps in quantum information processing, including the deterministic entanglement of atoms and the realization of quantum gates using atoms and photons as quantum bits.
Abstract: Microwave photons trapped in a superconducting cavity constitute an ideal system to realize some of the thought experiments imagined by the founding fathers of quantum physics. The interaction of these trapped photons with Rydberg atoms crossing the cavity illustrates fundamental aspects of measurement theory. The experiments performed with this “photon box” at École Normale Supérieure (ENS) belong to the domain of quantum optics called “cavity quantum electrodynamics”. We have realized the nondestructive counting of photons, the recording of field quantum jumps, the preparation and reconstruction of “Schrödinger cat” states of radiation and the study of their decoherence, which provides a striking illustration of the transition from the quantum to the classical world. These experiments have also led to the demonstration of basic steps in quantum information processing, including the deterministic entanglement of atoms and the realization of quantum gates using atoms and photons as quantum bits. This lecture starts with an introduction stressing the connection between the ENS photon box and the ion-trap experiments of David Wineland, whose accompanying lecture recalls his own contribution to the field of single-particle control. I then give a personal account of the early days of cavity quantum electrodynamics before describing the main experiments performed at ENS during the last 20 years, and conclude with a discussion comparing our work to other research dealing with the control of single quantum particles.

Journal ArticleDOI
TL;DR: In this article, the authors reanalysed 31 primary data sets as a single large sample (N = 2876) to provide a more definitive view of the association between bipolar disorder and cognitive impairment.
Abstract: Objective: An association between bipolar disorder and cognitive impairment has repeatedly been described, even for euthymic patients. Findings are inconsistent both across primary studies and previous meta-analyses. This study reanalysed 31 primary data sets as a single large sample (N = 2876) to provide a more definitive view. Method: Individual patient and control data were obtained from original authors for 11 measures from four common neuropsychological tests: California or Rey Verbal Learning Task (VLT), Trail Making Test (TMT), Digit Span and/or Wisconsin Card Sorting Task. Results: Impairments were found for all 11 test-measures in the bipolar group after controlling for age, IQ and gender (Ps ≤ 0.001, E.S. = 0.26-0.63). Residual mood symptoms confound this result but cannot account for the effect sizes found. Impairments also seem unrelated to drug treatment. Some test-measures were weakly correlated with illness severity measures suggesting that some impairments may track illness progression. Conclusion: This reanalysis supports VLT, Digit Span and TMT as robust measures of cognitive impairments in bipolar disorder patients. The heterogeneity of some test results explains previous differences in meta-analyses. Better controlling for confounds suggests deficits may be smaller than previously reported but should be tracked longitudinally across illness progression and treatment.
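The effect sizes reported above (E.S. = 0.26-0.63) are standardized mean differences between patient and control groups. As a minimal sketch of how such a value is computed, the following uses pooled-standard-deviation Cohen's d on hypothetical verbal-learning scores (the data and variable names are illustrative, not from the study):

```python
import math

def cohens_d(group1, group2):
    """Pooled-standard-deviation Cohen's d between two independent samples."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances (divide by n - 1).
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Hypothetical recall scores: controls recall more words than patients.
patients = [42, 45, 39, 48, 44, 41]
controls = [50, 47, 52, 49, 46, 51]
print(round(cohens_d(patients, controls), 2))
```

A meta-analysis of the kind described would compute such a d per study (or here, per test measure on the pooled individual-patient data) and then model them jointly, controlling for covariates such as age, IQ and gender.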

Journal ArticleDOI
TL;DR: The paper presents an efficient Hybrid Genetic Search with Advanced Diversity Control for a large class of time-constrained vehicle routing problems, introducing several new features to manage the temporal dimension.

Journal ArticleDOI
TL;DR: The optimal perimeter control problem for two-region urban cities is formulated with the use of MFDs; results show that the performance of the model predictive control is significantly better than that of a "greedy" feedback control.
Abstract: Recent analysis of empirical data from cities showed that a macroscopic fundamental diagram (MFD) of urban traffic provides for homogenous network regions a unimodal low-scatter relationship between network vehicle density and network space-mean flow. In this paper, the optimal perimeter control for two-region urban cities is formulated with the use of MFDs. The controllers operate on the border between the two regions and manipulate the percentages of flows that transfer between the two regions such that the number of trips that reach their destinations is maximized. The optimal perimeter control problem is solved by model predictive control, where the prediction model and the plant (reality) are formulated by MFDs. Examples are presented for different levels of congestion in the regions of the city and the robustness of the controller is tested for different sizes of error in the MFDs and different levels of noise in the traffic demand. Moreover, two methods for smoothing the control sequences are presented. Comparison results show that the performances of the model predictive control are significantly better than a “greedy” feedback control. The results in this paper can be extended to develop efficient hierarchical control strategies for heterogeneously congested cities.
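To make the two-region setup concrete, here is a toy discrete-time simulation of perimeter control with the simple "greedy" feedback baseline mentioned above. This is not the paper's MPC formulation; the parabolic MFD shape, demands, split ratios and thresholds are all hypothetical:

```python
def mfd(n, n_jam=1000.0, g_max=50.0):
    """Hypothetical parabolic MFD: trip completion rate [veh/s] vs accumulation,
    peaking at g_max when n = n_jam / 2 and vanishing at n = 0 and n = n_jam."""
    if n <= 0 or n >= n_jam:
        return 0.0
    return 4 * g_max * (n / n_jam) * (1 - n / n_jam)

def step(n1, n2, u12, u21, d1=4.0, d2=4.0, dt=1.0):
    """One Euler step of the two-region accumulation dynamics.
    u12, u21 in [0, 1] are the perimeter-control transfer fractions;
    30% of each region's production is assumed bound for the other region."""
    f12 = u12 * 0.3 * mfd(n1)   # controlled transfer flow from region 1 to 2
    f21 = u21 * 0.3 * mfd(n2)   # controlled transfer flow from region 2 to 1
    out1 = 0.7 * mfd(n1)        # internal trips finishing inside region 1
    out2 = 0.7 * mfd(n2)
    n1 += dt * (d1 + f21 - f12 - out1)
    n2 += dt * (d2 + f12 - f21 - out2)
    return max(n1, 0.0), max(n2, 0.0)

# Greedy feedback: restrict inflow to whichever region is past a critical
# accumulation; region 1 starts congested, region 2 does not.
n1, n2, n_crit = 700.0, 300.0, 500.0
for _ in range(600):
    u12 = 0.2 if n2 > n_crit else 1.0
    u21 = 0.2 if n1 > n_crit else 1.0
    n1, n2 = step(n1, n2, u12, u21)
```

An MPC controller would instead optimize the sequence of (u12, u21) over a prediction horizon of this same MFD model, maximizing completed trips rather than reacting to the instantaneous accumulations.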

Book
01 Jun 2013
TL;DR: This book treats granular media across their different states, covering interactions at the grain level, the statics, elasticity and plasticity of granular solids, granular gases, granular liquids, immersed granular media, and erosion and sediment transport.
Abstract: Contents: Foreword; 1. Introduction; 2. Interactions at the grain level; 3. The granular solid: statics and elasticity; 4. The granular solid: plasticity; 5. Granular gases; 6. The granular liquid; 7. Immersed granular media; 8. Erosion and sediment transport; References; Index.

Proceedings ArticleDOI
21 Jul 2013
TL;DR: In this article, the authors present a novel average-value model (AVM) for efficient and accurate representation of a detailed MMC-HVDC system; they also develop a detailed 401-level MMC-HVDC model to validate the AVM and study the performance of both models when integrated into a large 400 kV transmission system in Europe.
Abstract: Voltage Source Converter (VSC) technologies present a bright opportunity in a variety of fields within the power system industry. New Modular Multilevel Converters (MMCs) are expected to supersede two- and three-level VSC-based technologies for HVDC applications due to their recognized advantages in terms of scalability, performance and efficiency. Computational burden introduced by detailed modeling of MMC-HVDC systems in EMT-type programs complicates the study of transients especially when such systems are integrated into a large network. This paper presents a novel average-value model (AVM) for efficient and accurate representation of a detailed MMC-HVDC system. It also develops a detailed 401-level MMC-HVDC model for validating the AVM and studies the performance of both models when integrated into a large 400 kV transmission system in Europe. The results show that the AVM is significantly more efficient while maintaining its accuracy for the dynamic response of the overall system.

Journal ArticleDOI
09 Aug 2013-Science
TL;DR: A quantitative single-cell approach to characterize protein spatiotemporal organization, with single-molecule sensitivity in live eukaryotic cells, suggests that transient crowding of enzymes may aid in rate-limiting steps of gene regulation.
Abstract: Transcription is reported to be spatially compartmentalized in nuclear transcription factories with clusters of RNA polymerase II (Pol II). However, little is known about when these foci assemble or their relative stability. We developed a quantitative single-cell approach to characterize protein spatiotemporal organization, with single-molecule sensitivity in live eukaryotic cells. We observed that Pol II clusters form transiently, with an average lifetime of 5.1 (± 0.4) seconds, which refutes the notion that they are statically assembled substructures. Stimuli affecting transcription yielded orders-of-magnitude changes in the dynamics of Pol II clusters, which implies that clustering is regulated and plays a role in the cell's ability to effect rapid response to external signals. Our results suggest that transient crowding of enzymes may aid in rate-limiting steps of gene regulation.

Journal ArticleDOI
TL;DR: This article takes a closer look at the concepts behind 64 remarkable metaheuristics, selected objectively for their outstanding performance on 15 classic MAVRPs with different attributes, leading to the identification of "winning strategies" in designing effective heuristics for MAVRPs.

Journal ArticleDOI
TL;DR: The Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture is introduced as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots.

Journal ArticleDOI
TL;DR: An approach is developed that treats morality as an adaptation to an environment in which individuals competed to be chosen and recruited for mutually advantageous cooperative interactions; in such an environment, the best strategy is to treat others with impartiality and to share the costs and benefits of cooperation equally.
Abstract: What makes humans moral beings? This question can be understood either as a proximate "how" question or as an ultimate "why" question. The "how" question is about the mental and social mechanisms that produce moral judgments and interactions, and has been investigated by psychologists and social scientists. The "why" question is about the fitness consequences that explain why humans have morality, and has been discussed by evolutionary biologists in the context of the evolution of cooperation. Our goal here is to contribute to a fruitful articulation of such proximate and ultimate explanations of human morality. We develop an approach to morality as an adaptation to an environment in which individuals were in competition to be chosen and recruited in mutually advantageous cooperative interactions. In this environment, the best strategy is to treat others with impartiality and to share the costs and benefits of cooperation equally. Those who offer less than others will be left out of cooperation; conversely, those who offer more will be exploited by their partners. In line with this mutualistic approach, the study of a range of economic games involving property rights, collective actions, mutual help and punishment shows that participants' distributions aim at sharing the costs and benefits of interactions in an impartial way. In particular, the distribution of resources is influenced by effort and talent, and the perception of each participant's rights on the resources to be distributed.

Book ChapterDOI
18 Aug 2013
TL;DR: This paper describes a different construction that works over the integers instead of ideal lattices, similar to the DGHV fully homomorphic encryption scheme, together with a different technique for proving the full randomization of encodings, using the classical leftover hash lemma over a quotient lattice.
Abstract: Extending bilinear elliptic curve pairings to multilinear maps is a long-standing open problem. The first plausible construction of such multilinear maps has recently been described by Garg, Gentry and Halevi, based on ideal lattices. In this paper we describe a different construction that works over the integers instead of ideal lattices, similar to the DGHV fully homomorphic encryption scheme. We also describe a different technique for proving the full randomization of encodings: instead of Gaussian linear sums, we apply the classical leftover hash lemma over a quotient lattice. We show that our construction is relatively practical: for reasonable security parameters a one-round 7-party Diffie-Hellman key exchange requires less than 40 seconds per party. Moreover, in contrast with previous work, multilinear analogues of useful, base group assumptions like DLIN appear to hold in our setting.
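The DGHV scheme that the construction resembles hides a bit inside an integer close to a multiple of a secret odd number, so that addition and multiplication of ciphertexts act homomorphically on the hidden bits while the noise stays small. A toy, completely insecure sketch of that idea (the parameter sizes are illustrative only, far below anything cryptographic):

```python
import random

def keygen():
    # Toy secret key: a random odd integer p (a real scheme uses a large
    # secret with carefully chosen bit-lengths; these sizes are illustrative).
    return random.randrange(10**6, 10**7) * 2 + 1

def encrypt(p, m):
    """DGHV-style encryption of a bit m: c = q*p + 2*r + m, with |2r + m| << p."""
    q = random.randrange(10**20, 10**21)
    r = random.randrange(-50, 51)
    return q * p + 2 * r + m

def decrypt(p, c):
    """Reduce c mod p into (-p/2, p/2) to recover the noise 2r + m, take parity."""
    z = c % p
    if z > p // 2:
        z -= p
    return z % 2

p = keygen()
a, b = encrypt(p, 1), encrypt(p, 0)
# Adding/multiplying ciphertexts XORs/ANDs the plaintext bits, as long as
# the accumulated noise stays well below p.
print(decrypt(p, a), decrypt(p, b), decrypt(p, a + b), decrypt(p, a * b))
# → prints 1 0 1 0
```

In the graded-encoding setting of the paper, level-i encodings play an analogous role, and the leftover hash lemma argument mentioned in the abstract is what guarantees that re-randomized encodings reveal nothing beyond the encoded value.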