
Showing papers by "CentraleSupélec published in 2013"


Journal ArticleDOI
TL;DR: The numerical results obtained for various properties, such as atomization energies, weak interactions, hydrogen-bond length optimizations and dissociation energies, and vertical excitation energies, show improved performance of PBE0-1/3 with respect to the widely used PBE0.
Abstract: We analyze the performance of the parameter-free hybrid density functional PBE0-1/3, obtained by combining the PBE generalized-gradient functional with a predefined amount of exact exchange of 1/3, as recently discussed by Cortona [J. Chem. Phys. 136, 086101 (2012), doi:10.1063/1.3690462]. The numerical results that we have obtained for various properties, such as atomization energies (G2-148 dataset), weak interactions (NCB31 dataset), hydrogen-bond length optimizations and dissociation energies (HB10 dataset), and vertical excitation energies, show improved performance of PBE0-1/3 with respect to the widely used PBE0. We therefore propose to use one third as the mixing coefficient for the PBE-based hybrid functional.

102 citations
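For reference, the global-hybrid construction behind the functional discussed above has the standard form below; the only change relative to PBE0 is that the exact-exchange fraction goes from 1/4 to 1/3 (a sketch of the standard expression, not reproduced from the paper):

```latex
E_{xc}^{\text{PBE0-1/3}} = \tfrac{1}{3}\,E_{x}^{\text{HF}} + \tfrac{2}{3}\,E_{x}^{\text{PBE}} + E_{c}^{\text{PBE}}
```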


Journal ArticleDOI
TL;DR: In this article, the authors check the claim that Google Trends data contain enough information to predict future financial index returns, and show that other keywords applied to suitable assets yield robustly profitable strategies, thereby confirming the intuition of Preis et al.
Abstract: We check the claim that Google Trends data contain enough information to predict future financial index returns. We first discuss the many subtle (and less subtle) biases that may affect the back-test of a trading strategy, particularly when based on such data. Expectedly, the choice of keywords is crucial: using an industry-grade back-testing system, we verify that random finance-related keywords do not contain more exploitable predictive information than random keywords related to illnesses, classic cars and arcade games. We show, however, that other keywords applied to suitable assets yield robustly profitable strategies, thereby confirming the intuition of Preis et al. (2013).

33 citations
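A minimal back-test sketch in the spirit of the moving-average rule of Preis et al. (2013) referenced above. The search-volume and return series are synthetic placeholders, and the window length and sign convention (rising interest leads to a short position) are illustrative assumptions, not the paper's industry-grade back-testing system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 520
search_volume = rng.lognormal(mean=0.0, sigma=0.3, size=n_weeks)   # hypothetical Google Trends series
asset_returns = rng.normal(loc=0.0005, scale=0.02, size=n_weeks)   # hypothetical weekly asset returns

def backtest(volume, returns, k=3):
    """Cumulative P&L of the moving-average rule with look-back window k."""
    pnl = []
    for t in range(k, len(returns) - 1):
        signal = volume[t] - volume[t - k:t].mean()
        position = -1.0 if signal > 0 else 1.0      # rising search interest -> short next week
        pnl.append(position * returns[t + 1])
    return np.cumsum(pnl)

curve = backtest(search_volume, asset_returns)
print(f"final cumulative return: {curve[-1]:+.3f}")
```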


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work uses Shift-Invariant Sparse Coding (SISC) to learn mid-level elements that can translate during coding, which results in systematically better approximations than those attained using standard sparse coding.
Abstract: We present a method to identify and exploit structures that are shared across different object categories, by using sparse coding to learn a shared basis for the 'part' and 'root' templates of Deformable Part Models (DPMs). Our first contribution consists in using Shift-Invariant Sparse Coding (SISC) to learn mid-level elements that can translate during coding. This results in systematically better approximations than those attained using standard sparse coding. To emphasize that the learned mid-level structures are shiftable, we call them shufflets. Our second contribution consists in using the resulting score to construct probabilistic upper bounds to the exact template scores, instead of taking them 'at face value' as is common in current works. We integrate shufflets in Dual-Tree Branch-and-Bound and cascade-DPMs and demonstrate that we can achieve a substantial acceleration, with practically no loss in performance.

21 citations


Posted Content
07 Oct 2013
TL;DR: This paper exploits random matrix theory to derive a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) for each user based on channel statistics and provides an optimization algorithm to approximate the weights that maximize the network-wide weighted max-min fairness.
Abstract: Large-scale MIMO systems can yield a substantial improvement in spectral efficiency for future communication systems. Due to the finer spatial resolution achieved by a huge number of antennas at the base stations, these systems have been shown to be robust to inter-user interference and the use of linear precoding is asymptotically optimal. However, from a practical point of view, most precoding schemes exhibit prohibitively high computational complexity as the system dimensions increase. For example, the near-optimal regularized zero forcing (RZF) precoding requires the inversion of a large matrix. This motivated our companion paper, where we proposed to solve the issue in single-cell multi-user systems by approximating the matrix inverse by a truncated polynomial expansion (TPE), where the polynomial coefficients are optimized to maximize the system performance. We have shown that the proposed TPE precoding with a small number of coefficients reaches almost the performance of RZF but never exceeds it. In a realistic multi-cell scenario involving large-scale multi-user MIMO systems, the optimization of RZF precoding has thus far not been feasible. This is mainly attributed to the high complexity of the scenario and the non-linear impact of the necessary regularizing parameters. On the other hand, the scalar weights in TPE precoding give hope for possible throughput optimization. Following the same methodology as in the companion paper, we exploit random matrix theory to derive a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) for each user based on channel statistics. We also provide an optimization algorithm to approximate the weights that maximize the network-wide weighted max-min fairness. The optimization weights can be used to mimic the user throughput distribution of RZF precoding. Using simulations, we compare the network throughput of the proposed TPE precoding with that of the suboptimal RZF scheme and show that our scheme can achieve higher throughput using a TPE order of only 3.

17 citations
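A toy numerical sketch of the truncated polynomial expansion (TPE) idea described above: the matrix inverse in RZF precoding is replaced by a low-order matrix polynomial. The coefficients below come from a plain Neumann series rather than the SINR-optimized weights of the paper, and the system dimensions and regularization are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, reg = 64, 8, 0.1                       # BS antennas, users, regularization (illustrative)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

B = H.conj().T @ H + reg * np.eye(M)         # M x M matrix whose inverse RZF precoding requires
# Only the eigenvalues of B on the user subspace matter for the precoder,
# so the series scaling is taken from the K x K Gram matrix spectrum.
eigs = np.linalg.eigvalsh(H @ H.conj().T).real + reg
alpha = 2.0 / (eigs.min() + eigs.max())

def tpe_inverse(B, alpha, order):
    """Approximate inv(B) by a degree-(order-1) matrix polynomial (Neumann series)."""
    T = np.eye(B.shape[0]) - alpha * B
    approx, term = np.zeros_like(B), np.eye(B.shape[0])
    for _ in range(order):
        approx = approx + term
        term = term @ T
    return alpha * approx

rzf = np.linalg.inv(B) @ H.conj().T          # exact RZF precoder (up to power normalization)
for J in (1, 2, 3, 5, 10):
    tpe = tpe_inverse(B, alpha, J) @ H.conj().T
    err = np.linalg.norm(tpe - rzf) / np.linalg.norm(rzf)
    print(f"TPE order {J:2d}: relative precoder error {err:.3f}")
```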


Journal Article
TL;DR: This study considers historical values of wind power for predicting future values taking into account both the variability in the input and the uncertainty in the model structure.
Abstract: Wind speed uncertainty, and the variability in the physical and operating characteristics of turbines have a significant impact on power system operations such as regulation, load following, balancing, unit commitment and scheduling. In this study, we consider historical values of wind power for predicting future values taking into account both the variability in the input and the uncertainty in the model structure. Uncertainty in the hourly wind power input is presented as intervals of within-hour variability. A Neural Network (NN) is trained on the interval-valued inputs to provide prediction intervals (PIs) in output. A multi-objective genetic algorithm (namely, non-dominated sorting genetic algorithm–II (NSGA-II)) is used to train the NN. A multi-objective framework is adopted to find PIs which are optimal for accuracy (coverage probability) and efficacy (width).

12 citations
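A sketch of the two competing objectives that the NSGA-II search described above trades off: empirical coverage probability of the prediction intervals and their mean width. The interval-valued neural network itself is not reproduced; the arrays below are hypothetical placeholders.

```python
import numpy as np

def coverage_probability(y, lower, upper):
    """Fraction of observations falling inside their prediction interval (to maximize)."""
    return np.mean((y >= lower) & (y <= upper))

def mean_interval_width(lower, upper):
    """Average prediction-interval width (to minimize)."""
    return np.mean(upper - lower)

rng = np.random.default_rng(2)
y = rng.normal(size=200)                                # hypothetical hourly wind-power observations
center = y + rng.normal(scale=0.3, size=200)            # hypothetical point forecasts
half_width = np.abs(rng.normal(0.5, 0.2, size=200))     # hypothetical interval half-widths
lower, upper = center - half_width, center + half_width

print(f"coverage = {coverage_probability(y, lower, upper):.1%}, "
      f"mean width = {mean_interval_width(lower, upper):.3f}")
```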


Proceedings ArticleDOI
07 Jan 2013
TL;DR: In this paper, a 20-ns high-voltage pulse is applied across two pin-shaped electrodes at a frequency of 10 kHz, with an energy of 2 mJ per pulse.
Abstract: Nanosecond Repetitively Pulsed (NRP) discharges in atmospheric-pressure water vapor at 450 K are studied with time-resolved optical emission spectroscopy (OES). A 20-ns high-voltage pulse is applied across two pin-shaped electrodes at a frequency of 10 kHz, with an energy of 2 mJ per pulse. Emission of OH(A-X) as well as atomic states of O and H is observed. The emission of these species increases during the 20-ns pulse, then decreases. After about 150 ns, we observe again a strong increase of the emission of these species. To determine the gas temperature, we add a small amount (1%) of molecular nitrogen to the flow of water vapor. The rotational temperature measured from the N2(C³Πu-B³Πg) second positive system is compared with the rotational temperature measured from the OH(A-X) transition. The electron density is obtained from the Stark broadening of the hydrogen Balmer-beta emission line at 486 nm. The electron number density increases to about 6 × 10^15 cm^-3 during the pulse, then decays to 10^14 cm^-3 after 150 ns. But then a surprising behavior occurs: the Full-Width at Half-Maximum (FWHM) of this emission line increases again sharply, with no electric field applied, up to 5 nm, and then decays slowly to 1 nm over the next microsecond.

9 citations


Journal ArticleDOI
TL;DR: A new approach for image filtering in a Bayesian framework where the probability density function of the likelihood function is approximated using the concept of non-parametric or kernel estimation, based on the generalized Gaussian Markov random fields.
Abstract: We introduce a new approach for image filtering in a Bayesian framework. In this case the probability density function (pdf) of the likelihood is approximated using the concept of non-parametric or kernel estimation. The method is based on the generalized Gaussian Markov random fields (GGMRF), a class of Markov random fields used as prior information in the Bayesian rule, whose principal objective is to eliminate the effects caused by excessive smoothness in the reconstruction of images that are rich in contours or edges. According to the hypothesis made for the present work, only limited knowledge of the noise pdf is assumed, so the idea is to use a non-parametric estimator to estimate this pdf and then apply the entropy to construct the cost function for the likelihood term. This idea leads to the construction of maximum a posteriori (MAP) robust estimators, since real systems are always exposed to continuous perturbations of unknown nature. Some promising results of three new MAP entropy estimators (MAPEE) for image filtering are presented, together with some concluding remarks.

7 citations
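A minimal sketch of the kernel (non-parametric) noise model used by the MAP entropy estimators described above: the noise pdf is estimated from samples and its negative log serves as the data-fidelity term. The GGMRF prior and the full MAP optimization are omitted, and the data, noise distribution and bandwidth below are assumptions.

```python
import numpy as np

def kde_logpdf(x, samples, bandwidth=0.1):
    """Gaussian-kernel density estimate of log p(x) from 1-D noise samples."""
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return np.log(kernel.mean(axis=1) / bandwidth + 1e-300)

rng = np.random.default_rng(3)
noise_samples = rng.laplace(scale=0.2, size=2000)     # non-Gaussian noise of "unknown" pdf (hypothetical)
residuals = rng.laplace(scale=0.2, size=5)            # residuals y - A x for a candidate image x

data_fidelity = -np.sum(kde_logpdf(residuals, noise_samples))   # kernel-based likelihood cost
print(f"kernel-based data-fidelity cost: {data_fidelity:.3f}")
```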


Posted Content
TL;DR: In this article, a Lyapunov-based homogeneous controller for the stabilization of a perturbed chain of integrators of arbitrary order is presented, and the advantages of controlling the homogeneity degree of the controller are discussed.
Abstract: In this paper, we present a Lyapunov-based homogeneous controller for the stabilization of a perturbed chain of integrators of arbitrary order $r \geq 1$. The proposed controller is based on a homogeneous controller for the stabilization of pure integrator chains. The advantages of controlling the homogeneity degree of the controller are also discussed. A bounded controller with minimum amplitude of discontinuous control and a controller with fixed-time convergence are synthesized by controlling the homogeneity degree, and their performances are shown in simulations. It is demonstrated that the homogeneous arbitrary-order HOSM controller of Levant (2001) is a particular case of our controller.

6 citations
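A simulation sketch of a homogeneous state feedback on a perturbed chain of two integrators (r = 2), in the spirit of the controllers discussed above. The feedback is the classical finite-time law for the double integrator with exponents 1/3 and 1/2; the gains, the perturbation and the explicit Euler step are illustrative choices, not the paper's Lyapunov-based design.

```python
import numpy as np

def simulate(x0=(1.0, -0.5), k1=2.0, k2=3.0, dt=1e-3, t_end=10.0):
    """Euler simulation of x1' = x2, x2' = u + d under a homogeneous feedback."""
    x1, x2 = x0
    for step in range(int(t_end / dt)):
        t = step * dt
        u = -k1 * np.sign(x1) * abs(x1) ** (1 / 3) - k2 * np.sign(x2) * abs(x2) ** (1 / 2)
        d = 0.1 * np.sin(5 * t)            # bounded matched perturbation
        x1, x2 = x1 + dt * x2, x2 + dt * (u + d)
    return x1, x2

print("final state:", simulate())
```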


Book ChapterDOI
01 Dec 2013

5 citations


Posted Content
20 Jul 2013
TL;DR: In this paper, the authors provided a precise empirical study of the interdealer spot market and found an unusual shape for the average book and a bimodal spread distribution, and argued that the coexistence of manual traders and algorithmic traders who react differently to the new tick size leads to a strong price clustering property in all types of orders.
Abstract: Using a new high frequency data set we provide a precise empirical study of the interdealer spot market. We check that the main stylized facts of financial time series also apply to the FX market: fat-tailed distribution of returns, aggregational normality and volatility clustering. We report two standard microstructure phenomena: microstructure noise effects in the signature plot and the Epps effect. We find an unusual shape for the average book and a bimodal spread distribution. We construct the order flow and analyse its main characteristics: volume, placement, arrival intensity and sign. Many quantities have been dramatically affected by the decrease of the tick size in March 2011. We argue that the coexistence of manual traders and algorithmic traders, who react differently to the new tick size, leads to a strong price clustering property in all types of orders, thus affecting price formation.

5 citations
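A sketch of the signature-plot diagnostic mentioned above: realized variance computed from returns sampled every k ticks. With i.i.d. microstructure noise added to a latent efficient price, the realized variance inflates at the finest sampling frequencies. The price path is simulated, not the paper's high-frequency FX data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
efficient = np.cumsum(rng.normal(0, 1e-4, n))          # latent efficient log-price
observed = efficient + rng.normal(0, 2e-4, n)          # observed price with microstructure noise

for k in (1, 5, 20, 100, 500):
    returns = np.diff(observed[::k])                   # returns sampled every k ticks
    print(f"sampling every {k:4d} ticks: realized variance = {np.sum(returns**2):.4f}")
```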


Proceedings ArticleDOI
29 Jul 2013
TL;DR: A computational formalism is presented that structures a C++ library which aims at the modelling, simulation and statistical analysis of stochastic non-linear discrete dynamical system models.
Abstract: A computational formalism is presented that structures a C++ library which aims at the modelling, simulation and statistical analysis of stochastic non-linear discrete dynamical system models. Applications concern the development and analysis of general plant growth models.

Journal ArticleDOI
14 Mar 2013
TL;DR: In this article, the photorefractive effect in lithium niobate (LN) crystals is investigated, taking into account the dependence of the electro-optic properties on the defect structure of the doped crystals.
Abstract: The photorefractive effect in lithium niobate (LN) crystals is one of the main drawbacks for its integration in optoelectronic devices operating at high light intensity. Doping congruent LN crystals with appropriate dopants, such as divalent Mg2+ ions, is known, at specific concentrations, to improve their optical damage resistance. We present experimental measurements of the photorefractive damage in a series of magnesium-doped congruent lithium niobate crystals, performed with two experimental techniques: the first based on the time evolution of the photoinduced distortion, the second on the direct measurement of the time evolution of the photoinduced birefringence. The dependence of the photorefractivity and of the photosensitivity on power and dopant concentration has been investigated and discussed, taking into account the dependence of the electro-optic properties on the defect structure of the doped crystals. We conclude that doping above a second threshold concentration with divalent Mg ions leads to a significant decrease of the photorefraction with respect to pure congruent crystals. Doping congruent LN crystals with Mg thus strongly increases the photorefractive damage resistance and, combined with its interesting electro-optic coefficients, makes LN:Mg an interesting alternative to stoichiometric LN for modulating devices, the latter being difficult to grow in high quality and in large quantity.

Proceedings ArticleDOI
21 Aug 2013
TL;DR: This work takes sparsity into account via a scale-mixture prior, more precisely a Student-t model, to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA).
Abstract: Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We choose to take sparsity into account via a scale-mixture prior, more precisely a Student-t model. The joint posterior of the unknowns and the hidden variables of the mixture is approximated via the VBA. Classically, this approximation is computed with an alternating algorithm, but this method is not the most efficient. Recently, other optimization algorithms have been proposed; indeed, classical iterative optimization algorithms such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main object of this work is to present these three algorithms and a numerical comparison of their performances.

01 Jan 2013
TL;DR: The objective of this paper is to propose a global safety management method based on well-known safety methods, in order to organize the different tasks to make the system safe.
Abstract: In systems engineering, one of the most critical processes is requirement management, particularly when it deals with safety requirements. These are non-functional requirements and are related to emergent properties, which arise from the integration of the different system components. They must be identified as soon as possible, because they are criteria for validating (or rejecting) the system, which can require changes in the system architecture. Moreover, they are formulated at system level and need to be broken down to sub-system level. The objective of this paper is to propose a global safety management method based on well-known safety methods, in order to organize the different tasks needed to make the system safe. The method focuses mainly on the definition of the system safety requirements following risk and hazard analysis, and also on their decomposition according to a top-down approach. It is based on the well-known Failure Mode, Effects, and Criticality Analysis (FMECA) and on the use of Fault Trees and Event Trees.

Dissertation
17 Jan 2013
TL;DR: In this work, an approach based on Convolution Quadrature (CQ) is presented, namely the hybrid Laplace-time (L-T) method.
Abstract: This work details a computational approach for solving dynamic problems that combines time-domain and Laplace-domain discretizations through a substructuring technique. In particular, the method developed addresses the industrial need to perform three-dimensional dynamic computations for seismic risk while accounting for non-linear soil-structure interaction (SSI) effects. Two subdomains are considered in this problem. On the one hand, the linear, unbounded soil domain is modelled by a boundary impedance discretized in the Laplace domain by means of a boundary element method; on the other hand, the superstructure, which refers not only to the structure and its foundation but possibly also to a portion of soil exhibiting non-linear behaviour. The latter subdomain is formulated in the time domain and discretized with the finite element (FE) method. In this framework, the SSI forces take the form of a time convolution integral whose kernel is the inverse Laplace transform of the soil impedance matrix. To evaluate this convolution in the time domain from a soil impedance defined in the Laplace domain, an approach based on Convolution Quadrature (CQ) is presented: the hybrid Laplace-time (L-T) method. The numerical stability of its coupling with a Newmark-type time integration scheme is then studied on several SSI models in linear and non-linear dynamics. Finally, the L-T method is tested on a more complex numerical model, close to an industrial seismic application, and satisfactory results are obtained with respect to the reference solutions.

Journal ArticleDOI
Ibrahim Baydoun
TL;DR: In this article, the authors study the Dirichlet-to-Neumann formalism for Laplacian transport in isotropic media and show that the Green-Ostrogradski theorem can be adapted to this type of problem in the three-dimensional case.
Abstract: We study Laplacian transport via the Dirichlet-to-Neumann formalism in isotropic media (γ = I). Our main results concern the solution of the localisation inverse problem for absorbing domains and its associated Dirichlet-to-Neumann operator. In this paper, we define this operator explicitly, and we show that the Green-Ostrogradski theorem can be adapted to this type of problem in the three-dimensional case.

29 Sep 2013
TL;DR: In this article, the authors considered a microgrid composed of a middle-size train station with integrated photovoltaic power production, a small urban-sized wind power plant and a residential district, and optimized its energy management in presence of uncertainties in the environment and mechanical failures.
Abstract: Microgrids of electricity distribution can "smartly" improve local reliability and power quality, while moderating local greenhouse gas emissions and costs of power supply by the exploitation of renewable sources and storage. In this paper, we consider a microgrid composed of a middle-size train station with integrated photovoltaic power production, a small urban-sized wind power plant and a residential district, and optimize its energy management in presence of uncertainties in the environment and mechanical failures. We use Agent-Based Modeling (ABM) and Robust Optimization (RO), and evaluate system performance in terms of typical reliability (adequacy) indicators for energy systems such as Loss of Load Expectation (LOLE) and Loss of Expected Energy (LOEE).
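A Monte Carlo sketch of the two adequacy indicators named above, LOLE and LOEE, on a toy microgrid. The generation, outage and load models are placeholder distributions, not the agent-based / robust-optimization model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_scenarios, hours = 200, 8760

lole_samples, loee_samples = [], []
for _ in range(n_scenarios):
    pv = np.clip(rng.normal(0.3, 0.2, hours), 0, None)      # MW, hypothetical PV output
    wind = np.clip(rng.normal(0.5, 0.3, hours), 0, None)    # MW, hypothetical wind output
    available = (pv + wind) * rng.binomial(1, 0.98, hours)  # random mechanical outages
    load = np.clip(rng.normal(0.7, 0.15, hours), 0, None)   # MW, hypothetical demand
    deficit = np.clip(load - available, 0, None)
    lole_samples.append(np.count_nonzero(deficit))          # hours with unmet load
    loee_samples.append(deficit.sum())                      # MWh not supplied

print(f"LOLE ~ {np.mean(lole_samples):.1f} h/year, LOEE ~ {np.mean(loee_samples):.1f} MWh/year")
```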

01 Jan 2013
TL;DR: In this paper, the authors considered the problem of local stability of bilinear systems with aperiodic sampled-data linear state feedback control, and showed that the feasibility of some linear matrix inequalities implies the local asymptotic stability of the sampled data system in an ellipsoidal region containing the equilibrium.
Abstract: This note considers the problem of local stability of bilinear systems with aperiodic sampled-data linear state feedback control. The sampling intervals are time-varying and upper bounded. It is shown that the feasibility of some linear matrix inequalities (LMIs) implies the local asymptotic stability of the sampled-data system in an ellipsoidal region containing the equilibrium. The method is based on the analysis of contractive invariant sets and is inspired by dissipativity theory. The results are illustrated by means of numerical examples.

Journal ArticleDOI
Ibrahim Baydoun
TL;DR: In this article, a conformal mapping technique is adapted to this type of problem in the two-dimensional case, and sufficient Dirichlet-to-Neumann conditions are found ensuring that the localisation inverse problem is uniquely soluble.
Abstract: We study the localisation inverse problem corresponding to Laplacian transport in an absorbing cell. Our main goal is to find sufficient Dirichlet-to-Neumann conditions ensuring that this inverse problem is uniquely soluble. In this paper, we show that the conformal mapping technique can be adapted to this type of problem in the two-dimensional case.

Journal ArticleDOI
TL;DR: In this article, the authors study the conditions of industrial investment in low-carbon technologies over the next 30 years and find that these conditions can be favorable or unfavorable to renewable energies on the one hand and to nuclear technologies on the other, according to three main dynamically quantified drivers: 1. technical change, i.e. the relative evolution of the efficiency and costs of available technologies (gas, coal, wind...); 2. the incentive framework given by European energy policies (nuclear, climate...); 3. the structure of electricity markets (level of centralization...).
Abstract: In a context of carbon emissions reduction, this article aims at widening the scope of the OECD Nuclear Energy Agency (NEA) report entitled "The Interaction of Nuclear Energy and Renewables: System Effects in Low Carbon Electricity Systems" (2012) to the European electricity supply by studying the conditions of industrial investment in low-carbon technologies over the next 30 years. These conditions can be favorable or unfavorable, on the one hand, to renewable energies and, on the other hand, to nuclear technologies, according to 3 main dynamically quantified drivers: 1. "Technical change", i.e. the relative evolution of the efficiency and costs of available technologies (gas, coal, wind...); 2. "Policy", i.e. the incentive framework given by European energy policies (nuclear, climate...); 3. "Economic", i.e. the structure of electricity markets (level of centralization...). A total of 24 scenarios are developed using an imaginative approach, i.e. assuming different possibilities for the future evolution of the 3 main drivers. We find that 2 of these scenarios prove to be the most favorable to renewable energies and 2 are favorable to both renewables and nuclear, as the interaction of nuclear and renewables in the electricity system is not necessarily favorable to nuclear investment. These scenarios are then discussed in view of the quantitative drivers mentioned above.

Journal ArticleDOI
TL;DR: An alternative method for digital image restoration in a Bayesian framework is introduced; in particular, the use of a new half-quadratic function is proposed, whose performance is satisfactory compared with other functions in the existing literature.
Abstract: The present work introduces an alternative method for digital image restoration in a Bayesian framework; in particular, the use of a new half-quadratic function is proposed, whose performance is satisfactory compared with other functions in the existing literature. The Bayesian methodology is based on prior knowledge of some information that allows an efficient modelling of the image acquisition process. An adequate model must preserve the edges of objects in the image while smoothing noise. Thus, we use a convexity criterion given by a semi-Huber function to obtain adequate weighting of the (half-quadratic) cost functions to be minimized. The principal objective when using Bayesian methods based on Markov Random Fields (MRF) in the context of image processing is to eliminate the effects caused by excessive smoothness in the reconstruction of images that are rich in contours or edges. A comparison between the newly introduced scheme and three existing schemes, for the cases of noise filtering and image deblurring, is presented. This collection of implemented methods is of course inspired by the use of MRFs, such as the semi-Huber, generalized Gaussian, Welch, and Tukey potential functions with granularity control. The obtained results show a satisfactory performance and the effectiveness of the proposed estimator with respect to the other three estimators.
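An illustration of the half-quadratic mechanism discussed above: an edge-preserving potential rho(t) is minimized through iteratively reweighted quadratic problems with weights rho'(t)/(2t). The classical Huber potential is used here as a stand-in; the paper's semi-Huber function has its own specific form, which is not reproduced.

```python
import numpy as np

def huber(t, delta=1.0):
    """Edge-preserving potential: quadratic near zero, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= delta, t**2, 2 * delta * a - delta**2)

def half_quadratic_weight(t, delta=1.0):
    """Weight rho'(t) / (2 t) used in the half-quadratic (reweighted) updates."""
    a = np.maximum(np.abs(t), 1e-12)
    return np.where(a <= delta, 1.0, delta / a)

t = np.linspace(-5, 5, 11)
print(np.round(huber(t), 2))                 # large differences penalized linearly, not quadratically
print(np.round(half_quadratic_weight(t), 2)) # weights shrink at edges, preserving contours
```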

01 Jan 2013
TL;DR: A conceptual approach to decision support in the design of complex systems, based on the formalism of similarity-based Formal Concept Analysis (ACFS) for the classification, visualisation and exploration of simulation data, in order to help designers of complex systems identify the most relevant design choices.
Abstract: In this article we present a conceptual approach to decision support in the design of complex systems. This approach relies on the formalism of similarity-based Formal Concept Analysis (ACFS) for the classification, visualisation and exploration of simulation data, in order to help designers of complex systems identify the most relevant design choices. The approach is illustrated on a test case, provided by the industrial partners, concerning the design of an airliner cabin: simulation data for different configurations of the cabin ventilation system are studied in order to identify those that ensure adequate comfort for the passengers in the cabin. Classifying the simulation data together with their comfort scores using ACFS makes it possible to identify, for each simulated design parameter, the range of values that ensures adequate passenger comfort. The results obtained have been confirmed and validated by new simulations.

01 Jan 2013
TL;DR: A metamodel for specifying redundable software and hardware architectures that takes into account the constraints on the number of redundant elements, the allowed failures, the execution times and allocation constraints is presented.
Abstract: In this paper, we present a metamodel for specifying redundable software and hardware architectures. This metamodel takes into account the constraints on the number of redundant elements, the number of allowed failures, the execution times and allocation constraints. From such a specification, we generate all possible structural configurations. Then, we check that each of these configurations can be scheduled. This has been implemented as a tool chain relying on Alloy, SynDEx, and model transformations in Eclipse/EMF. This work allows system architects to explore different hardware and software architectures to implement different redundancy policies. It has been applied on a simple case study from the Ariane V launcher.

Journal ArticleDOI
TL;DR: This contribution derives and analyzes the Angular Resolution Limit (ARL) for the scenario of mixed Near-Field and Far-Field sources, and derives the analytic ARL in closed form, with and without the assumption of low noise variance.
Abstract: Passive source localization is a well-known inverse problem in which we convert the observed measurements into information about the directions of arrival. In this paper we focus on the optimal resolution of such a problem. More precisely, we propose in this contribution to derive and analyze the Angular Resolution Limit (ARL) for the scenario of mixed Near-Field (NF) and Far-Field (FF) sources. This scenario is relevant to some realistic situations. We base our analysis on Smith's equation, which involves the Cramer-Rao Bound (CRB). This equation provides the theoretical ARL, which is independent of any specific estimator. Our methodology is the following: first, we derive a closed-form expression of the CRB for the considered problem. Using these expressions, we can rewrite Smith's equation as a 4th-order polynomial by assuming a small separation of the sources. Finally, we derive the analytic ARL in closed form, with and without the assumption of low noise variance. The obtained expression is compact and can provide useful qualitative information on the behavior of the ARL.
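A numerical sketch of the Smith criterion on which the ARL derivation above is based: the resolution limit delta solves delta^2 = CRB(delta). The CRB model below is a made-up placeholder (the paper derives the true closed form for mixed NF/FF sources); only the fixed-point step is illustrated.

```python
import numpy as np

def crb(delta, snr_db=20.0, n_snapshots=100):
    """Hypothetical CRB of the source-separation estimate (placeholder model)."""
    sigma2 = 10 ** (-snr_db / 10)
    return sigma2 / n_snapshots * (1.0 + 0.5 * delta + delta**2)

# Fixed-point iteration on Smith's equation: delta = sqrt(CRB(delta)).
delta = 1e-3
for _ in range(50):
    delta = np.sqrt(crb(delta))
print(f"ARL ~ {delta:.4f} rad for the assumed CRB model")
```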