
Showing papers by "École Normale Supérieure published in 2011"


Journal ArticleDOI
TL;DR: Modules for Experiments in Stellar Astrophysics (MESA) as mentioned in this paper is a suite of open source, robust, efficient, thread-safe libraries for a wide range of applications in computational stellar astrophysics.
Abstract: Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source, robust, efficient, thread-safe libraries for a wide range of applications in computational stellar astrophysics. A one-dimensional stellar evolution module, MESAstar, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very low mass to massive stars, including advanced evolutionary phases. MESAstar solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. State-of-the-art modules provide equation of state, opacity, nuclear reaction rates, element diffusion data, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own explicitly defined public interface to facilitate independent development. Several detailed examples indicate the extensive verification and testing that is continuously performed and demonstrate the wide range of capabilities that MESA possesses. These examples include evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets to very old ages; the complete evolutionary track of a 1 M ☉ star from the pre-main sequence (PMS) to a cooling white dwarf; the solar sound speed profile; the evolution of intermediate-mass stars through the He-core burning phase and thermal pulses on the He-shell burning asymptotic giant branch phase; the interior structure of slowly pulsating B Stars and Beta Cepheids; the complete evolutionary tracks of massive stars from the PMS to the onset of core collapse; mass transfer from stars undergoing Roche lobe overflow; and the evolution of helium accretion onto a neutron star. MESA can be downloaded from the project Web site (http://mesa.sourceforge.net/).
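
For orientation only (the abstract does not spell out the equation set, and MESA's actual formulation includes further terms), the coupled one-dimensional stellar structure equations that a code of this kind solves take the textbook Lagrangian form

\[
\frac{\partial r}{\partial m} = \frac{1}{4\pi r^{2}\rho},\qquad
\frac{\partial P}{\partial m} = -\frac{G m}{4\pi r^{4}},\qquad
\frac{\partial L}{\partial m} = \epsilon_{\rm nuc} - \epsilon_{\nu} + \epsilon_{\rm grav},\qquad
\frac{\partial T}{\partial m} = -\frac{G m T}{4\pi r^{4} P}\,\nabla,
\]

together with one composition equation \(\partial X_i/\partial t = (\text{burning}) + (\text{mixing})\) per species; "fully coupled" means structure and composition are solved simultaneously rather than by alternating between the two sets.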

3,474 citations


Journal ArticleDOI
TL;DR: The proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic; both are about equally accurate, and the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.
Abstract: Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.
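
To make the modelling approach concrete, here is a minimal Monte Carlo sketch (not the authors' code) of downlink coverage when base stations form a homogeneous Poisson point process, the user attaches to the nearest station, and fading is Rayleigh; the density, path-loss exponent and threshold values below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def coverage_probability(lam=1e-5, alpha=4.0, sinr_db=0.0, noise=0.0,
                         radius=2000.0, trials=5000):
    """Monte Carlo estimate of P[SINR > threshold] for a PPP of base stations.

    The typical user sits at the origin; each trial draws a fresh network
    inside a disc large enough that far-away interferers are negligible.
    """
    threshold = 10.0 ** (sinr_db / 10.0)
    area = np.pi * radius ** 2
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * area)
        if n == 0:
            continue                                # no station at all: not covered
        r = radius * np.sqrt(rng.random(n))         # uniform points in the disc
        h = rng.exponential(1.0, n)                 # Rayleigh fading -> exponential power
        p_rx = h * r ** (-alpha)                    # unit transmit power
        signal = p_rx[np.argmin(r)]                 # nearest-station association
        interference = p_rx.sum() - signal
        if signal / (interference + noise) > threshold:
            covered += 1
    return covered / trials

print(coverage_probability())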

3,309 citations


Journal ArticleDOI
18 Nov 2011-Science
TL;DR: In this paper, the authors designed epoxy networks that can rearrange their topology by exchange reactions without depolymerization, and showed that they are insoluble and processable.
Abstract: Permanently cross-linked materials have outstanding mechanical properties and solvent resistance, but they cannot be processed and reshaped once synthesized. Non–cross-linked polymers and those with reversible cross-links are processable, but they are soluble. We designed epoxy networks that can rearrange their topology by exchange reactions without depolymerization and showed that they are insoluble and processable. Unlike organic compounds and polymers whose viscosity varies abruptly near the glass transition, these networks show Arrhenius-like gradual viscosity variations like those of vitreous silica. Like silica, the materials can be wrought and welded to make complex objects by local heating without the use of molds. The concept of a glass made by reversible topology freezing in epoxy networks can be readily scaled up for applications and generalized to other chemistries.

1,901 citations


Journal ArticleDOI
TL;DR: In this article, the physical principles at the basis of this artificial magnetism are presented, and the analysis is generalized to the simulation of non-Abelian gauge potentials and some striking consequences are presented.
Abstract: When a neutral atom moves in a properly designed laser field, its center-of-mass motion may mimic the dynamics of a charged particle in a magnetic field, with the emergence of a Lorentz-like force. In this Colloquium the physical principles at the basis of this artificial (synthetic) magnetism are presented. The corresponding Aharonov-Bohm phase is related to the Berry's phase that emerges when the atom adiabatically follows one of the dressed states of the atom-laser interaction. Some manifestations of artificial magnetism for a cold quantum gas, in particular, in terms of vortex nucleation are discussed. The analysis is then generalized to the simulation of non-Abelian gauge potentials and some striking consequences are presented, such as the emergence of an effective spin-orbit coupling. Both the cases of bulk gases and discrete systems, where atoms are trapped in an optical lattice, are addressed.

1,531 citations


Journal ArticleDOI
TL;DR: In this paper, analytical methods and fractionation techniques for dietary fibres are evaluated with regard to improving physical and structural properties such as hydration, oil-holding capacity, viscosity, texture, sensory characteristics, and shelf-life.

1,263 citations



Journal ArticleDOI
TL;DR: In this article, the most general scalar field theories with second-order field equations are derived from linear combinations of Lagrangians made by multiplying a particular form of the Galileon Lagrangian by an arbitrary scalar function of the field and its first derivatives.
Abstract: We determine the most general scalar field theories which have an action that depends on derivatives of order two or less, and have equations of motion that stay second order and lower on flat space-time. We show that those theories can all be obtained from linear combinations of Lagrangians made by multiplying a particular form of the Galileon Lagrangian by an arbitrary scalar function of the scalar field and its first derivatives. We also obtain curved space-time extensions of those theories which have second-order field equations for both the metric and the scalar field. This provides the most general extension, under the condition that field equations stay second order, of k-essence, Galileons, k-Mouflage as well as of the kinetically braided scalars. It also gives the most general action for a possible scalar classicalizer with second-order field equations. We discuss the relation between our construction and the Euler hierarchies of Fairlie et al. showing, in particular, that Euler hierarchies allow one to obtain the most general theory when the latter is shift symmetric. As a simple application of our formalism, we give the covariantized version of the conformal Galileon.
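
For readers unfamiliar with the objects involved, a representative building block of this kind of construction (written here in a common convention, not necessarily the paper's) is the cubic Galileon term multiplied by a free function of the field and its kinetic term,

\[
\mathcal{L}_3 = G_3(\phi, X)\,\Box\phi, \qquad X \equiv -\tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi ,
\]

which still yields second-order equations of motion; the paper shows that linear combinations of such "Galileon Lagrangian times an arbitrary function of \((\phi, X)\)" terms exhaust the theories with this property, and constructs their curved space-time completions.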

1,157 citations


Journal ArticleDOI
TL;DR: This paper shows that reformulating that step as a constrained flow optimization results in a convex problem and takes advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast.
Abstract: Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.
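
The sketch below is only a toy illustration of the underlying structure — a directed acyclic graph of detections across frames from which low-cost paths are extracted one after another — and not the authors' globally optimal flow formulation; the cost parameters (entry_cost, link_reward, max_jump) are made up for the example.

import math

def link_tracks(detections, max_jump=50.0, entry_cost=10.0, link_reward=5.0, k=2):
    """Greedily extract up to k low-cost trajectories from a DAG of detections.

    detections: list of frames, each frame a list of (x, y) positions.
    A link between consecutive frames costs (distance - link_reward), and
    every trajectory additionally pays entry_cost to start.
    """
    used = set()                       # (frame, index) pairs already assigned
    tracks = []
    for _ in range(k):
        best = {}                      # node -> (cost of best path ending here, predecessor)
        for f, frame in enumerate(detections):
            for i, p in enumerate(frame):
                if (f, i) in used:
                    continue
                cand = [(entry_cost, None)]        # option: start a new track here
                if f > 0:
                    for j, q in enumerate(detections[f - 1]):
                        if (f - 1, j) not in best:
                            continue
                        d = math.dist(p, q)
                        if d <= max_jump:
                            cand.append((best[(f - 1, j)][0] + d - link_reward, (f - 1, j)))
                best[(f, i)] = min(cand, key=lambda c: c[0])
        if not best:
            break
        # cheapest complete path (the exit cost is constant, so it does not affect the choice)
        end = min(best, key=lambda node: best[node][0])
        path, node = [], end
        while node is not None:
            path.append(node)
            node = best[node][1]
        tracks.append(path[::-1])
        used.update(path)
    return tracks

frames = [[(0.0, 0.0), (5.0, 5.0)], [(1.0, 0.5), (5.5, 5.2)], [(2.1, 0.9)]]
print(link_tracks(frames))   # two short trajectories through the three frames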

1,076 citations


Journal ArticleDOI
TL;DR: The ATLAS3D project as discussed by the authors is a multi-wavelength survey combined with a theoretical modelling effort, which provides multicolour imaging, two-dimensional kinematics of the atomic (H i), molecular (CO) and ionized gas (Hβ, [O iii] and [N i]), together with the kinematics and population of the stars (Hβ, Fe5015 and Mg b), for a carefully selected, volume-limited (1.16 × 10^5 Mpc^3) sample of 260 early-type (elliptical E and lenticular S0) galaxies.
Abstract: The ATLAS3D project is a multiwavelength survey combined with a theoretical modelling effort. The observations span from the radio to the millimetre and optical, and provide multicolour imaging, two-dimensional kinematics of the atomic (H i), molecular (CO) and ionized gas (Hβ, [O iii] and [N i]), together with the kinematics and population of the stars (Hβ, Fe5015 and Mg b), for a carefully selected, volume-limited (1.16 × 10^5 Mpc^3) sample of 260 early-type (elliptical E and lenticular S0) galaxies (ETGs). The models include semi-analytic, N-body binary mergers and cosmological simulations of galaxy formation. Here we present the science goals for the project and introduce the galaxy sample and the selection criteria. The sample consists of nearby (D < 42 Mpc, Galactic latitude |b| > 15 degrees) morphologically selected ETGs extracted from a parent sample of 871 galaxies (8 per cent E, 22 per cent S0 and 70 per cent spirals) brighter than M_K < −21.5 mag (stellar mass M_star ≳ 6 × 10^9 M_⊙). We analyse possible selection biases and we conclude that the parent sample is essentially complete and statistically representative of the nearby galaxy population. We present the size-luminosity relation for the spirals and ETGs and show that the ETGs in the ATLAS3D sample define a tight red sequence in a colour-magnitude diagram, with few objects in the transition from the blue cloud. We describe the strategy of the SAURON integral-field observations and the extraction of the stellar kinematics with the pPXF method. We find typical 1σ errors of ΔV ≈ 6 km s^−1, Δσ ≈ 7 km s^−1, Δh3 ≈ Δh4 ≈ 0.03 in the mean velocity, the velocity dispersion and Gauss-Hermite (GH) moments for galaxies with effective dispersion σ_e ≳ 120 km s^−1. For galaxies with lower σ_e (≈ 40 per cent of the sample) the GH moments are gradually penalized by pPXF towards zero to suppress the noise produced by the spectral undersampling and only V and σ can be measured. We give an overview of the characteristics of the other main data sets already available for our sample and of the ongoing modelling projects.

954 citations


Journal ArticleDOI
TL;DR: In this article, the authors carried out a detailed environmental evaluation of geopolymer concrete production using the Life Cycle Assessment methodology and found that the production of most standard types of geopolymer concrete has a slightly lower impact on global warming than that of standard Ordinary Portland Cement (OPC) concrete.

868 citations


Book
07 Oct 2011
TL;DR: This book provides an introduction to random matrix theory and shows how it can be used to tackle a variety of problems in wireless communications, including performance analysis of CDMA, MIMO and multi-cell networks, as well as signal detection and estimation in cognitive radio networks.
Abstract: Blending theoretical results with practical applications, this book provides an introduction to random matrix theory and shows how it can be used to tackle a variety of problems in wireless communications. The Stieltjes transform method, free probability theory, combinatoric approaches, deterministic equivalents and spectral analysis methods for statistical inference are all covered from a unique engineering perspective. Detailed mathematical derivations are presented throughout, with thorough explanation of the key results and all fundamental lemmas required for the reader to derive similar calculus on their own. These core theoretical concepts are then applied to a wide range of real-world problems in signal processing and wireless communications, including performance analysis of CDMA, MIMO and multi-cell networks, as well as signal detection and estimation in cognitive radio networks. The rigorous yet intuitive style helps demonstrate to students and researchers alike how to choose the correct approach for obtaining mathematically accurate results.
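
As a flavour of the kind of result such an analysis builds on (a standard fact, not something specific to this book), the eigenvalues of a large sample covariance matrix of i.i.d. entries concentrate on the Marchenko-Pastur law; the dimensions below are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 2000                       # observation dimension n, number of samples N
c = n / N                              # aspect ratio of the data matrix

X = rng.standard_normal((n, N))        # i.i.d. unit-variance entries
eigvals = np.linalg.eigvalsh(X @ X.T / N)

# Marchenko-Pastur density, supported on [(1-sqrt(c))^2, (1+sqrt(c))^2] for c <= 1.
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
centers = np.linspace(lo, hi, 41)[:-1] + (hi - lo) / 80
mp_density = np.sqrt((hi - centers) * (centers - lo)) / (2 * np.pi * c * centers)

hist, _ = np.histogram(eigvals, bins=40, range=(lo, hi), density=True)
print("max deviation from Marchenko-Pastur:", np.abs(hist - mp_density).max())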

Journal ArticleDOI
14 Apr 2011-Neuron
TL;DR: It is found that most sleep slow waves and the underlying active and inactive neuronal states occur locally, especially in late sleep, and that slow waves can propagate, usually from medial prefrontal cortex to the medial temporal lobe and hippocampus.

Book
23 Dec 2011
TL;DR: This monograph covers proximal methods, block-coordinate descent, reweighted l2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provides an extensive set of experiments to compare various algorithms from a computational point of view.
Abstract: Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. The goal of this monograph is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted l2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
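
A minimal sketch of one of the tools covered, the proximal-gradient (ISTA) iteration for l1-regularized least squares; the problem sizes and regularization weight are arbitrary.

import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal-gradient method for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.1)
print("nonzeros recovered:", np.flatnonzero(np.abs(x_hat) > 1e-3))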

Journal ArticleDOI
TL;DR: The fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well is described, and it is conjectured that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly.
Abstract: Convolutional low-density parity-check (LDPC) ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon “threshold saturation.” This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error correcting radius that is increasing in the blocklength. Although we prove the “threshold saturation” only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis.
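
To make the thresholds concrete, the sketch below runs the standard density-evolution recursion for an uncoupled (dv, dc)-regular ensemble on the BEC and binary-searches its BP threshold; for the (3,6) ensemble this gives roughly 0.429, whereas the MAP threshold that spatial coupling lets BP approach is about 0.488 (the coupled recursion itself is not reproduced here).

def bp_threshold(dv=3, dc=6, tol=1e-10, iters=10000):
    """Binary-search the BP threshold of a (dv, dc)-regular LDPC ensemble on the BEC.

    Density evolution: x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1).
    The threshold is the largest erasure probability eps for which x -> 0.
    """
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(bp_threshold())   # approximately 0.429 for the (3, 6) ensemble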

Proceedings Article
12 Dec 2011
TL;DR: This work provides a non-asymptotic analysis of the convergence of two well-known algorithms, stochastic gradient descent as well as a simple modification where iterates are averaged, suggesting that a learning rate proportional to the inverse of the number of iterations, while leading to the optimal convergence rate, is not robust to the lack of strong convexity or the setting of the proportionality constant.
Abstract: We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients. This problem includes standard machine learning algorithms such as kernel logistic regression and least-squares regression, and is commonly referred to as a stochastic approximation problem in the operations research community. We provide a non-asymptotic analysis of the convergence of two well-known algorithms, stochastic gradient descent (a.k.a. Robbins-Monro algorithm) as well as a simple modification where iterates are averaged (a.k.a. Polyak-Ruppert averaging). Our analysis suggests that a learning rate proportional to the inverse of the number of iterations, while leading to the optimal convergence rate in the strongly convex case, is not robust to the lack of strong convexity or the setting of the proportionality constant. This situation is remedied when using slower decays together with averaging, robustly leading to the optimal rate of convergence. We illustrate our theoretical results with simulations on synthetic and standard datasets.
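
A toy illustration of the qualitative message (not the paper's experiments): on a streaming least-squares problem, a step size proportional to 1/t with a poorly chosen constant stalls, while a 1/sqrt(t) step size combined with Polyak-Ruppert averaging behaves robustly; all constants below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 20000
w_star = rng.standard_normal(d)

def sgd(step_fn, average=False):
    """Stochastic gradient descent on E[(x^T w - y)^2] with streaming samples."""
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for t in range(1, n + 1):
        x = rng.standard_normal(d)
        y = x @ w_star + 0.1 * rng.standard_normal()
        grad = 2 * (x @ w - y) * x
        w -= step_fn(t) * grad
        w_bar += (w - w_bar) / t          # running Polyak-Ruppert average
    return np.linalg.norm((w_bar if average else w) - w_star)

print("1/t step, no averaging:   ", sgd(lambda t: 0.01 / t))
print("1/sqrt(t) step, averaged: ", sgd(lambda t: 0.05 / np.sqrt(t), average=True))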

Journal ArticleDOI
TL;DR: This paper quantifies the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote and shows that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems.
Abstract: Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance than its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enables it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for “good” reconstruction quality.

Journal ArticleDOI
TL;DR: A learning method is proposed, which is called Stable Estimator of Dynamical Systems (SEDS), to learn the parameters of the DS to ensure that all motions closely follow the demonstrations while ultimately reaching and stopping at the target.
Abstract: This paper presents a method to learn discrete robot motions from a set of demonstrations. We model a motion as a nonlinear autonomous (i.e., time-invariant) dynamical system (DS) and define sufficient conditions to ensure global asymptotic stability at the target. We propose a learning method, which is called Stable Estimator of Dynamical Systems (SEDS), to learn the parameters of the DS to ensure that all motions closely follow the demonstrations while ultimately reaching and stopping at the target. Time-invariance and global asymptotic stability at the target ensures that the system can respond immediately and appropriately to perturbations that are encountered during the motion. The method is evaluated through a set of robot experiments and on a library of human handwriting motions.
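
This is not SEDS itself, only a minimal sketch of the property the method guarantees: a time-invariant dynamical system with a globally asymptotically stable target keeps converging even when perturbed mid-motion (the linear system below stands in for the learned nonlinear estimate; all numbers are arbitrary).

import numpy as np

def simulate(f, x0, target, dt=0.01, steps=2000, perturb_at=500, perturb=None):
    """Integrate a time-invariant dynamical system x' = f(x) with forward Euler.

    A perturbation can be injected mid-motion; with a globally asymptotically
    stable f, the trajectory still converges to the target.
    """
    x = np.array(x0, dtype=float)
    for k in range(steps):
        if perturb is not None and k == perturb_at:
            x = x + perturb                      # external disturbance
        x = x + dt * f(x)
    return x, np.linalg.norm(x - target)

# Toy stable system: f(x) = A (x - x*) with A Hurwitz (negative-definite symmetric part).
target = np.array([1.0, 2.0])
A = np.array([[-2.0, 0.5], [-0.5, -1.0]])
f = lambda x: A @ (x - target)

print(simulate(f, x0=[5.0, -3.0], target=target, perturb=np.array([2.0, 2.0])))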

Book ChapterDOI
04 Dec 2011
TL;DR: An efficient simulation algorithm is proposed to model the behaviour of BKZ in high dimension with high blocksize ≥50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates.
Abstract: The best lattice reduction algorithm known in practice for high dimension is Schnorr-Euchner's BKZ: all security estimates of lattice cryptosystems are based on NTL's old implementation of BKZ. However, recent progress on lattice enumeration suggests that BKZ and its NTL implementation are no longer optimal, but the precise impact on security estimates was unclear. We assess this impact thanks to extensive experiments with BKZ 2.0, the first state-of-the-art implementation of BKZ incorporating recent improvements, such as Gama-Nguyen-Regev pruning. We propose an efficient simulation algorithm to model the behaviour of BKZ in high dimension with high blocksize ≥50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates. For instance, our simulation suggests that the smallest NTRUSign parameter set, which was claimed to provide at least 93-bit security against key-recovery lattice attacks, actually offers at most 65-bit security.

Journal ArticleDOI
TL;DR: In this article, the authors presented a first estimate of the exposure of the world's large port cities (population exceeding one million inhabitants in 2005) to coastal flooding due to sea-level rise and storm surge now and in the 2070s, taking into account scenarios of socioeconomic and climate changes.
Abstract: This paper presents a first estimate of the exposure of the world’s large port cities (population exceeding one million inhabitants in 2005) to coastal flooding due to sea-level rise and storm surge now and in the 2070s, taking into account scenarios of socio-economic and climate changes. The analysis suggests that about 40 million people (0.6% of the global population or roughly 1 in 10 of the total port city population in the cities considered) are currently exposed to a 1 in 100 year coastal flood event. For assets, the total value exposed in 2005 across all cities considered is estimated to be US$3,000 billion; corresponding to around 5% of global GDP in 2005 (both measured in international USD) with USA, Japan and the Netherlands being the countries with the highest values. By the 2070s, total population exposed could grow more than threefold due to the combined effects of sea-level rise, subsidence, population growth and urbanisation with asset exposure increasing to more than ten times current levels or approximately 9% of projected global GDP in this period. On the global-scale, population growth, socio-economic growth and urbanization are the most important drivers of the overall increase in exposure particularly in developing countries, as low-lying areas are urbanized. Climate change and subsidence can significantly exacerbate this increase in exposure. Exposure is concentrated in a few cities: collectively Asia dominates population exposure now and in the future and also dominates asset exposure by the 2070s. Importantly, even if the environmental or socio-economic changes were smaller than assumed here the underlying trends would remain. This research shows the high potential benefits from risk-reduction planning and policies at the city scale to address the issues raised by the possible growth in exposure.

Journal ArticleDOI
TL;DR: In this article, a review of phase-change material (PCM) behavior prediction is presented, based on the first law and on the second law of thermodynamics, with selected results for several configurations, from numerous authors.
Abstract: Energy storage components improve the energy efficiency of systems by reducing the mismatch between supply and demand. For this purpose, phase-change materials are particularly attractive since they provide a high-energy storage density at a constant temperature which corresponds to the phase transition temperature of the material. Nevertheless, the incorporation of phase-change materials (PCMs) in a particular application calls for an analysis that will enable the researcher to optimize performances of systems. Due to the non-linear nature of the problem, numerical analysis is generally required to obtain appropriate solutions for the thermal behavior of systems. Therefore, a large amount of research has been carried out on PCMs behavior predictions. The review will present models based on the first law and on the second law of thermodynamics. It shows selected results for several configurations, from numerous authors so as to enable one to start his/her research with an exhaustive overview of the subject. This overview stresses the need to match experimental investigations with recent numerical analyses since in recent years, models mostly rely on other models in their validation stages.
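
As an illustration of the first-law (enthalpy) formulation this kind of review covers, here is a minimal explicit 1-D enthalpy-method sketch of melting in a slab; the material properties, boundary conditions and discretization are arbitrary and purely for demonstration.

import numpy as np

# Hypothetical dimensionless material: melt temperature 0, unit density,
# unit heat capacity and conductivity, latent heat L_latent.
N = 100
dx = 1.0 / N
alpha = 1.0
dt = 0.4 * dx ** 2 / alpha            # explicit-scheme stability: dt <= dx^2 / (2 alpha)
L_latent = 1.0

def temperature(H):
    """Recover temperature from volumetric enthalpy (melt temperature = 0)."""
    T = np.where(H < 0.0, H, 0.0)                     # solid: sensible heat only
    return np.where(H > L_latent, H - L_latent, T)    # liquid; mushy zone stays at 0

H = np.full(N, -0.5)                  # start as a sub-cooled solid slab
for _ in range(5000):
    T = temperature(H)
    left = np.concatenate(([1.0], T[:-1]))    # hot boundary held at T = 1 on the left
    right = np.concatenate((T[1:], [T[-1]]))  # insulated (zero-flux) right boundary
    H = H + dt * alpha * (left - 2.0 * T + right) / dx ** 2   # first-law update

front = np.argmax(temperature(H) <= 0.0)      # first cell not yet above the melt temperature
print("melt front is near cell", front, "of", N)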

Journal ArticleDOI
TL;DR: A staging system that divides the disease into four stages is aimed at assisting the clinician in determining the appropriate therapy, forecasting prognosis, and implementing follow-up steps required for the management of the leishmaniosis patient.
Abstract: The LeishVet group has formed recommendations designed primarily to help the veterinary clinician in the management of canine leishmaniosis. The complexity of this zoonotic infection and the wide range of its clinical manifestations, from inapparent infection to severe disease, make the management of canine leishmaniosis challenging. The recommendations were constructed by combining a comprehensive review of evidence-based studies, extensive clinical experience and critical consensus opinion discussions. The guidelines presented here in a short version with graphical topic displays suggest standardized and rational approaches to the diagnosis, treatment, follow-up, control and prevention of canine leishmaniosis. A staging system that divides the disease into four stages is aimed at assisting the clinician in determining the appropriate therapy, forecasting prognosis, and implementing follow-up steps required for the management of the leishmaniosis patient.

Journal ArticleDOI
TL;DR: A set of building blocks for constructing descriptors which can be combined together and jointly optimized so as to minimize the error of a nearest-neighbor classifier are described.
Abstract: In this paper, we explore methods for learning local image descriptors from training data. We describe a set of building blocks for constructing descriptors which can be combined together and jointly optimized so as to minimize the error of a nearest-neighbor classifier. We consider both linear and nonlinear transforms with dimensionality reduction, and make use of discriminant learning techniques such as Linear Discriminant Analysis (LDA) and Powell minimization to solve for the parameters. Using these techniques, we obtain descriptors that exceed state-of-the-art performance with low dimensionality. In addition to new experiments and recommendations for descriptor learning, we are also making available a new and realistic ground truth data set based on multiview stereo data.

Proceedings Article
12 Dec 2011
TL;DR: This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
Abstract: We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
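
A toy illustration of the setting (not the paper's analysis): an accelerated proximal-gradient (FISTA-style) iteration for the lasso in which the gradient oracle is deliberately corrupted by noise whose magnitude decays with the iteration count; the decay rate, data and regularization weight are arbitrary.

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_fista(A, b, lam, iters=300, err0=1.0, decay=2.0):
    """Accelerated proximal-gradient for the lasso with an inexact gradient.

    The gradient error is forced to shrink like 1 / k**decay; sufficiently
    fast decay leaves the error-free convergence rate intact.
    """
    rng = np.random.default_rng(0)
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for k in range(1, iters + 1):
        noise = rng.standard_normal(x.size)
        noise *= err0 / (k ** decay) / np.linalg.norm(noise)
        grad = A.T @ (A @ y - b) + noise          # inexact gradient oracle
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_next + (t - 1) / t_next * (x_next - x)
        x, t = x_next, t_next
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[:5] = 3.0
b = A @ x_true
# indices of the large recovered coefficients (ideally the first five)
print(np.flatnonzero(np.abs(inexact_fista(A, b, lam=0.1)) > 1e-2))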

Journal ArticleDOI
TL;DR: It is shown in this article how a large system analysis based on random matrix theory (RMT) can provide tight and tractable approximations of key performance measures of SCNs.
Abstract: The exponentially increasing demand for wireless data services requires a massive network densification that is neither economically nor ecologically viable with the current cellular system architectures. A promising solution to this problem is the concept of small-cell networks (SCNs), which is founded by the idea of a very dense deployment of self-organizing, low-cost, low-power, base stations (BSs). Although SCNs have the potential to significantly increase the capacity of cellular networks while reducing their energy consumption, they pose many new challenges to the optimal system design. We show in this article how a large system analysis based on random matrix theory (RMT) can provide tight and tractable approximations of key performance measures of SCNs.

Journal ArticleDOI
TL;DR: It is shown that melting in hard disks proceeds in two steps with a liquid phase, a hexatic phase, and a solid, and the hexatic-solid transition is continuous while, surprisingly, the liquid-hexatic transition is of first order.
Abstract: Melting in two spatial dimensions, as realized in thin films or at interfaces, represents one of the most fascinating phase transitions in nature, but it remains poorly understood. Even for the fundamental hard-disk model, the melting mechanism has not been agreed upon after 50 years of studies. A recent Monte Carlo algorithm allows us to thermalize systems large enough to access the thermodynamic regime. We show that melting in hard disks proceeds in two steps with a liquid phase, a hexatic phase, and a solid. The hexatic-solid transition is continuous while, surprisingly, the liquid-hexatic transition is of first order. This melting scenario solves one of the fundamental statistical-physics models, which is at the root of a large body of theoretical, computational, and experimental research.
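
The paper's results rest on a specialised event-chain Monte Carlo algorithm able to thermalize very large systems; the sketch below is only the textbook local-move Metropolis algorithm for the same hard-disk model in a periodic box, to make the model itself concrete (system size, packing fraction and step size are arbitrary and far too small for the physics discussed in the paper).

import numpy as np

def metropolis_hard_disks(n_side=8, density=0.70, sweeps=200, step=0.1, seed=0):
    """Local-move Monte Carlo for N unit-diameter hard disks in a periodic square box."""
    rng = np.random.default_rng(seed)
    N = n_side ** 2
    box = np.sqrt(N * np.pi * 0.25 / density)       # box size for the requested packing fraction
    # Start from a square lattice (overlap-free at this density).
    g = (np.arange(n_side) + 0.5) * box / n_side
    pos = np.array([(x, y) for x in g for y in g])
    accepted = 0
    for _ in range(sweeps * N):
        i = rng.integers(N)
        trial = (pos[i] + rng.uniform(-step, step, 2)) % box
        d = pos - trial
        d -= box * np.round(d / box)                # minimum-image convention
        dist2 = np.einsum('ij,ij->i', d, d)
        dist2[i] = np.inf                           # ignore the moved disk itself
        if dist2.min() >= 1.0:                      # accept only overlap-free moves
            pos[i] = trial
            accepted += 1
    return pos, accepted / (sweeps * N)

pos, acc = metropolis_hard_disks()
print("acceptance rate:", acc)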

Journal ArticleDOI
TL;DR: The aim of this paper is to review the state-of-the-art of Field Programmable Gate Array (FPGA) technologies and their contribution to industrial control applications and two short case studies of Neural Network control systems designs targeting FPGAs are presented.
Abstract: The aim of this paper is to review the state-of-the-art of Field Programmable Gate Array (FPGA) technologies and their contribution to industrial control applications. Authors start by addressing various research fields which can exploit the advantages of FPGAs. The features of these devices are then presented, followed by their corresponding design tools. To illustrate the benefits of using FPGAs in the case of complex control applications, a sensorless motor controller has been treated. This controller is based on the Extended Kalman Filter. Its development has been made according to a dedicated design methodology, which is also discussed. The use of FPGAs to implement artificial intelligence-based industrial controllers is then briefly reviewed. The final section presents two short case studies of Neural Network control systems designs targeting FPGAs.

Book ChapterDOI
14 Aug 2011
TL;DR: It is shown that fully homomorphic encryption can be implemented using simple arithmetic operations; by borrowing some optimizations from the recent Gentry-Halevi implementation of Gentry's scheme, roughly the same level of efficiency is obtained.
Abstract: At Eurocrypt 2010 van Dijk et al. described a fully homomorphic encryption scheme over the integers. The main appeal of this scheme (compared to Gentry's) is its conceptual simplicity. This simplicity comes at the expense of a public key size in O(λ^10) which is too large for any practical system. In this paper we reduce the public key size to O(λ^7) by encrypting with a quadratic form in the public key elements, instead of a linear form. We prove that the scheme remains semantically secure, based on a stronger variant of the approximate-GCD problem, already considered by van Dijk et al. We also describe the first implementation of the resulting fully homomorphic scheme. Borrowing some optimizations from the recent Gentry-Halevi implementation of Gentry's scheme, we obtain roughly the same level of efficiency. This shows that fully homomorphic encryption can be implemented using simple arithmetic operations.
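
A deliberately insecure toy of the underlying "encryption over the integers" idea — parameters orders of magnitude too small, no public key at all (encryption uses the secret p directly), and noise kept non-negative to keep the modular reduction simple — just to show why ciphertext addition and multiplication act as XOR and AND on the hidden bits.

import secrets

# Toy parameters -- far too small to be secure; for illustration only.
P_BITS, Q_BITS, R_BITS = 64, 128, 8

def keygen():
    return secrets.randbits(P_BITS) | 1 | (1 << (P_BITS - 1))   # secret odd modulus p

def encrypt(p, m):
    q = secrets.randbits(Q_BITS)
    r = secrets.randbits(R_BITS)
    return q * p + 2 * r + (m & 1)       # correct as long as the noise 2*r + m stays below p

def decrypt(p, c):
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
print(decrypt(p, c0 + c1))        # XOR of the bits -> 1
print(decrypt(p, c0 * c1))        # AND of the bits -> 0
print(decrypt(p, c1 * c1 + c1))   # 1*1 + 1 = 0 mod 2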

Journal ArticleDOI
TL;DR: An experimental study on the ageing of insulated-gate bipolar transistor (IGBT) power modules shows that ageing mechanisms mainly concern wire bonds and emitter metallization, with gradual impact depending on protocol severity.
Abstract: This paper presents an experimental study on the ageing of insulated-gate bipolar transistor (IGBT) power modules. The aim is to identify the effects of power cycling on these devices with high baseplate temperatures (60 °C to 90 °C) and wide temperature swings (60 °C to 100 °C). These values for thermal stresses have been defined according to automotive applications. The test conditions are provided by two types of test benches that will be described in this paper. The changes in electrical and thermal indicators are observed regularly by a monitoring system. At the end of the test (reaching damage criterion or failure), different analyses are performed (acoustic scanning and SEM imaging), and the damage is listed systematically. Nineteen samples of 600-V 200-A IGBT modules were thus aged using five different power-cycling protocols. The final summary of results shows that ageing mechanisms mainly concern wire bonds and emitter metallization, with gradual impact depending on protocol severity.

Journal ArticleDOI
TL;DR: A set of macroinstructions, named Reference Value Advisor, is created for use in Microsoft Excel to calculate reference limits applying different methods, including currently unavailable methods, and should be useful in veterinary clinical pathology when only small reference sample groups are available.
Abstract: International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals (CI)) using a nonparametric method when n ≥ 40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available.
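
Not the Excel macro itself, but the core nonparametric computation it automates can be sketched in a few lines: the 2.5th and 97.5th percentiles of the reference sample with bootstrap confidence intervals on each limit (the bootstrap CI approach, the 90% CI level and the synthetic data below are illustrative choices, not necessarily those of the program).

import numpy as np

def reference_interval(values, boot=2000, seed=0):
    """Nonparametric 95% reference interval with bootstrap 90% CIs on the limits.

    Mirrors the usual recommendation to use the nonparametric method only
    when the reference sample is reasonably large (roughly n >= 40).
    """
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    lower, upper = np.percentile(values, [2.5, 97.5])
    boots = np.array([
        np.percentile(rng.choice(values, size=values.size, replace=True), [2.5, 97.5])
        for _ in range(boot)
    ])
    lower_ci = np.percentile(boots[:, 0], [5, 95])
    upper_ci = np.percentile(boots[:, 1], [5, 95])
    return (lower, tuple(lower_ci)), (upper, tuple(upper_ci))

rng = np.random.default_rng(1)
sample = rng.normal(loc=100.0, scale=10.0, size=120)   # synthetic analyte values
print(reference_interval(sample))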

Journal ArticleDOI
TL;DR: The denoising process is expressed as a linear expansion of thresholds (LET) that is optimized by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE) derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate).
Abstract: We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.