
Showing papers by "École Polytechnique published in 2015"


Proceedings Article
01 Jan 2015
TL;DR: This paper extends the idea that a student network can imitate the soft output of a larger teacher network or ensemble of networks, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student.
Abstract: While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
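
The hint mechanism lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code): a learned regressor maps the student's intermediate layer to the teacher's hidden size, and an L2 hint loss is combined with the usual soft-target distillation loss. Layer sizes, the temperature, and the toy architectures are illustrative assumptions.

```python
# Minimal, hypothetical sketch of hint-based training (FitNets-style); not the
# authors' released code. Layer sizes, temperature, and architectures are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher_hidden, student_hidden, n_classes = 256, 64, 10

teacher = nn.Sequential(
    nn.Linear(784, teacher_hidden), nn.ReLU(), nn.Linear(teacher_hidden, n_classes)
)
student_body = nn.Sequential(nn.Linear(784, student_hidden), nn.ReLU())
student_head = nn.Linear(student_hidden, n_classes)
# Regressor mapping the (smaller) student hint layer to the teacher's hidden size.
regressor = nn.Linear(student_hidden, teacher_hidden)

params = (
    list(student_body.parameters())
    + list(student_head.parameters())
    + list(regressor.parameters())
)
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, 784)  # stand-in batch
T = 4.0                   # distillation temperature

with torch.no_grad():
    t_hidden = teacher[1](teacher[0](x))  # teacher's intermediate representation
    t_logits = teacher[2](t_hidden)

s_hidden = student_body(x)
s_logits = student_head(s_hidden)

# L2 "hint" loss on intermediate representations, plus soft-target distillation.
hint_loss = F.mse_loss(regressor(s_hidden), t_hidden)
kd_loss = F.kl_div(
    F.log_softmax(s_logits / T, dim=1),
    F.softmax(t_logits / T, dim=1),
    reduction="batchmean",
) * (T * T)

opt.zero_grad()
(hint_loss + kd_loss).backward()
opt.step()
```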

2,560 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4 +5117 more (314 institutions)
TL;DR: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4ℓ decay channels.
Abstract: A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H→γγ and H→ZZ→4l decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is mH=125.09±0.21 (stat)±0.11 (syst) GeV.

1,567 citations


Journal ArticleDOI
Markus Ackermann, Andrea Albert1, Brandon Anderson2, W. B. Atwood3, Luca Baldini1, Guido Barbiellini4, Denis Bastieri4, Keith Bechtol5, Ronaldo Bellazzini4, Elisabetta Bissaldi4, Roger Blandford1, E. D. Bloom1, R. Bonino4, Eugenio Bottacini1, T. J. Brandt6, Johan Bregeon7, P. Bruel8, R. Buehler, G. A. Caliandro1, R. A. Cameron1, R. Caputo3, M. Caragiulo4, P. A. Caraveo9, C. Cecchi4, Eric Charles1, A. Chekhtman10, James Chiang1, G. Chiaro11, Stefano Ciprini4, R. Claus1, Johann Cohen-Tanugi7, Jan Conrad2, Alessandro Cuoco4, S. Cutini4, Filippo D'Ammando9, A. De Angelis4, F. de Palma4, R. Desiante4, Seth Digel1, L. Di Venere12, Persis S. Drell1, Alex Drlica-Wagner13, R. Essig14, C. Favuzzi4, S. J. Fegan8, Elizabeth C. Ferrara6, W. B. Focke1, A. Franckowiak1, Yasushi Fukazawa15, Stefan Funk, P. Fusco4, F. Gargano4, Dario Gasparrini4, Nicola Giglietto4, Francesco Giordano4, Marcello Giroletti9, T. Glanzman1, G. Godfrey1, G. A. Gomez-Vargas4, I. A. Grenier16, Sylvain Guiriec6, M. Gustafsson17, E. Hays6, John W. Hewitt18, D. Horan8, T. Jogler1, Gudlaugur Johannesson19, M. Kuss4, Stefan Larsson2, Luca Latronico4, Jingcheng Li20, L. Li2, M. Llena Garde2, Francesco Longo4, F. Loparco4, P. Lubrano4, D. Malyshev1, M. Mayer, M. N. Mazziotta4, Julie McEnery6, Manuel Meyer2, Peter F. Michelson1, Tsunefumi Mizuno15, A. A. Moiseev21, M. E. Monzani1, A. Morselli4, S. Murgia22, E. Nuss7, T. Ohsugi15, M. Orienti9, E. Orlando1, J. F. Ormes23, David Paneque1, J. S. Perkins6, Melissa Pesce-Rollins1, F. Piron7, G. Pivato4, T. A. Porter1, S. Rainò4, R. Rando4, M. Razzano4, A. Reimer1, Olaf Reimer1, Steven Ritz3, Miguel A. Sánchez-Conde2, André Schulz, Neelima Sehgal24, Carmelo Sgrò4, E. J. Siskind, F. Spada4, Gloria Spandre4, P. Spinelli4, Louis E. Strigari25, Hiroyasu Tajima1, Hiromitsu Takahashi15, J. B. Thayer1, L. Tibaldo1, Diego F. Torres20, Eleonora Troja6, Giacomo Vianello1, Michael David Werner, Brian L Winer26, K. S. Wood27, Matthew Wood1, Gabrijela Zaharijas4, Stephan Zimmer2 
TL;DR: In this article, the authors report on γ-ray observations of the Milky Way's dwarf spheroidal satellite galaxies (dSphs) based on six years of Fermi Large Area Telescope data processed with the new Pass 8 event-level analysis.
Abstract: The dwarf spheroidal satellite galaxies (dSphs) of the Milky Way are some of the most dark matter (DM) dominated objects known. We report on γ-ray observations of Milky Way dSphs based on six years of Fermi Large Area Telescope data processed with the new Pass 8 event-level analysis. None of the dSphs are significantly detected in γ rays, and we present upper limits on the DM annihilation cross section from a combined analysis of 15 dSphs. These constraints are among the strongest and most robust to date and lie below the canonical thermal relic cross section for DM of mass ≲100 GeV annihilating via quark and τ-lepton channels.

1,166 citations


Journal ArticleDOI
Markus Ackermann, Marco Ajello1, Andrea Albert2, W. B. Atwood3 +174 more (43 institutions)
TL;DR: The first IGRB measurement with the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope (Fermi) used 10 months of sky-survey data and considered an energy range between 200 MeV and 100 GeV.
Abstract: The gamma-ray sky can be decomposed into individually detected sources, diffuse emission attributed to the interactions of Galactic cosmic rays with gas and radiation fields, and a residual all-sky emission component commonly called the isotropic diffuse gamma-ray background (IGRB). The IGRB comprises all extragalactic emissions too faint or too diffuse to be resolved in a given survey, as well as any residual Galactic foregrounds that are approximately isotropic. The first IGRB measurement with the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope (Fermi) used 10 months of sky-survey data and considered an energy range between 200 MeV and 100 GeV. Improvements in event selection and characterization of cosmic-ray backgrounds, better understanding of the diffuse Galactic emission, and a longer data accumulation of 50 months, allow for a refinement and extension of the IGRB measurement with the LAT, now covering the energy range from 100 MeV to 820 GeV. The IGRB spectrum shows a significant high-energy cutoff feature, and can be well described over nearly four decades in energy by a power law with exponential cutoff having a spectral index of 2.32 ± 0.02 and a break energy of (279 ± 52) GeV using our baseline diffuse Galactic emission model. The total intensity attributed to the IGRB is (7.2 ± 0.6) × 10⁻⁶ cm⁻² s⁻¹ sr⁻¹ above 100 MeV, with an additional +15%/−30% systematic uncertainty due to the Galactic diffuse foregrounds.
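
For reference, the quoted fit corresponds to a power law with an exponential cutoff; rendered in LaTeX (overall normalization omitted), the spectral shape is:

```latex
\[
  \frac{dN}{dE} \;\propto\; E^{-\gamma}\,\exp\!\left(-\frac{E}{E_b}\right),
  \qquad \gamma = 2.32 \pm 0.02, \qquad E_b = (279 \pm 52)\ \mathrm{GeV}.
\]
```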

680 citations


Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam +2134 more (142 institutions)
TL;DR: The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays, and no significant deviations are found.
Abstract: Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of production and decay measurements are combined. The decay channels include gamma gamma, ZZ, WW, tau tau, bb, and mu mu pairs. The data samples were collected in 2011 and 2012 and correspond to integrated luminosities of up to 5.1 inverse femtobarns at 7 TeV and up to 19.7 inverse femtobarns at 8 TeV. From the high-resolution gamma gamma and ZZ channels, the mass of the Higgs boson is determined to be 125.02 +0.26 -0.27 (stat) +0.14 -0.15 (syst) GeV. For this mass value, the event yields obtained in the different analyses tagging specific decay channels and production mechanisms are consistent with those expected for the standard model Higgs boson. The combined best-fit signal relative to the standard model expectation is 1.00 +/- 0.09 (stat) +0.08 -0.07 (theo) +/- 0.07 (syst) at the measured mass. The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays. No significant deviations are found.

677 citations


Journal ArticleDOI
Markus Ackermann, Marco Ajello1, W. B. Atwood2, Luca Baldini3 +180 more (41 institutions)
TL;DR: The third catalog of active galactic nuclei (AGNs) detected by the Fermi-LAT (3LAC) is presented in this paper, which is based on the 3FGL of sources detected between 100 MeV and 300 GeV.
Abstract: The third catalog of active galactic nuclei (AGNs) detected by the Fermi-LAT (3LAC) is presented. It is based on the third Fermi-LAT catalog (3FGL) of sources detected between 100 MeV and 300 GeV ...

668 citations


Journal ArticleDOI
TL;DR: In this communication, state-of-the-art quantum control techniques are reviewed and put into perspective by a consortium of experts in optimal control theory and applications to spectroscopy, imaging, as well as quantum dynamics of closed and open systems.
Abstract: It is control that turns scientific knowledge into useful technology: in physics and engineering it provides a systematic way for driving a dynamical system from a given initial state into a desired target state with minimized expenditure of energy and resources. As one of the cornerstones for enabling quantum technologies, optimal quantum control keeps evolving and expanding into areas as diverse as quantum-enhanced sensing, manipulation of single spins, photons, or atoms, optical spectroscopy, photochemistry, magnetic resonance (spectroscopy as well as medical imaging), quantum information processing and quantum simulation. In this communication, state-of-the-art quantum control techniques are reviewed and put into perspective by a consortium of experts in optimal control theory and applications to spectroscopy, imaging, as well as quantum dynamics of closed and open systems. We address key challenges and sketch a roadmap for future developments.

572 citations


Journal ArticleDOI
Halina Abramowicz1, Halina Abramowicz2, I. Abt3, Leszek Adamczyk4 +325 more (55 institutions)
TL;DR: A combination of all inclusive deep inelastic cross sections previously published by the H1 and ZEUS collaborations at HERA for neutral and charged current scattering for zero beam polarisation is presented in this paper.
Abstract: A combination is presented of all inclusive deep inelastic cross sections previously published by the H1 and ZEUS collaborations at HERA for neutral and charged current $e^{\pm}p$ scattering for zero beam polarisation. The data were taken at proton beam energies of 920, 820, 575 and 460 GeV and an electron beam energy of 27.5 GeV. The data correspond to an integrated luminosity of about 1 fb$^{-1}$ and span six orders of magnitude in negative four-momentum-transfer squared, $Q^2$, and Bjorken $x$. The correlations of the systematic uncertainties were evaluated and taken into account for the combination. The combined cross sections were input to QCD analyses at leading order, next-to-leading order and at next-to-next-to-leading order, providing a new set of parton distribution functions, called HERAPDF2.0. In addition to the experimental uncertainties, model and parameterisation uncertainties were assessed for these parton distribution functions. Variants of HERAPDF2.0 with an alternative gluon parameterisation, HERAPDF2.0AG, and using fixed-flavour-number schemes, HERAPDF2.0FF, are presented. The analysis was extended by including HERA data on charm and jet production, resulting in the variant HERAPDF2.0Jets. The inclusion of jet-production cross sections made a simultaneous determination of these parton distributions and the strong coupling constant possible, resulting in $\alpha_s(M_Z)=0.1183 \pm 0.0009 {\rm(exp)} \pm 0.0005{\rm (model/parameterisation)} \pm 0.0012{\rm (hadronisation)} ^{+0.0037}_{-0.0030}{\rm (scale)}$. An extraction of $xF_3^{\gamma Z}$ and results on electroweak unification and scaling violations are also presented.

514 citations


Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam2 +2802 more (215 institutions)
04 Jun 2015-Nature
TL;DR: In this paper, the branching fractions of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ⁺ and μ⁻) are measured, including the first observation of the B_s^0 → μ⁺μ⁻ decay.
Abstract: The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ⁺ and μ⁻) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B_s^0 → μ⁺μ⁻ and B^0 → μ⁺μ⁻ decays are very rare, with about four of the former occurring for every billion B_s^0 mesons produced, and one of the latter occurring for every ten billion B^0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton-proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B_s^0 → μ⁺μ⁻ decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B^0 → μ⁺μ⁻ decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton-proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B_s^0 and B^0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

467 citations


Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam2 +2119 more (141 institutions)
29 May 2015
TL;DR: In this paper, a search is presented for particle dark matter (DM), extra dimensions, and unparticles in events containing a jet and an imbalance in transverse momentum, using data collected at the LHC.
Abstract: Results are presented from a search for particle dark matter (DM), extra dimensions, and unparticles using events containing a jet and an imbalance in transverse momentum. The data were collected by the CMS detector in proton-proton collisions at the LHC and correspond to an integrated luminosity of 19.7 fb$^{-1}$ at a centre-of-mass energy of 8 TeV. The number of observed events is found to be consistent with the standard model prediction. Limits are placed on the DM-nucleon scattering cross section as a function of the DM particle mass for spin-dependent and spin-independent interactions. Limits are also placed on the scale parameter $M_\mathrm{D}$ in the ADD model of large extra dimensions, and on the unparticle model parameter $\Lambda_\mathrm{U}$. The constraints on ADD models and unparticles are the most stringent limits in this channel and those on the DM-nucleon scattering cross section are an improvement over previous collider results.

425 citations


Journal ArticleDOI
TL;DR: In this article, a study of the spin-parity and tensor structure of the interactions of the recently discovered Higgs boson is performed using the H to ZZ, Z gamma*, gamma* gamma* to 4 l, H to WW to l nu l nu, and H to gamma gamma decay modes.
Abstract: The study of the spin-parity and tensor structure of the interactions of the recently discovered Higgs boson is performed using the H to ZZ, Z gamma*, gamma* gamma* to 4 l, H to WW to l nu l nu, and H to gamma gamma decay modes. The full dataset recorded by the CMS experiment during the LHC Run 1 is used, corresponding to an integrated luminosity of up to 5.1 inverse femtobarns at a center-of-mass energy of 7 TeV and up to 19.7 inverse femtobarns at 8 TeV. A wide range of spin-two models is excluded at a 99% confidence level or higher, or at a 99.87% confidence level for the minimal gravity-like couplings, regardless of whether assumptions are made on the production mechanism. Any mixed-parity spin-one state is excluded in the ZZ and WW modes at a greater than 99.999% confidence level. Under the hypothesis that the resonance is a spin-zero boson, the tensor structure of the interactions of the Higgs boson with two vector bosons ZZ, Z gamma, gamma gamma, and WW is investigated and limits on eleven anomalous contributions are set. Tighter constraints on anomalous HVV interactions are obtained by combining the HZZ and HWW measurements. All observations are consistent with the expectations for the standard model Higgs boson with the quantum numbers J[PC]=0[++].

Journal ArticleDOI
TL;DR: In this paper, a phenomenological fracture initiation model for metals is developed for predicting ductile fracture in industrial practice based on the assumption that the onset of fracture is imminent with the formation of a primary or secondary band of localization.

Journal ArticleDOI
TL;DR: It is shown here that, with a small modification of the "Fast Iterative Shrinkage/Thresholding Algorithm," one can ensure the same upper bound for the decay of the energy as well as the convergence of the iterates to a minimizer.
Abstract: We discuss here the convergence of the iterates of the "Fast Iterative Shrinkage/Thresholding Algorithm," which is an algorithm proposed by Beck and Teboulle for minimizing the sum of two convex, lower-semicontinuous, and proper functions (defined in a Euclidean or Hilbert space), such that one is differentiable with Lipschitz gradient, and the proximity operator of the second is easy to compute. It builds a sequence of iterates for which the objective is controlled, up to a (nearly optimal) constant, by the inverse of the square of the iteration number. However, the convergence of the iterates themselves is not known. We show here that with a small modification, we can ensure the same upper bound for the decay of the energy, as well as the convergence of the iterates to a minimizer.
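
As a concrete illustration, here is a minimal Python sketch of FISTA applied to a LASSO problem. One commonly cited variant of the kind of small modification the paper studies replaces the classical momentum sequence with t_k = (k + a − 1)/a for some a > 2; the sketch below uses that form, and the problem data and the choice a = 3 are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch of FISTA for the LASSO, 0.5*||Ax - b||^2 + lam*||x||_1, with a
# momentum sequence of the form t_k = (k + a - 1)/a, a > 2, assumed here to be
# the flavour of small modification the paper studies; problem data and a = 3
# are illustrative assumptions.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=500, a=3.0):
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    for k in range(1, n_iter + 1):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        # (t_k - 1)/t_{k+1} = (k - 1)/(k + a) for t_k = (k + a - 1)/a
        y = x_new + (k - 1) / (k + a) * (x_new - x)
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = fista(A, b, lam=0.1)
print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-8))
```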

Journal ArticleDOI
TL;DR: Prospects for research on islands are highlighted to improve understanding of the ecology and evolution of communities in general, emphasising how attributes of islands combine to provide unusual research opportunities, the implications of which stretch far beyond islands.
Abstract: The study of islands as model systems has played an important role in the development of evolutionary and ecological theory. The 50th anniversary of MacArthur and Wilson's (December 1963) article, ‘An equilibrium theory of insular zoogeography’, was a recent milestone for this theme. Since 1963, island systems have provided new insights into the formation of ecological communities. Here, building on such developments, we highlight prospects for research on islands to improve our understanding of the ecology and evolution of communities in general. Throughout, we emphasise how attributes of islands combine to provide unusual research opportunities, the implications of which stretch far beyond islands. Molecular tools and increasing data acquisition now permit re‐assessment of some fundamental issues that interested MacArthur and Wilson. These include the formation of ecological networks, species abundance distributions, and the contribution of evolution to community assembly. We also extend our prospects to other fields of ecology and evolution – understanding ecosystem functioning, speciation and diversification – frequently employing assets of oceanic islands in inferring the geographic area within which evolution has occurred, and potential barriers to gene flow. Although island‐based theory is continually being enriched, incorporating non‐equilibrium dynamics is identified as a major challenge for the future.

Journal ArticleDOI
20 Jul 2015
TL;DR: This article provides an overview of the recent academic literature devoted to the applications of Hawkes processes in finance, reviewing their main empirical applications to many different problems in high-frequency finance.
Abstract: In this paper we propose an overview of the recent academic literature devoted to the applications of Hawkes processes in finance. Hawkes processes constitute a particular class of multivariate point processes that has become very popular in empirical high-frequency finance over the last decade. After a reminder of the main definitions and properties that characterize Hawkes processes, we review their main empirical applications to many different problems in high-frequency finance. Because of their great flexibility and versatility, we show that they have been successfully involved in issues as diverse as estimating the volatility at the level of transaction data, estimating market stability, accounting for systemic risk contagion, devising optimal execution strategies and capturing the dynamics of the full order book.
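
For readers new to these processes, a univariate Hawkes process with exponential kernel has conditional intensity λ(t) = μ + Σ_{t_i<t} α·e^(−β(t−t_i)); the following minimal Python sketch simulates one by Ogata's thinning. The parameter values are illustrative assumptions.

```python
# Minimal sketch of a univariate Hawkes process with exponential kernel,
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
# simulated by Ogata's thinning; parameter values are illustrative assumptions.
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # Between events the intensity decays, so its current value is a
        # valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# alpha/beta < 1 keeps the process stationary (each event triggers < 1 child).
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)
print(f"{times.size} events on [0, 100]")
```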

Journal ArticleDOI
TL;DR: Systematic gaps in compliance with data protection principles in accredited health apps call into question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians.
Abstract: Poor information privacy practices have been identified in health apps. Medical app accreditation programs offer a mechanism for assuring the quality of apps; however, little is known about their ability to control information privacy risks. We aimed to assess the extent to which already-certified apps complied with data protection principles mandated by the largest national accreditation program. Cross-sectional, systematic, 6-month assessment of 79 apps certified as clinically safe and trustworthy by the UK NHS Health Apps Library. Protocol-based testing was used to characterize personal information collection, local-device storage and information transmission. Observed information handling practices were compared against privacy policy commitments. The study revealed that 89 % (n = 70/79) of apps transmitted information to online services. No app encrypted personal information stored locally. Furthermore, 66 % (23/35) of apps sending identifying information over the Internet did not use encryption and 20 % (7/35) did not have a privacy policy. Overall, 67 % (53/79) of apps had some form of privacy policy. No app collected or transmitted information that a policy explicitly stated it would not; however, 78 % (38/49) of information-transmitting apps with a policy did not describe the nature of personal information included in transmissions. Four apps sent both identifying and health information without encryption. Although the study was not designed to examine data handling after transmission to online services, security problems appeared to place users at risk of data theft in two cases. Systematic gaps in compliance with data protection principles in accredited health apps question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians. Accreditation programs should, as a minimum, provide consistent and reliable warnings about possible threats and, ideally, require publishers to rectify vulnerabilities before apps are released.

Journal ArticleDOI
TL;DR: TRIQS (Toolbox for Research on Interacting Quantum Systems) is an open-source computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular strongly correlated electronic systems.

Journal ArticleDOI
K. Abe1, K. Abe2, Hiroaki Aihara1, Hiroaki Aihara2 +278 more (57 institutions)
TL;DR: In this article, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and neutrinos from the J-PARC proton synchrotron is presented.
Abstract: Hyper-Kamiokande will be a next generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of $CP$ asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this paper, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis uses the framework and systematic uncertainties derived from the ongoing T2K experiment. With a total exposure of 7.5 MW $\times$ 10$^7$ sec integrated proton beam power (corresponding to $1.56\times10^{22}$ protons on target with a 30 GeV proton beam) to a $2.5$-degree off-axis neutrino beam, it is expected that the leptonic $CP$ phase $\delta_{CP}$ can be determined to better than 19 degrees for all possible values of $\delta_{CP}$, and $CP$ violation can be established with a statistical significance of more than $3\,\sigma$ ($5\,\sigma$) for $76\%$ ($58\%$) of the $\delta_{CP}$ parameter space. Using both $\nu_e$ appearance and $\nu_\mu$ disappearance data, the expected 1$\sigma$ uncertainty of $\sin^2\theta_{23}$ is 0.015(0.006) for $\sin^2\theta_{23}=0.5(0.45)$.

Journal ArticleDOI
TL;DR: It is shown that the contact time of drops bouncing on a repellent macrotexture takes discrete values when the impact speed is varied, which allows for a quantitative analysis of the reduction of contact time and thus an understanding of how and why macrotextures can control the dynamical properties of bouncing drops.
Abstract: It has been recently shown that the presence of macrotextures on superhydrophobic materials can markedly modify the dynamics of water impacting them, and in particular significantly reduce the contact time of bouncing drops, compared with what is observed on a flat surface. This finding constitutes a significant step in the maximization of water repellency, since it makes it possible to minimize even further the contact between solid and liquid. It also opens a new axis of research on the design of super-structures to induce specific functions such as anti-freezing, liquid fragmentation and/or recomposition, guiding, trapping and so on. Here we show that the contact time of drops bouncing on a repellent macrotexture takes discrete values when varying the impact speed. This allows us to propose a quantitative analysis of the reduction of contact time and thus to understand how and why macrotextures can control the dynamical properties of bouncing drops.

Journal ArticleDOI
TL;DR: These updated versions of the constrained and unconstrained testing environment and its accompanying SIF decoder feature dynamic memory allocation, a modern thread-safe Fortran modular design, a new Matlab interface and a revised installation procedure integrated with GALAHAD.
Abstract: We describe the most recent evolution of our constrained and unconstrained testing environment and its accompanying SIF decoder. Code-named SIFDecode and CUTEst, these updated versions feature dynamic memory allocation, a modern thread-safe Fortran modular design, a new Matlab interface and a revised installation procedure integrated with GALAHAD.

Journal ArticleDOI
TL;DR: The first direct search for lepton-flavour-violating decays of the recently discovered Higgs boson (H) is described in this paper. The search is performed in the H→μτ_e and H→μτ_h channels, where τ_e and τ_h are tau leptons reconstructed in the electronic and hadronic decay channels, respectively.

Journal ArticleDOI
TL;DR: In this paper, the performance of the CMS detector for photon reconstruction and identification in proton-proton collisions at a centre-of-mass energy of 8 TeV at the CERN LHC is described.
Abstract: A description is provided of the performance of the CMS detector for photon reconstruction and identification in proton-proton collisions at a centre-of-mass energy of 8 TeV at the CERN LHC. Details are given on the reconstruction of photons from energy deposits in the electromagnetic calorimeter (ECAL) and the extraction of photon energy estimates. The reconstruction of electron tracks from photons that convert to electrons in the CMS tracker is also described, as is the optimization of the photon energy reconstruction and its accurate modelling in simulation, in the analysis of the Higgs boson decay into two photons. In the barrel section of the ECAL, an energy resolution of about 1% is achieved for unconverted or late-converting photons from H→γγ decays. Different photon identification methods are discussed and their corresponding selection efficiencies in data are compared with those found in simulated events.

Journal ArticleDOI
TL;DR: Transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems are discussed in this paper.
Abstract: We review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum q(T) spectra of Higgs and vector bosons for low q(T), and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDLIB, to parton density fits and parameterizations.

Journal ArticleDOI
TL;DR: The second-order azimuthal anisotropy Fourier harmonics, v2, are obtained in pPb and PbPb collisions over a wide pseudorapidity range from correlations among six or more charged particles; the results support the interpretation of a collective origin for the previously observed long-range (large Δη) correlations in both systems.
Abstract: The second-order azimuthal anisotropy Fourier harmonics, v2, are obtained in pPb and PbPb collisions over a wide pseudorapidity (eta) range based on correlations among six or more charged particles. The pPb data, corresponding to an integrated luminosity of 35 inverse nanobarns, were collected during the 2013 LHC pPb run at a nucleon-nucleon center-of-mass energy of 5.02 TeV by the CMS experiment. A sample of semi-peripheral PbPb collision data at sqrt(s[NN])= 2.76 TeV, corresponding to an integrated luminosity of 2.5 inverse microbarns and covering a similar range of particle multiplicities as the pPb data, is also analyzed for comparison. The six- and eight-particle cumulant and the Lee-Yang zeros methods are used to extract the v2 coefficients, extending previous studies of two- and four-particle correlations. For both the pPb and PbPb systems, the v2 values obtained with correlations among more than four particles are consistent with previously published four-particle results. These data support the interpretation of a collective origin for the previously observed long-range (large Delta[eta]) correlations in both systems. The ratios of v2 values corresponding to correlations including different numbers of particles are compared to theoretical predictions that assume a hydrodynamic behavior of a pPb system dominated by fluctuations in the positions of participant nucleons. These results provide new insights into the multi-particle dynamics of collision systems with a very small overlapping region.
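
For orientation, the v2 estimates from multi-particle correlations referred to above are conventionally obtained from cumulants of azimuthal correlators. The standard relations, quoted here as background from the general cumulant-flow literature rather than reproduced from this paper, read:

```latex
\[
\begin{aligned}
  c_2\{2\} &= \langle\langle 2 \rangle\rangle, &
  v_2\{2\} &= \sqrt{c_2\{2\}},\\
  c_2\{4\} &= \langle\langle 4 \rangle\rangle - 2\,\langle\langle 2 \rangle\rangle^{2}, &
  v_2\{4\} &= \bigl(-c_2\{4\}\bigr)^{1/4},\\
  c_2\{6\} &= \langle\langle 6 \rangle\rangle
              - 9\,\langle\langle 4 \rangle\rangle\langle\langle 2 \rangle\rangle
              + 12\,\langle\langle 2 \rangle\rangle^{3}, &
  v_2\{6\} &= \bigl(c_2\{6\}/4\bigr)^{1/6},
\end{aligned}
\]
```

where ⟨⟨2k⟩⟩ denotes the event-averaged 2k-particle azimuthal correlator.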

Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam +2353 more (181 institutions)
TL;DR: In this paper, a search for a heavy Higgs boson in the H to WW and H to ZZ decay channels is reported, based upon proton-proton collision data samples corresponding to an integrated luminosity of up to 5.1 inverse femtobarns at sqrt(s) = 7 TeV and up to 19.7 inverse femtobarns at sqrt(s) = 8 TeV, recorded by the CMS experiment at the CERN LHC.
Abstract: A search for a heavy Higgs boson in the H to WW and H to ZZ decay channels is reported. The search is based upon proton-proton collision data samples corresponding to an integrated luminosity of up to 5.1 inverse femtobarns at sqrt(s)=7 TeV and up to 19.7 inverse femtobarns at sqrt(s)=8 TeV, recorded by the CMS experiment at the CERN LHC. Several final states of the H to WW and H to ZZ decays are analyzed. The combined upper limit at the 95% confidence level on the product of the cross section and branching fraction excludes a Higgs boson with standard model-like couplings and decays in the range 145 < m[H] < 1000 GeV. We also interpret the results in the context of an electroweak singlet extension of the standard model.

Proceedings ArticleDOI
17 Dec 2015
TL;DR: A real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras is proposed, providing a fast yet accurate approach to incremental stereo directly on distorted images; the framework is evaluated on real-world sequences taken with a 185° fisheye lens.
Abstract: We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view above 180°. This is in contrast to existing direct mono-SLAM approaches like DTAM or LSD-SLAM, which operate on rectified images, in practice limiting the field of view to around 130° diagonally. Not only does this allow us to observe - and reconstruct - a larger portion of the surrounding environment, but it also makes the system more robust to degenerate (rotation-only) movement. The two main contributions are (1) the formulation of direct image alignment for the unified omnidirectional model, and (2) a fast yet accurate approach to incremental stereo directly on distorted images. We evaluate our framework on real-world sequences taken with a 185° fisheye lens, and compare it to a rectified and a piecewise rectified approach.
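
The "unified omnidirectional model" mentioned above is the standard single-viewpoint sphere model; the following minimal Python sketch (not the authors' implementation) shows its projection and inverse, with the intrinsics and the distortion parameter ξ chosen as illustrative assumptions.

```python
# Minimal sketch of the standard unified (sphere) omnidirectional camera model:
# a 3D point is projected via the unit sphere with distortion parameter xi,
# which is what lets a single model cover fields of view beyond 180 degrees.
# Intrinsics and xi below are illustrative assumptions, not the paper's values.
import numpy as np

def project_unified(X, fx, fy, cx, cy, xi):
    """Project a 3D point X = (x, y, z) to pixel coordinates."""
    x, y, z = X
    d = np.linalg.norm(X)       # distance to the point
    denom = z + xi * d          # pinhole projection from a center shifted by xi
    return np.array([fx * x / denom + cx, fy * y / denom + cy])

def unproject_unified(uv, fx, fy, cx, cy, xi):
    """Back-project a pixel to a unit-norm bearing vector (model inverse)."""
    mx = (uv[0] - cx) / fx
    my = (uv[1] - cy) / fy
    r2 = mx * mx + my * my
    # Closed-form intersection of the viewing ray with the unit sphere.
    factor = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    bearing = np.array([factor * mx, factor * my, factor - xi])
    return bearing / np.linalg.norm(bearing)

uv = project_unified(np.array([0.3, -0.1, 1.0]), 300.0, 300.0, 320.0, 240.0, 0.9)
ray = unproject_unified(uv, 300.0, 300.0, 320.0, 240.0, 0.9)  # == X / ||X||
```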

Journal ArticleDOI
TL;DR: In this article, an array of resonating structures (herein termed a "metastructure") buried around sensitive buildings is proposed to control the propagation of seismic waves. The authors focus on the infrasound regime (1-10 Hz), a range of frequencies relevant for the protection of large buildings.

Journal ArticleDOI
TL;DR: In this article, the authors describe the pre-operational analysis and forecasting system developed during the MACC (Monitoring Atmospheric Composition and Climate) and MACC-II (Monitoring Atmospheric Composition and Climate: Interim Implementation) European projects to provide air quality services for the European continent, and analyse its performance as of the end of MACC-II (summer 2014).
Abstract: This paper describes the pre-operational analysis and forecasting system developed during MACC (Monitoring Atmospheric Composition and Climate) and continued in the MACC-II (Monitoring Atmospheric Composition and Climate: Interim Implementation) European projects to provide air quality services for the European continent. This system is based on seven state-of-the-art models developed and run in Europe (CHIMERE, EMEP, EURAD-IM, LOTOS-EUROS, MATCH, MOCAGE and SILAM). These models are used to calculate multi-model ensemble products. The paper gives an overall picture of its status at the end of MACC-II (summer 2014) and analyses the performance of the multi-model ensemble. The MACC-II system provides daily 96 h forecasts with hourly outputs of 10 chemical species/aerosols (O3, NO2, SO2, CO, PM10, PM2.5, NO, NH3, total NMVOCs (non-methane volatile organic compounds) and PAN+PAN precursors) over eight vertical levels from the surface to 5 km height. The hourly analysis at the surface is done a posteriori for the past day using a selection of representative air quality data from European monitoring stations. The performance of the system is assessed daily, weekly and every 3 months (seasonally) through statistical indicators calculated using the available representative air quality data from European monitoring stations. Results for a case study show the ability of the ensemble median to forecast regional ozone pollution events. The seasonal performances of the individual models and of the multi-model ensemble have been monitored since September 2009 for ozone, NO2 and PM10. The statistical indicators for ozone in summer 2014 show that the ensemble median gives on average the best performances compared to the seven models. There is very little degradation of the scores with the forecast day but there is a marked diurnal cycle, similarly to the individual models, that can be related partly to the prescribed diurnal variations of anthropogenic emissions in the models. During summer 2014, the diurnal ozone maximum is underestimated by the ensemble median by about 4 μg m−3 on average. Locally, during the studied ozone episodes, the maxima from the ensemble median are often lower than observations by 30–50 μg m−3. Overall, ozone scores are generally good with average values for the normalised indicators of 0.14 for the modified normalised mean bias and of 0.30 for the fractional gross error. Tests have also shown that the ensemble median is robust to reduction of ensemble size by one, that is, if predictions are unavailable from one model. Scores are also discussed for PM10 for winter 2013–2014. Most models underestimate PM10, leading the ensemble median to a mean bias of −4.5 μg m−3. The ensemble median fractional gross error is larger for PM10 (~ 0.52) than for ozone and the correlation is lower (~ 0.35 for PM10 and ~ 0.54 for ozone). This is related to a larger spread of the seven model scores for PM10 than for ozone linked to different levels of complexity of aerosol representation in the individual models. In parallel, a scientific analysis of the results of the seven models and of the ensemble is also done over the Mediterranean area because of the specificity of its meteorology and emissions. The system is robust in terms of production availability. Major efforts have been made in MACC-II towards the operationalisation of all its components. Foreseen developments and research for improving its performance are discussed in the conclusion.
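
The two normalised scores quoted above have standard definitions in air-quality model evaluation, and the ensemble product is a pointwise median across models. A minimal Python sketch of both, on toy data standing in for the seven-model ensemble and station observations (all values below are illustrative assumptions), is:

```python
# Minimal sketch of the ensemble-median product and the two normalised scores
# quoted above, using their standard air-quality-evaluation definitions:
#   MNMB = (2/N) * sum((f - o) / (f + o)),  FGE = (2/N) * sum(|f - o| / (f + o)).
# The toy arrays standing in for seven model forecasts and station observations
# are illustrative assumptions.
import numpy as np

def ensemble_median(forecasts):
    """Median across models; forecasts has shape (n_models, n_points)."""
    return np.median(forecasts, axis=0)

def mnmb(f, o):
    """Modified normalised mean bias."""
    return 2.0 * np.mean((f - o) / (f + o))

def fge(f, o):
    """Fractional gross error."""
    return 2.0 * np.mean(np.abs(f - o) / (f + o))

rng = np.random.default_rng(1)
obs = rng.uniform(40.0, 120.0, size=200)             # e.g. hourly ozone, ug/m3
models = obs + rng.normal(0.0, 15.0, size=(7, 200))  # seven-member toy ensemble
med = ensemble_median(models)
print(f"MNMB = {mnmb(med, obs):+.3f}, FGE = {fge(med, obs):.3f}")
```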

Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam +2122 more (140 institutions)
TL;DR: In this article, the normalized differential cross section for top quark pair (tt) production is measured in pp collisions at a centre-of-mass energy of 8 TeV at the CERN LHC using the CMS detector in data corresponding to an integrated luminosity of 19.7 fb^(−1).
Abstract: The normalized differential cross section for top quark pair (tt) production is measured in pp collisions at a centre-of-mass energy of 8 TeV at the CERN LHC using the CMS detector in data corresponding to an integrated luminosity of 19.7 fb^(−1). The measurements are performed in the lepton+jets (e/μ+jets) and in the dilepton (e^+e^−, μ^+μ^−, and e^±μ^∓) decay channels. The tt cross section is measured as a function of the kinematic properties of the charged leptons, the jets associated to b quarks, the top quarks, and the tt system. The data are compared with several predictions from perturbative quantum chromodynamics up to approximate next-to-next-to-leading-order precision. No significant deviations are observed relative to the standard model predictions.

Journal ArticleDOI
TL;DR: A mechanism is described through which chirality-sorting optical forces emerge from the interaction with the spin angular momentum of light, a property that the community has recently learned to control with great sophistication using modern nanophotonics.
Abstract: The transverse component of the spin angular momentum of evanescent waves gives rise to lateral optical forces on chiral particles, which have the unusual property of acting in a direction in which there is neither a field gradient nor wave propagation. Because their direction and strength depend on the chiral polarizability of the particle, these forces are chirality-sorting and may offer a mechanism for passive chirality spectroscopy. The absolute strength of the forces also substantially exceeds that of other recently predicted sideways optical forces.