Showing papers by "Carleton University" published in 2012


Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4 +2964 more (200 institutions)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10^-9.

9,282 citations


Journal ArticleDOI
Kaoru Hagiwara, Ken Ichi Hikasa1, Koji Nakamura, Masaharu Tanabashi1, M. Aguilar-Benitez, Claude Amsler2, R. M. Barnett3, P. R. Burchat4, C. D. Carone5, C. Caso6, G. Conforto7, Olav Dahl3, Michael Doser8, Semen Eidelman9, Jonathan L. Feng10, L. K. Gibbons11, M. C. Goodman12, Christoph Grab13, D. E. Groom3, Atul Gurtu14, Atul Gurtu8, K. G. Hayes15, J.J. Hernández-Rey16, K. Honscheid17, Christopher Kolda18, Michelangelo L. Mangano8, D. M. Manley19, Aneesh V. Manohar20, John March-Russell8, Alberto Masoni, Ramon Miquel3, Klaus Mönig, Hitoshi Murayama3, Hitoshi Murayama21, S. Sánchez Navas13, Keith A. Olive22, Luc Pape8, C. Patrignani6, A. Piepke23, Matts Roos24, John Terning25, Nils A. Tornqvist24, T. G. Trippe3, Petr Vogel26, C. G. Wohl3, Ron L. Workman27, W-M. Yao3, B. Armstrong3, P. S. Gee3, K. S. Lugovsky, S. B. Lugovsky, V. S. Lugovsky, Marina Artuso28, D. Asner29, K. S. Babu30, E. L. Barberio8, Marco Battaglia8, H. Bichsel31, O. Biebel32, P. Bloch8, Robert N. Cahn3, Ariella Cattai8, R.S. Chivukula33, R. Cousins34, G. A. Cowan35, Thibault Damour36, K. Desler, R. J. Donahue3, D. A. Edwards, Victor Daniel Elvira37, Jens Erler38, V. V. Ezhela, A Fassò8, W. Fetscher13, Brian D. Fields39, B. Foster40, Daniel Froidevaux8, Masataka Fukugita41, Thomas K. Gaisser42, L. A. Garren37, H J Gerber13, Frederick J. Gilman43, Howard E. Haber44, C. A. Hagmann29, J.L. Hewett4, Ian Hinchliffe3, Craig J. Hogan31, G. Höhler45, P. Igo-Kemenes46, John David Jackson3, Kurtis F Johnson47, D. Karlen48, B. Kayser37, S. R. Klein3, Konrad Kleinknecht49, I.G. Knowles50, P. Kreitz4, Yu V. Kuyanov, R. Landua8, Paul Langacker38, L. S. Littenberg51, Alan D. Martin52, Tatsuya Nakada8, Tatsuya Nakada53, Meenakshi Narain33, Paolo Nason, John A. Peacock54, H. R. Quinn55, Stuart Raby17, Georg G. Raffelt32, E. A. Razuvaev, B. Renk49, L. Rolandi8, Michael T Ronan3, L.J. Rosenberg54, C.T. Sachrajda55, A. I. Sanda56, Subir Sarkar57, Michael Schmitt58, O. Schneider53, Douglas Scott59, W. G. Seligman60, M. H. Shaevitz60, Torbjörn Sjöstrand61, George F. Smoot3, Stefan M Spanier4, H. Spieler3, N. J. C. Spooner62, Mark Srednicki63, Achim Stahl, Todor Stanev42, M. Suzuki3, N. P. Tkachenko, German Valencia64, K. van Bibber29, Manuella Vincter65, D. R. Ward66, Bryan R. Webber66, M R Whalley52, Lincoln Wolfenstein43, J. Womersley37, C. L. Woody51, Oleg Zenin 
Tohoku University1, University of Zurich2, Lawrence Berkeley National Laboratory3, Stanford University4, College of William & Mary5, University of Genoa6, University of Urbino7, CERN8, Budker Institute of Nuclear Physics9, University of California, Irvine10, Cornell University11, Argonne National Laboratory12, ETH Zurich13, Tata Institute of Fundamental Research14, Hillsdale College15, Spanish National Research Council16, Ohio State University17, University of Notre Dame18, Kent State University19, University of California, San Diego20, University of California, Berkeley21, University of Minnesota22, University of Alabama23, University of Helsinki24, Los Alamos National Laboratory25, California Institute of Technology26, George Washington University27, Syracuse University28, Lawrence Livermore National Laboratory29, Oklahoma State University–Stillwater30, University of Washington31, Max Planck Society32, Boston University33, University of California, Los Angeles34, Royal Holloway, University of London35, Université Paris-Saclay36, Fermilab37, University of Pennsylvania38, University of Illinois at Urbana–Champaign39, University of Bristol40, University of Tokyo41, University of Delaware42, Carnegie Mellon University43, University of California, Santa Cruz44, Karlsruhe Institute of Technology45, Heidelberg University46, Florida State University47, Carleton University48, University of Mainz49, University of Edinburgh50, Brookhaven National Laboratory51, Durham University52, University of Lausanne53, Massachusetts Institute of Technology54, University of Southampton55, Nagoya University56, University of Oxford57, Northwestern University58, University of British Columbia59, Columbia University60, Lund University61, University of Sheffield62, University of California, Santa Barbara63, Iowa State University64, University of Alberta65, University of Cambridge66
TL;DR: The Particle Data Group's biennial review as mentioned in this paper summarizes much of particle physics, using data from previous editions, plus 2658 new measurements from 644 papers, and lists, evaluates, and averages measured properties of gauge bosons, leptons, quarks, mesons, and baryons.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 112 reviews are many that are new or heavily revised including those on Heavy-Quark and Soft-Collinear Effective Theory, Neutrino Cross Section Measurements, Monte Carlo Event Generators, Lattice QCD, Heavy Quarkonium Spectroscopy, Top Quark, Dark Matter, V_cb & V_ub, Quantum Chromodynamics, High-Energy Collider Parameters, Astrophysical Constants, Cosmological Parameters, and Dark Matter. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.

4,465 citations


Journal ArticleDOI
TL;DR: This review uses knowledge gained from human‐modified landscapes to suggest eight hypotheses, which it hopes will encourage more systematic research on the role of landscape composition and configuration in determining the structure of ecological communities, ecosystem functioning and services.
Abstract: Understanding how landscape characteristics affect biodiversity patterns and ecological processes at local and landscape scales is critical for mitigating effects of global environmental change. In this review, we use knowledge gained from human-modified landscapes to suggest eight hypotheses, which we hope will encourage more systematic research on the role of landscape composition and configuration in determining the structure of ecological communities, ecosystem functioning and services.

1,513 citations


Journal ArticleDOI
TL;DR: It is argued that an “applied forward reasoning” approach is better suited for social scientists seeking to address climate change, which is characterized as a “super wicked” problem comprising four key features: time is running out, those who cause the problem also seek to provide a solution, the central authority needed to address it is weak or non-existent, and policy responses discount the future irrationally.
Abstract: Most policy-relevant work on climate change in the social sciences either analyzes costs and benefits of particular policy options against important but often narrow sets of objectives or attempts to explain past successes or failures. We argue that an “applied forward reasoning” approach is better suited for social scientists seeking to address climate change, which we characterize as a “super wicked” problem comprising four key features: time is running out; those who cause the problem also seek to provide a solution; the central authority needed to address it is weak or non-existent; and, partly as a result, policy responses discount the future irrationally. These four features combine to create a policy-making “tragedy” where traditional analytical techniques are ill equipped to identify solutions, even when it is well recognized that actions must take place soon to avoid catastrophic future impacts. To overcome this tragedy, greater attention must be given to the generation of path-dependent policy interventions that can “constrain our future collective selves.” Three diagnostic questions result that orient policy analysis toward understanding how to trigger sticky interventions that, through progressive incremental trajectories, entrench support over time while expanding the populations they cover. Drawing especially from the literature on path dependency, but inverting it to develop policy responses going forward, we illustrate the plausibility of our framework for identifying new areas of research and new ways to think about policy interventions to address super wicked problems.

1,013 citations


Proceedings ArticleDOI
20 May 2012
TL;DR: It is concluded that many academic proposals to replace text passwords for general-purpose user authentication on the web have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints.
Abstract: We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.
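The paper's framework rates each scheme against a fixed list of benefits. A minimal sketch of how such a comparison matrix might be tallied; the scheme names, benefit names, and ratings below are illustrative placeholders, not the paper's actual data:

```python
# Sketch of a benefits-matrix comparison in the spirit of the paper's
# usability/deployability/security framework. Schemes, benefits and
# ratings are illustrative placeholders, not the paper's ratings.

# 2 = offers the benefit, 1 = almost offers it, 0 = does not offer it
BENEFITS = ["memorywise-effortless", "nothing-to-carry",
            "resilient-to-phishing", "mature"]

SCHEMES = {
    "legacy passwords": [0, 2, 0, 2],
    "hardware token":   [2, 0, 1, 2],
    "federated login":  [1, 2, 0, 1],
}

def retains_all_password_benefits(scheme, baseline="legacy passwords"):
    """True if `scheme` keeps every benefit the baseline already offers."""
    base = SCHEMES[baseline]
    cand = SCHEMES[scheme]
    return all(c >= b for b, c in zip(base, cand))

for name in SCHEMES:
    total = sum(SCHEMES[name])
    keeps = retains_all_password_benefits(name)
    print(f"{name:18s} score={total:2d} retains-password-benefits={keeps}")
```

Even in this toy matrix, no alternative dominates the baseline, which mirrors the paper's observation that no known scheme retains the full benefit set of legacy passwords.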

914 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify seven generic policy-relevant principles for enhancing the resilience of desired ES in the face of disturbance and ongoing change in social-ecological systems (SES).
Abstract: Enhancing the resilience of ecosystem services (ES) that underpin human well-being is critical for meeting current and future societal needs, and requires specific governance and management policies. Using the literature, we identify seven generic policy-relevant principles for enhancing the resilience of desired ES in the face of disturbance and ongoing change in social-ecological systems (SES). These principles are (P1) maintain diversity and redundancy, (P2) manage connectivity, (P3) manage slow variables and feedbacks, (P4) foster an understanding of SES as complex adaptive systems (CAS), (P5) encourage learning and experimentation, (P6) broaden participation, and (P7) promote polycentric governance systems. We briefly define each principle, review how and when it enhances the resilience of ES, and conclude with major research gaps. In practice, the principles often co-occur and are highly interdependent. Key future needs are to better understand these interdependencies and to operationalize and apply...

872 citations


Journal ArticleDOI
TL;DR: This article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages, and reviews usability requirements for knowledge-based authentication as they apply to graphical passwords.
Abstract: Starting around 1999, a great many graphical password schemes have been proposed as alternatives to text-based password authentication. We provide a comprehensive overview of published research in the area, covering both usability and security aspects as well as system evaluation. The article first catalogues existing approaches, highlighting novel features of selected schemes and identifying key usability or security advantages. We then review usability requirements for knowledge-based authentication as they apply to graphical passwords, identify security threats that such systems must address and review known attacks, discuss methodological issues related to empirical evaluation, and identify areas for further research and improved methodology.

635 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, S. Abdel Khalek +3081 more (197 institutions)
TL;DR: A combined search for the Standard Model Higgs boson with the ATLAS experiment at the LHC, using datasets corresponding to integrated luminosities from 1.04 fb(-1) to 4.9 fb(-1) of pp collisions, is described in this paper.

572 citations


Proceedings ArticleDOI
29 Oct 2012
TL;DR: This paper proposes an algorithm, called HUI-Miner (High Utility Itemset Miner), which can efficiently mine high utility itemsets from utility-lists constructed from the mined database, and compares it with state-of-the-art algorithms on various databases.
Abstract: High utility itemsets refer to sets of items with high utility, such as profit, in a database, and efficient mining of high utility itemsets plays a crucial role in many real-life applications and is an important research issue in the data mining area. To identify high utility itemsets, most existing algorithms first generate candidate itemsets by overestimating their utilities, and subsequently compute the exact utilities of these candidates. These algorithms incur the problem that a very large number of candidates are generated, but most of the candidates are found not to have high utility after their exact utilities are computed. In this paper, we propose an algorithm, called HUI-Miner (High Utility Itemset Miner), for high utility itemset mining. HUI-Miner uses a novel structure, called utility-list, to store both the utility information about an itemset and the heuristic information for pruning the search space of HUI-Miner. By avoiding the costly generation and utility computation of numerous candidate itemsets, HUI-Miner can efficiently mine high utility itemsets from the utility-lists constructed from a mined database. We compared HUI-Miner with the state-of-the-art algorithms on various databases, and experimental results show that HUI-Miner outperforms these algorithms in terms of both running time and memory consumption.
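A minimal sketch of the utility-list idea (simplified, not the authors' implementation): for each transaction containing an itemset, the list stores the transaction id, the itemset's own utility there (iutil), and the utility of the items remaining after it in a fixed processing order (rutil). The toy database, item order, and threshold below are illustrative:

```python
# Simplified utility-list sketch. Each transaction maps items to
# utilities (e.g., profit). sum(iutil) >= minutil means high utility;
# sum(iutil) + sum(rutil) < minutil means the itemset and all its
# extensions can be pruned without generating candidates.

DB = [
    {"a": 5, "b": 10, "c": 1},   # transaction 0
    {"a": 10, "c": 6},           # transaction 1
    {"b": 4, "c": 3},            # transaction 2
]
ORDER = ["a", "b", "c"]          # fixed processing order over items
MINUTIL = 15

def utility_list(item):
    ul = []
    for tid, tx in enumerate(DB):
        if item not in tx:
            continue
        later = ORDER[ORDER.index(item) + 1:]
        rutil = sum(tx[j] for j in later if j in tx)  # remaining utility
        ul.append((tid, tx[item], rutil))
    return ul

for item in ORDER:
    ul = utility_list(item)
    iutil = sum(e[1] for e in ul)
    rutil = sum(e[2] for e in ul)
    status = ("high-utility" if iutil >= MINUTIL else
              "prune extensions" if iutil + rutil < MINUTIL else
              "extend")
    print(item, ul, status)
```

Summing the iutils answers whether an itemset is high utility; summing iutils plus rutils bounds every extension, which is what lets the algorithm prune the search space instead of enumerating candidates.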

539 citations


Journal ArticleDOI
Georges Aad, B. Abbott1, Jalal Abdallah2, A. A. Abdelalim3 +3013 more (174 institutions)
TL;DR: In this article, detailed measurements of the electron performance of the ATLAS detector at the LHC were reported, using decays of the Z, W and J/psi particles.
Abstract: Detailed measurements of the electron performance of the ATLAS detector at the LHC are reported, using decays of the Z, W and J/psi particles. Data collected in 2010 at √s = 7 TeV are used, corresponding to an integrated luminosity of almost 40 pb(-1). The inter-alignment of the inner detector and the electromagnetic calorimeter, the determination of the electron energy scale and resolution, and the performance in terms of response uniformity and linearity are discussed. The electron identification, reconstruction and trigger efficiencies, as well as the charge misidentification probability, are also presented.

505 citations


Journal ArticleDOI
Daniele S. M. Alves1, Nima Arkani-Hamed, S. Arora2, Yang Bai1, Matthew Baumgart3, Joshua Berger4, Matthew R. Buckley5, Bart Butler1, Spencer Chang6, Spencer Chang7, Hsin-Chia Cheng7, Clifford Cheung8, R. Sekhar Chivukula9, Won Sang Cho10, R. Cotta1, Mariarosaria D'Alfonso11, Sonia El Hedri1, Rouven Essig12, Jared A. Evans7, Liam Fitzpatrick13, Patrick J. Fox5, Roberto Franceschini14, Ayres Freitas15, James S. Gainer16, James S. Gainer17, Yuri Gershtein2, R. N.C. Gray2, Thomas Gregoire18, Ben Gripaios19, J.F. Gunion7, Tao Han20, Andy Haas1, P. Hansson1, JoAnne L. Hewett1, Dmitry Hits2, Jay Hubisz21, Eder Izaguirre1, Jared Kaplan1, Emanuel Katz13, Can Kilic2, Hyung Do Kim22, Ryuichiro Kitano23, Sue Ann Koay11, Pyungwon Ko24, David Krohn25, Eric Kuflik26, Ian M. Lewis20, Mariangela Lisanti27, Tao Liu11, Zhen Liu20, Ran Lu26, Markus A. Luty7, Patrick Meade12, David E. Morrissey28, Stephen Mrenna5, Mihoko M. Nojiri, Takemichi Okui29, Sanjay Padhi30, Michele Papucci31, Michael Park2, Myeonghun Park32, Maxim Perelstein4, Michael E. Peskin1, Daniel J. Phalen7, Keith Rehermann33, Vikram Rentala34, Vikram Rentala35, Tuhin S. Roy36, Joshua T. Ruderman27, Veronica Sanz37, Martin Schmaltz13, S. Schnetzer2, Philip Schuster38, Pedro Schwaller16, Pedro Schwaller39, Pedro Schwaller40, Matthew D. Schwartz25, Ariel Schwartzman1, Jing Shao21, J. Shelton41, David Shih2, Jing Shu10, Daniel Silverstein1, Elizabeth H. Simmons9, Sunil Somalwar2, Michael Spannowsky6, Christian Spethmann13, Matthew J. Strassler2, Shufang Su35, Shufang Su34, Tim M. P. Tait34, Brooks Thomas42, Scott Thomas2, Natalia Toro38, Tomer Volansky8, Jay G. Wacker1, Wolfgang Waltenberger43, Itay Yavin44, Felix Yu34, Yue Zhao2, Kathryn M. Zurek26 
TL;DR: A collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results is presented in this paper.
Abstract: This document proposes a collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb(-1) of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

Journal ArticleDOI
TL;DR: This paper presents a hierarchical classification of TVGs, where each class corresponds to a significant property examined in the distributed computing literature; it examines how TVGs can be used to study the evolution of network properties and proposes different techniques, depending on whether the indicators for these properties are atemporal or temporal.
Abstract: The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs (TVGs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.
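A toy sketch of the TVG idea under stated assumptions (edges carry presence intervals, hops are instantaneous, and connectivity is defined over time-respecting journeys); the class and method names are hypothetical, not the paper's formalism:

```python
# Toy time-varying graph: edges exist only during given intervals, and
# reachability is defined over journeys, i.e. paths whose edges are
# traversed in non-decreasing time. Illustrative only.

from collections import defaultdict

class TVG:
    def __init__(self):
        # (u, v) -> list of (start, end) presence intervals
        self.edges = defaultdict(list)

    def add_edge(self, u, v, start, end):
        self.edges[(u, v)].append((start, end))
        self.edges[(v, u)].append((start, end))

    def reachable(self, src, dst, t0=0):
        """Earliest-arrival search: can dst be reached from src by a
        journey starting at time t0? Assumes instantaneous hops."""
        best = {src: t0}
        frontier = [src]
        while frontier:
            u = frontier.pop()
            for (a, b), intervals in self.edges.items():
                if a != u:
                    continue
                for (s, e) in intervals:
                    t = max(best[u], s)     # wait for the edge to appear
                    if t <= e and t < best.get(b, float("inf")):
                        best[b] = t
                        frontier.append(b)
        return dst in best

g = TVG()
g.add_edge("x", "y", 1, 2)       # edge present during [1, 2]
g.add_edge("y", "z", 3, 4)       # edge present during [3, 4]
print(g.reachable("x", "z"))     # True: x->y at t=1, then y->z at t=3
print(g.reachable("z", "x"))     # False: journeys must respect time order
```

The asymmetry of the two queries illustrates a defining temporal property: reachability over a TVG is not symmetric, because journeys must traverse edges in non-decreasing time.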

Journal ArticleDOI
TL;DR: In this paper, the authors examine the associations among different quality management (QM) practices and investigate which QM practices directly or indirectly relate to five types of innovation: radical product, radical process, incremental product, incremental process, and administrative innovation.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, J. Abdallah3, S. Abdel Khalek4 +3073 more (193 institutions)
TL;DR: In this paper, a Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>.
Abstract: Differential measurements of charged particle azimuthal anisotropy are presented for lead-lead collisions at √s_NN = 2.76 TeV with the ATLAS detector at the LHC, based on an integrated luminosity of approximately 8 μb(-1). This anisotropy is characterized via a Fourier expansion of the distribution of charged particles in azimuthal angle relative to the reaction plane, with the coefficients v(n) denoting the magnitude of the anisotropy. Significant v(2)-v(6) values are obtained as a function of transverse momentum (0.5 < pT < 20 GeV), pseudorapidity (|eta| < 2.5) and centrality. The v(n) values for n >= 3 are found to vary weakly with both eta and centrality, and their pT dependencies are found to follow an approximate scaling relation, v(n)^(1/n)(pT) ∝ v(2)^(1/2)(pT), except in the top 5% most central collisions. A Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>. For pairs of charged particles with a large pseudorapidity gap (|Delta eta| = |eta(a) - eta(b)| > 2) and one particle with pT < 3 GeV, the v(2,2)-v(6,6) values are found to factorize as v(n,n)(pT(a), pT(b)) ≈ v(n)(pT(a)) v(n)(pT(b)) in central and midcentral events. Such factorization suggests that these values of v(2,2)-v(6,6) are primarily attributable to the response of the created matter to fluctuations in the geometry of the initial state. A detailed study shows that the v(1,1)(pT(a), pT(b)) data are consistent with the combined contributions from a rapidity-even v(1) and global momentum conservation. A two-component fit is used to extract the v(1) contribution. The extracted v(1) is observed to cross zero at pT ≈ 1.0 GeV, reaches a maximum at 4-5 GeV with a value comparable to that for v(3), and decreases at higher pT.
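Reconstructed in standard notation, the two definitions the abstract relies on are the single-particle Fourier expansion and the two-particle coefficients with their factorization test:

```latex
\frac{dN}{d\phi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n(\phi - \Phi_n)\right],
\qquad
v_{n,n} = \langle \cos n\,\Delta\phi \rangle,
\qquad
v_{n,n}(p_T^{a}, p_T^{b}) \approx v_n(p_T^{a})\, v_n(p_T^{b})
```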

Journal ArticleDOI
TL;DR: The charge of Task Group 186 (TG-186) is to provide guidance for early adopters of model-based dose calculation algorithms (MBDCAs) for brachytherapy (BT) dose calculations to ensure practice uniformity, with explicit recommendations based on the current state of knowledge.
Abstract: The charge of Task Group 186 (TG-186) is to provide guidance for early adopters of model-based dose calculation algorithms (MBDCAs) for brachytherapy (BT) dose calculations to ensure practice uniformity. Contrary to external beam radiotherapy, heterogeneity correction algorithms have only recently been made available to the BT community. Yet, BT dose calculation accuracy is highly dependent on scatter conditions and photoelectric effect cross-sections relative to water. In specific situations, differences between the current water-based BT dose calculation formalism (TG-43) and MBDCAs can lead to differences in calculated doses exceeding a factor of 10. MBDCAs raise three major issues that are not addressed by current guidance documents: (1) MBDCA calculated doses are sensitive to the dose specification medium, resulting in energy-dependent differences between dose calculated to water in a homogeneous water geometry (TG-43), dose calculated to the local medium in the heterogeneous medium, and the intermediate scenario of dose calculated to a small volume of water in the heterogeneous medium. (2) MBDCA doses are sensitive to voxel-by-voxel interaction cross sections. Neither conventional single-energy CT nor ICRU∕ICRP tissue composition compilations provide useful guidance for the task of assigning interaction cross sections to each voxel. (3) Since each patient-source-applicator combination is unique, having reference data for each possible combination to benchmark MBDCAs is an impractical strategy. Hence, a new commissioning process is required. TG-186 addresses in detail the above issues through the literature review and provides explicit recommendations based on the current state of knowledge. TG-43-based dose prescription and dose calculation remain in effect, with MBDCA dose reporting performed in parallel when available. In using MBDCAs, it is recommended that the radiation transport should be performed in the heterogeneous medium and, at minimum, the dose to the local medium be reported along with the TG-43 calculated doses. Assignments of voxel-by-voxel cross sections represent a particular challenge. Electron density information is readily extracted from CT imaging, but cannot be used to distinguish between different materials having the same density. Therefore, a recommendation is made to use a number of standardized materials to maintain uniformity across institutions. Sensitivity analysis shows that this recommendation offers increased accuracy over TG-43. MBDCA commissioning will share commonalities with current TG-43-based systems, but in addition there will be algorithm-specific tasks. Two levels of commissioning are recommended: reproducing TG-43 dose parameters and testing the advanced capabilities of MBDCAs. For validation of heterogeneity and scatter conditions, MBDCAs should mimic the 3D dose distributions from reference virtual geometries. Potential changes in BT dose prescriptions and MBDCA limitations are discussed. When data required for full MBDCA implementation are insufficient, interim recommendations are made and potential areas of research are identified. Application of TG-186 guidance should retain practice uniformity in transitioning from the TG-43 to the MBDCA approach.
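A minimal sketch of the dose-specification issue the report raises, assuming the large-cavity conversion via the ratio of mass energy-absorption coefficients applies; the coefficient values below are placeholders, not reference data:

```python
# Sketch of converting dose-to-medium (D_m,m) to dose-to-water-in-medium
# (D_w,m) under the large-cavity approximation discussed in the TG-186
# context: D_w,m = D_m,m * (mu_en/rho)_water / (mu_en/rho)_medium.
# Coefficient values are placeholders, not reference data.

MU_EN_OVER_RHO = {          # mass energy-absorption coefficient, cm^2/g
    "water": 0.0330,        # placeholder value at some photon energy
    "bone":  0.0520,        # placeholder value
}

def dose_water_in_medium(dose_medium_gy, medium):
    """Rescale a dose-to-medium value into dose to a small water volume."""
    ratio = MU_EN_OVER_RHO["water"] / MU_EN_OVER_RHO[medium]
    return dose_medium_gy * ratio

print(dose_water_in_medium(2.0, "bone"))  # Gy, illustrative only
```

The energy dependence of these coefficient ratios is exactly why the report stresses that MBDCA doses are sensitive to the dose specification medium.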

Journal ArticleDOI
Georges Aad1, Georges Aad2, Brad Abbott1, Brad Abbott3 +5592 more (189 institutions)
TL;DR: The ATLAS trigger system as discussed by the authors selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy.
Abstract: Proton-proton collisions at √s = 7 TeV and heavy ion collisions at √s_NN = 2.76 TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch crossing rate of 40 MHz and the ATLAS trigger system is designed to record approximately 200 of these per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010 and the performance of the trigger system components and selections based on the 2010 collision data are shown. A brief outline of plans for the trigger system in 2011 is presented.

Journal ArticleDOI
16 Mar 2012 - Science
TL;DR: The United Nations conference in Rio de Janeiro in June is an important opportunity to improve the institutional framework for sustainable development, which requires fundamental reorientation and restructuring of national and international institutions toward more effective Earth system governance and planetary stewardship.
Abstract: Science assessments indicate that human activities are moving several of Earth's sub-systems outside the range of natural variability typical for the previous 500,000 years (1, 2). Human societies must now change course and steer away from critical tipping points in the Earth system that might lead to rapid and irreversible change (3). This requires fundamental reorientation and restructuring of national and international institutions toward more effective Earth system governance and planetary stewardship.

Journal ArticleDOI
TL;DR: The technical challenges of aptamer development for small molecule targets, as well as the opportunities that exist for their application in biosensing and chemical biology are highlighted.
Abstract: Aptamers are single-stranded oligonucleotides that bind to targets with high affinity and selectivity. Their use as molecular recognition elements has emerged as a viable approach for biosensing, diagnostics, and therapeutics. Despite this potential, relatively few aptamers exist that bind to small molecules. Small molecules are important targets for investigation due to their diverse biological functions as well as their clinical and commercial uses. Novel, effective molecular recognition probes for these compounds are therefore of great interest. This paper will highlight the technical challenges of aptamer development for small molecule targets, as well as the opportunities that exist for their application in biosensing and chemical biology.

Journal ArticleDOI
TL;DR: This work sets a lower limit on the half-life of the neutrinoless double-beta decay, T1/2(0νββ)(136Xe) > 1.6×10^25 yr (90% C.L.), corresponding to effective Majorana masses of less than 140-380 meV, depending on the matrix element calculation.
Abstract: Several properties of neutrinos, such as their absolute mass, their possible Majorana nature or the mechanisms that lead to small neutrino masses, are still unknown. The EXO-200 experiment is trying to answer some of these questions by searching for the hypothetical neutrinoless double beta decay of the isotope 136Xe. This thesis describes an analysis of two years of detector data, which yields a lower limit on the half-life of neutrinoless double beta decay of 136Xe of 1.1×10^25 years.
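For reference, the standard relation used to translate such a half-life limit into the effective Majorana mass range quoted in the TL;DR (G is the phase-space factor and M the nuclear matrix element, whose calculation drives the 140-380 meV spread):

```latex
\left(T_{1/2}^{0\nu\beta\beta}\right)^{-1}
  = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,
    \frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}}
```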

Journal ArticleDOI
TL;DR: In this paper, living labs are networks that can help companies create innovations that have a superior match with user needs and can be upscaled promptly to the global market, and this knowledge will help them to identify which actor drives the innovation, to anticipate likely outcomes, and to decide what kind of role they should play while "living labbing".
Abstract: Living labs bring experimentation out of companies' R&D departments into real-life environments. By living labs, we mean reconstructing the interaction space: any space, anywhere, suitable for collaborative design, the application of knowledge for empowerment, and the uplift and development of people and communities through the use of innovation. Living labs are networks that can help companies create innovations that have a superior match with user needs and can be upscaled promptly to the global market. This knowledge will help companies to identify which actor drives the innovation, to anticipate likely outcomes, and to decide what kind of role they should play while "living labbing".

Journal ArticleDOI
TL;DR: The chronic mild stress paradigm evokes an array of neurobiological changes that mirror those seen in depressive disorders and may be a suitable tool to investigate novel systems that could be disturbed in depression, and thus aid in the development of novel targets for the treatment of depression.

Journal ArticleDOI
TL;DR: The authors subject seven structural DSGE models, all used heavily by policymaking institutions, to discretionary fiscal stimulus shocks using seven different fiscal instruments, and compare the results to those of two prominent academic DSGE models.
Abstract: The paper subjects seven structural DSGE models, all used heavily by policymaking institutions, to discretionary fiscal stimulus shocks using seven different fiscal instruments, and compares the results to those of two prominent academic DSGE models. There is considerable agreement across models on both the absolute and relative sizes of different types of fiscal multipliers. The size of many multipliers is large, particularly for spending and targeted transfers. Fiscal policy is most effective if it has moderate persistence and if monetary policy is accommodative. Permanently higher spending or deficits imply significantly lower initial multipliers. (JEL E12, E13, E52, E62)
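A hedged sketch of the conventions usually behind such numbers (the exact definitions vary across the models compared): the impact multiplier divides the initial output response by the initial fiscal impulse, and a present-value multiplier discounts both paths with a factor β.

```latex
m_{\text{impact}} = \frac{\Delta Y_0}{\Delta G_0},
\qquad
m_{\text{PV}}(T) = \frac{\sum_{t=0}^{T}\beta^{t}\,\Delta Y_t}{\sum_{t=0}^{T}\beta^{t}\,\Delta G_t}
```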

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated simulated, daily average gross primary productivity (GPP) from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States and Canada.
Abstract: Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States and Canada. None of the models in this study match estimated GPP within observed uncertainty. On average, models overestimate GPP in winter, spring, and fall, and underestimate GPP in summer. Models overpredicted GPP under dry conditions and for temperatures below 0°C. Improvements in simulated soil moisture and ecosystem response to drought or humidity stress will improve simulated GPP under dry conditions. Adding a low-temperature response to shut down GPP for temperatures below 0°C will reduce the positive bias in winter, spring, and fall and improve simulated phenology. The negative bias in summer and poor overall performance resulted from mismatches between simulated and observed light use efficiency (LUE). Improving simulated GPP requires better leaf-to-canopy scaling and better values of model parameters that control the maximum potential GPP, such as εmax (LUE), Vcmax (unstressed Rubisco catalytic capacity) or Jmax (the maximum electron transport rate).
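A toy light-use-efficiency model of the kind the recommendations point at (parameter values and stress functions are hypothetical, not from any of the 26 evaluated models); it shows where εmax enters and how a below-0°C shutdown removes the cold-season bias:

```python
# Toy LUE-style GPP model: GPP = eps_max * APAR * f(T) * f(moisture).
# All parameter values and stress functions are illustrative placeholders.

def f_temperature(t_celsius):
    """Shut GPP down below 0 C, as the paper recommends; ramp to 1 by 20 C."""
    if t_celsius <= 0.0:
        return 0.0
    return min(t_celsius / 20.0, 1.0)

def f_moisture(soil_wetness):
    """Linear down-regulation under dry conditions (input in 0..1)."""
    return max(0.0, min(soil_wetness, 1.0))

def gpp(apar_mj, t_celsius, soil_wetness, eps_max=1.8):
    """GPP in gC/m2/day; eps_max in gC per MJ APAR (placeholder value)."""
    return eps_max * apar_mj * f_temperature(t_celsius) * f_moisture(soil_wetness)

print(gpp(apar_mj=8.0, t_celsius=15.0, soil_wetness=0.6))   # growing-season day
print(gpp(apar_mj=3.0, t_celsius=-5.0, soil_wetness=0.9))   # winter day: 0.0
```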

Journal ArticleDOI
TL;DR: The authors argue that these exclusions and orientations lead scholars to systematically overlook the immense importance of resource extraction and shipping as human dimensions of climatic change in the Canadian Arctic, and they examine the implications of such oversights.
Abstract: Over the past decade research examining the human dimensions of climatic change in the Arctic has expanded significantly and has become the dominant framework through which the relations between northern peoples and climatic change are understood by scholars, policy makers, political leaders, and the media. This paper critically examines the assumptions, exclusions, and orientations that characterize this broad literature, and suggests revising and expanding the terms upon which it is carried out. It focuses in particular on the exclusion of colonialism from the study of human vulnerability and adaptation to climatic change, the framing of Indigenous peoples and communities in terms of the local and the traditional, and the ways in which efforts to improve the lives of northern Indigenous peoples risk perpetuating colonial relations. The paper argues that these exclusions and orientations lead scholars to systematically overlook the immense importance of resource extraction and shipping as human dimensions of climatic change in the Canadian Arctic, and it examines the implications of such oversights.

Journal ArticleDOI
TL;DR: Most economists and academics support the notion that entrepreneurship is becoming a crucial factor in the development and well-being of societies, whether entrepreneurial activities are practiced in factor-driven, efficiency-driven, or innovation-driven economies.
Abstract: Most economists and academics support the notion that entrepreneurship is becoming a crucial factor in the development and well-being of societies. Whether the entrepreneurial activities are practiced in factor-driven, efficiency-driven, or innovation-driven economies (Porter et al., 2002; tinyurl.com/7vwutgr), the ultimate results continue to exhibit: i) lower unemployment rates; ii) increased tendency to adopt innovation; and iii) accelerated structural changes in the economy. Entrepreneurship offers new competition, and as such promotes improved productivity and healthy economic competitiveness (UNCTAD, 2004; tinyurl.com/d3xkdj4).

Journal ArticleDOI
TL;DR: This is the first study to generate testable hypotheses concerning the mechanisms underlying the scale at which populations respond to the landscape, and it predicts how species traits influence the scale of effect.
Abstract: The spatial extent at which landscape structure best predicts population response, called the scale of effect, varies across species. An ability to predict the scale of effect of a landscape using species traits would make landscape study design more efficient and would enable landscape managers to plan at the appropriate scale. We used an individual based simulation model to predict how species traits influence the scale of effect. Specifically, we tested the effects of dispersal distance, reproductive rate, and informed movement behavior on the radius at which percent habitat cover best predicts population abundance in a focal area. Scale of effect for species with random movement behavior was compared to scale of effect for species with three (cumulative) levels of information use during dispersal: habitat based settlement, conspecific density based settlement, and gap-avoidance during movement. Consistent with a common belief among researchers, dispersal distance had a strong, positive influence on scale of effect. A general guideline for empiricists is to expect the radius of a landscape to be 4–9 times the median dispersal distance or 0.3–0.5 times the maximum dispersal distance of a species. Informed dispersal led to greater increases in population size than did increased reproductive rate. Similarly, informed dispersal led to more strongly decreased scales of effect than did reproductive rate. Most notably, gap-avoidance resulted in scales that were 0.2–0.5 times those of non-avoidant species. This is the first study to generate testable hypotheses concerning the mechanisms underlying the scale at which populations respond to the landscape.
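The guideline translates directly into a back-of-envelope helper (a sketch using the multiplier ranges quoted in the abstract; the function name and units are illustrative):

```python
# Back-of-envelope use of the paper's guideline: the landscape radius at
# which habitat cover best predicts abundance is expected to be 4-9x the
# median dispersal distance, or 0.3-0.5x the maximum dispersal distance.

def scale_of_effect(median_dispersal_m=None, max_dispersal_m=None):
    """Return (low, high) expected scale-of-effect radii in metres."""
    if median_dispersal_m is not None:
        return 4 * median_dispersal_m, 9 * median_dispersal_m
    if max_dispersal_m is not None:
        return 0.3 * max_dispersal_m, 0.5 * max_dispersal_m
    raise ValueError("provide a median or maximum dispersal distance")

print(scale_of_effect(median_dispersal_m=250))   # (1000, 2250)
print(scale_of_effect(max_dispersal_m=5000))     # (1500.0, 2500.0)
```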

Journal ArticleDOI
TL;DR: This paper formulates the energy-efficient resource allocation problem in heterogeneous cognitive radio networks with femtocells as a Stackelberg game and proposes a gradient-based iteration algorithm to obtain the Stackelberg equilibrium solution.
Abstract: Both cognitive radio and femtocell have been considered as promising techniques in wireless networks. However, most previous work focuses on spectrum sharing and interference avoidance, and the energy efficiency aspect is largely ignored. In this paper, we study the energy efficiency aspect of spectrum sharing and power allocation in heterogeneous cognitive radio networks with femtocells. To fully exploit the cognitive capability, we consider a wireless network architecture in which both the macrocell and the femtocell have the cognitive capability. We formulate the energy-efficient resource allocation problem in heterogeneous cognitive radio networks with femtocells as a Stackelberg game. A gradient-based iteration algorithm is proposed to obtain the Stackelberg equilibrium solution to the energy-efficient resource allocation problem. Simulation results demonstrate that the proposed iteration algorithm obtains the Stackelberg equilibrium and that energy efficiency can be improved significantly in the proposed scheme.
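A toy sketch of the leader-follower structure (not the authors' algorithm or utilities): a macrocell leader prices the femtocell follower's transmit power, the follower best-responds in closed form, and a numerical gradient iteration on the leader's price approaches the Stackelberg equilibrium.

```python
# Toy Stackelberg game with placeholder utilities, solved by a
# gradient-based iteration as in the paper's general approach.

def follower_power(price):
    """Follower maximizes log(1 + x) - price * x  ->  x* = 1/price - 1."""
    return max(0.0, 1.0 / price - 1.0)

def leader_payoff(price):
    """Pricing revenue minus a quadratic interference penalty (toy)."""
    x = follower_power(price)
    return price * x - 0.5 * x * x

def stackelberg_price(p0=0.5, step=0.05, eps=1e-6, iters=5000):
    """Numerical gradient ascent on the leader's payoff."""
    p = p0
    for _ in range(iters):
        grad = (leader_payoff(p + eps) - leader_payoff(p - eps)) / (2 * eps)
        p = min(max(p + step * grad, 1e-3), 10.0)   # keep price positive
    return p

p = stackelberg_price()
print(f"price ~ {p:.4f}, follower power ~ {follower_power(p):.4f}")
# Converges to the interior equilibrium where p^3 + p - 1 = 0 (p ~ 0.682).
```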

Journal ArticleDOI
04 Sep 2012
TL;DR: This survey presents a comprehensive list of major known security threats within a cognitive radio network (CRN) framework, namely exogenous (external) attackers, intruding malicious nodes and greedy cognitive radios (CRs), and discusses potential solutions to combat those attacks.
Abstract: In this survey, we present a comprehensive list of major known security threats within a cognitive radio network (CRN) framework. We classify attack techniques based on the type of attacker, namely exogenous (external) attackers, intruding malicious nodes and greedy cognitive radios (CRs). We further discuss threats related to infrastructure-based CRNs as well as infrastructure-less networks. Besides the short-term effects of attacks over CRN performance, we also discuss the often ignored longer term behavioral changes that are enforced by such attacks via the learning capability of CRN. After elaborating on various attack strategies, we discuss potential solutions to combat those attacks. An overview of robust CR communications is also presented. We finally elaborate on future research directions pertinent to CRN security. We hope this survey paper can provide the insight and the roadmap for future research efforts in the emerging field of CRN security.

Journal ArticleDOI
Georges Aad1, Georges Aad2, Brad Abbott1, Brad Abbott3 +5559 more (188 institutions)
TL;DR: In this paper, the performance of the missing transverse momentum reconstruction was evaluated using data collected in pp collisions at a centre-of-mass energy of 7 TeV in 2010.
Abstract: The measurement of missing transverse momentum in the ATLAS detector, described in this paper, makes use of the full event reconstruction and a calibration based on reconstructed physics objects. The performance of the missing transverse momentum reconstruction is evaluated using data collected in pp collisions at a centre-of-mass energy of 7 TeV in 2010. Minimum bias events and events with jets of hadrons are used from data samples corresponding to an integrated luminosity of about 0.3 nb(-1) and 600 nb(-1) respectively, together with events containing a Z boson decaying to two leptons (electrons or muons) or a W boson decaying to a lepton (electron or muon) and a neutrino, from a data sample corresponding to an integrated luminosity of about 36 pb(-1). An estimate of the systematic uncertainty on the missing transverse momentum scale is presented.
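At its core, the calibrated quantity is the magnitude of the negative vector sum of the transverse momenta of the reconstructed objects (plus soft terms); a minimal sketch of that definition with hypothetical object kinematics:

```python
# Minimal sketch of the missing-transverse-momentum definition the paper
# builds on. The actual reconstruction sums calibrated physics objects
# (electrons, muons, jets, ...) plus soft terms; inputs here are toy values.

import math

# (pT [GeV], phi [rad]) of reconstructed, calibrated objects
objects = [(45.0, 0.3), (38.0, 2.9), (22.0, -1.7)]

def missing_et(objs):
    mpx = -sum(pt * math.cos(phi) for pt, phi in objs)
    mpy = -sum(pt * math.sin(phi) for pt, phi in objs)
    return math.hypot(mpx, mpy), math.atan2(mpy, mpx)

met, met_phi = missing_et(objects)
print(f"MET = {met:.1f} GeV at phi = {met_phi:.2f}")
```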

Journal ArticleDOI
01 Jan 2012
TL;DR: It is argued that no silver bullet will meet all requirements: not only will passwords be with us for some time, but in many instances they are the solution that best fits the scenario of use.
Abstract: Despite countless attempts and near-universal desire to replace them, passwords are more widely used and firmly entrenched than ever. The authors' exploration leads them to argue that no silver bullet will meet all requirements: not only will passwords be with us for some time, but in many instances they are the solution that best fits the scenario of use. Among broad authentication research directions to follow, they first suggest better means to concretely identify actual requirements (surprisingly overlooked to date) and weight their relative importance in target scenarios. Second, for scenarios where passwords appear to be the best-fit solution, they suggest designing better means to support them. The authors also highlight the need for more systematic research and how the premature conclusion that passwords are dead has led to the neglect of important research questions.