
Showing papers by "University of Alabama" published in 2018


Journal ArticleDOI
TL;DR: Conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior and has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress.
Abstract: Over the past 30 years, conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior. COR theory has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress. Further attesting to the theory's centrality, COR theory is largely the basis for the more work-specific leading theory of organizational stress, namely the job demands-resources model. One of the major advantages of COR theory is its ability to make a wide range of specific hypotheses that are much broader than those offered by theories that focus on a single central resource, such as control, or that speak about resources in general. In this article, we will revisit the principles and corollaries of COR theory that inform those more specific hypotheses and will review research in organizational behavior that has relied on the theory.

1,852 citations


Journal ArticleDOI
Bela Abolfathi, D. S. Aguado, Gabriela Aguilar, Carlos Allende Prieto, +361 more · 94 institutions
TL;DR: SDSS-IV, the fourth generation of the Sloan Digital Sky Survey, has been in operation since July 2014; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen, or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
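The data described in this release are served through the SDSS web site and standard community tooling. As a minimal, illustrative sketch only, assuming the astroquery package and the public SkyServer SQL service (neither is part of the paper itself), a DR14-era photometric query could look like the example below; the table, columns, and selection cuts follow the general SkyServer schema and are an arbitrary example.

```python
# Illustrative sketch: query the public SDSS photometric catalog via astroquery.
# The selection (bright galaxies) is arbitrary and not taken from the DR14 paper.
from astroquery.sdss import SDSS

query = """
SELECT TOP 10 objID, ra, dec, u, g, r, i, z
FROM PhotoObj
WHERE type = 3 AND r BETWEEN 15 AND 17
"""
table = SDSS.query_sql(query)  # returns an astropy Table (or None if no rows match)
print(table)
```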

965 citations


Journal ArticleDOI
TL;DR: This review article aims to present some short summaries written by distinguished researchers in the field of fractional calculus that will guide young researchers and help newcomers to see some of the main real-world applications and gain an understanding of this powerful mathematical tool.
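For orientation only (these definitions are standard and not quoted from the review), the two fractional derivatives that appear most often in such applications are the Riemann-Liouville and Caputo forms:

```latex
% Riemann-Liouville and Caputo fractional derivatives of order \alpha,
% with n - 1 < \alpha < n and n a positive integer.
{}^{RL}D_{a}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
  \int_{a}^{t} (t-\tau)^{\,n-\alpha-1} f(\tau)\, d\tau
\qquad
{}^{C}D_{a}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}
  \int_{a}^{t} (t-\tau)^{\,n-\alpha-1} f^{(n)}(\tau)\, d\tau
```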

922 citations


Journal ArticleDOI
TL;DR: The Banff ABMR criteria are updated, paving the way for the Banff scheme to be part of an integrative approach for defining surrogate endpoints in next-generation clinical trials.

768 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of the applications of DL algorithms for different network layers is performed, covering physical layer modulation/coding, data link layer access control/resource allocation, and routing layer path search and traffic balancing.
Abstract: As a promising machine learning tool to handle the accurate pattern recognition from complex raw data, deep learning (DL) is becoming a powerful method to add intelligence to wireless networks with large-scale topology and complex radio conditions. DL uses many neural network layers to achieve a brain-like acute feature extraction from high-dimensional raw data. It can be used to find the network dynamics (such as hotspots, interference distribution, congestion points, traffic bottlenecks, spectrum availability, etc.) based on the analysis of a large amount of network parameters (such as delay, loss rate, link signal-to-noise ratio, etc.). Therefore, DL can analyze extremely complex wireless networks with many nodes and dynamic link quality. This paper performs a comprehensive survey of the applications of DL algorithms for different network layers, including physical layer modulation/coding, data link layer access control/resource allocation, and routing layer path search, and traffic balancing. The use of DL to enhance other network functions, such as network security, sensing data compression, etc., is also discussed. Moreover, the challenging unsolved research issues in this field are discussed in detail, which represent the future research trends of DL-based wireless networks. This paper can help the readers to deeply understand the state-of-the-art of the DL-based wireless network designs, and select interesting unsolved issues to pursue in their research.
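As a minimal sketch of the physical-layer use case mentioned above, and assuming PyTorch, the model below maps raw I/Q samples to a modulation class; the layer sizes, 128-sample input, and 11-class output are arbitrary assumptions for illustration and are not taken from the survey.

```python
# Illustrative 1-D CNN for modulation classification from raw I/Q samples.
import torch
import torch.nn as nn

class ModulationClassifier(nn.Module):
    def __init__(self, n_classes: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),  # 2 channels: I and Q
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                                 # global average over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 2, n_samples)
        z = self.features(x).squeeze(-1)  # (batch, 64)
        return self.classifier(z)         # unnormalized class scores

model = ModulationClassifier()
print(model(torch.randn(4, 2, 128)).shape)  # torch.Size([4, 11])
```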

580 citations


Journal ArticleDOI
TL;DR: The MiniBooNE data are consistent in energy and magnitude with the excess of events reported by the Liquid Scintillator Neutrino Detector (LSND), and the significance of the combined LSND and MiniBooNE excesses is 6.0σ.
Abstract: The MiniBooNE experiment at Fermilab reports results from an analysis of ν_{e} appearance data from 12.84×10^{20} protons on target in neutrino mode, an increase of approximately a factor of 2 over previously reported results. A ν_{e} charged-current quasielastic event excess of 381.2±85.2 events (4.5σ) is observed in the energy range 200 < E_ν^{QE} < 1250 MeV.

482 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2238 more · 159 institutions
TL;DR: In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV are presented.
Abstract: Many measurements and searches for physics beyond the standard model at the LHC rely on the efficient identification of heavy-flavour jets, i.e. jets originating from bottom or charm quarks. In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, are presented. Heavy-flavour jet identification algorithms have been improved compared to those used previously at centre-of-mass energies of 7 and 8 TeV. For jets with transverse momenta in the range expected in simulated events, these new developments result in an efficiency of 68% for the correct identification of a b jet for a probability of 1% of misidentifying a light-flavour jet. The improvement in relative efficiency at this misidentification probability is about 15%, compared to previous CMS algorithms. In addition, for the first time algorithms have been developed to identify jets containing two b hadrons in Lorentz-boosted event topologies, as well as to tag c jets. The large data sample recorded in 2016 at a centre-of-mass energy of 13 TeV has also allowed the development of new methods to measure the efficiency and misidentification probability of heavy-flavour jet identification algorithms. The b jet identification efficiency is measured with a precision of a few per cent at moderate jet transverse momenta (between 30 and 300 GeV) and about 5% at the highest jet transverse momenta (between 500 and 1000 GeV).
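To make the quoted working point concrete, the sketch below shows how a b-tagging efficiency at a fixed 1% light-jet misidentification probability is read off from discriminator scores; the score distributions are toy stand-ins, not CMS data or the CMS algorithms.

```python
# Toy illustration of choosing a tagger threshold at a 1% light-jet mistag rate.
import numpy as np

rng = np.random.default_rng(0)
b_scores = rng.beta(5, 2, size=100_000)      # toy discriminator: b jets peak near 1
light_scores = rng.beta(2, 5, size=100_000)  # toy discriminator: light jets peak near 0

threshold = np.quantile(light_scores, 0.99)  # only 1% of light jets exceed this value
b_efficiency = np.mean(b_scores > threshold)
print(f"threshold = {threshold:.3f}, b-jet efficiency = {b_efficiency:.1%}")
```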

454 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarized the recent advances, challenges, and prospects of both fundamental and applied aspects of stress in thin films and engineering coatings and systems, based on recent achievements presented during the 2016 Stress Workshop entitled “Stress Evolution in Thin Films and Coatings: from Fundamental Understanding to Control.
Abstract: The issue of stress in thin films and functional coatings is a persistent problem in materials science and technology that has congregated many efforts, both from experimental and fundamental points of view, to get a better understanding on how to deal with, how to tailor, and how to manage stress in many areas of applications. With the miniaturization of device components, the quest for increasingly complex film architectures and multiphase systems and the continuous demands for enhanced performance, there is a need toward the reliable assessment of stress on a submicron scale from spatially resolved techniques. Also, the stress evolution during film and coating synthesis using physical vapor deposition (PVD), chemical vapor deposition, plasma enhanced chemical vapor deposition (PECVD), and related processes is the result of many interrelated factors and competing stress sources so that the task to provide a unified picture and a comprehensive model from the vast amount of stress data remains very challenging. This article summarizes the recent advances, challenges, and prospects of both fundamental and applied aspects of stress in thin films and engineering coatings and systems, based on recent achievements presented during the 2016 Stress Workshop entitled “Stress Evolution in Thin Films and Coatings: from Fundamental Understanding to Control.” Evaluation methods, implying wafer curvature, x-ray diffraction, or focused ion beam removal techniques, are reviewed. Selected examples of stress evolution in elemental and alloyed systems, graded layers, and multilayer-stacks as well as amorphous films deposited using a variety of PVD and PECVD techniques are highlighted. Based on mechanisms uncovered by in situ and real-time diagnostics, a kinetic model is outlined that is capable of reproducing the dependence of intrinsic (growth) stress on the grain size, growth rate, and deposited energy. The problems and solutions related to stress in the context of optical coatings, inorganic coatings on plastic substrates, and tribological coatings for aerospace applications are critically examined. This review also suggests strategies to mitigate excessive stress levels from novel coating synthesis perspectives to microstructural design approaches, including the ability to empower crack-based fabrication processes, pathways leading to stress relaxation and compensation, as well as management of the film and coating growth conditions with respect to energetic ion bombardment. Future opportunities and challenges for stress engineering and stress modeling are considered and outlined.
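Among the wafer-curvature evaluation methods reviewed, the average film stress is conventionally extracted with Stoney's relation; the standard form below (assuming a thin film, h_f << h_s, on a much thicker, elastically isotropic substrate) is given for orientation and is not reproduced from the article.

```latex
% Stoney's relation: average in-plane film stress from the measured change in wafer curvature.
\sigma_f = \frac{E_s\, h_s^{2}}{6\,(1-\nu_s)\, h_f}\,(\kappa - \kappa_0)
% E_s, \nu_s: substrate Young's modulus and Poisson ratio; h_s, h_f: substrate and film
% thicknesses; \kappa - \kappa_0: curvature change measured before and after deposition.
```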

448 citations


Journal ArticleDOI
01 Jan 2018-Carbon
TL;DR: In this paper, the authors integrate monolayer graphene into metal-based terahertz (THz) metamaterials and realize complete modulation of the resonance strength of the EIT analogue by manipulating the Fermi level of graphene.

359 citations


Journal ArticleDOI
TL;DR: In this article, the authors comprehensively assessed the current research status on residual stress sources, characteristics, and mitigation in metal additive manufacturing and highlighted the relationship between residual stress and microstructure.

337 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016.
Abstract: The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013–2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, previously. We dedicate this paper to the memory of Prof. Alberto Benvenuti, whose work was fundamental for the CMS muon detector.

Journal ArticleDOI
TL;DR: In this paper, a review of phase change materials used to optimize the building envelope and equipment is provided; the existing gaps in research on energy performance improvement with phase change materials are identified, and recommendations are offered, from the authors' viewpoint, in five aspects.

Journal ArticleDOI
TL;DR: A three-layer, deep convolutional autoencoder (CAE) is proposed, which utilizes unsupervised pretraining to initialize the weights in the subsequent convolutional layers, and is shown to be more effective than other deep learning architectures.
Abstract: Radar-based activity recognition is a problem that has been of great interest due to applications such as border control and security, pedestrian identification for automotive safety, and remote health monitoring. This paper seeks to show the efficacy of micro-Doppler analysis to distinguish even those gaits whose micro-Doppler signatures are not visually distinguishable. Moreover, a three-layer, deep convolutional autoencoder (CAE) is proposed, which utilizes unsupervised pretraining to initialize the weights in the subsequent convolutional layers. This architecture is shown to be more effective than other deep learning architectures, such as convolutional neural networks and autoencoders, as well as conventional classifiers employing predefined features, such as support vector machines (SVM), random forest, and extreme gradient boosting. Results show the performance of the proposed deep CAE yields a correct classification rate of 94.2% for micro-Doppler signatures of 12 different human activities measured indoors using a 4 GHz continuous wave radar—17.3% improvement over SVM.
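A minimal sketch of the kind of architecture described above, assuming PyTorch, 64x64 single-channel micro-Doppler spectrograms, and illustrative layer sizes (none of these specifics are from the paper): a small convolutional autoencoder whose encoder, after reconstruction-based pretraining, initializes a 12-class classifier.

```python
# Illustrative convolutional autoencoder with encoder reuse for classification.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):                      # x: (batch, 1, 64, 64) spectrogram
        return self.decoder(self.encoder(x))   # reconstruction used for the pretraining loss

# After pretraining on a reconstruction loss, reuse the encoder for the 12-class task.
model = ConvAutoencoder()
classifier = nn.Sequential(model.encoder, nn.Flatten(), nn.Linear(64 * 8 * 8, 12))
print(classifier(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 12])
```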

Journal ArticleDOI
TL;DR: The review indicates that the existing wearable technologies applied in other industrial sectors can be used to monitor and measure a wide variety of safety performance metrics within the construction industry.


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2240 more · 157 institutions
TL;DR: In this article, a measurement of the H→ττ signal strength is performed using events recorded in proton-proton collisions by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13 TeV.

Journal ArticleDOI
TL;DR: A high prevalence of poor sleep among college students, some sex differences, and distinct patterns of mental health symptoms in relation to sleep problems are documented.

Journal ArticleDOI
TL;DR: Much research supports the view that greater left than right frontal cortical activity is associated with greater positively or negatively valenced approach motivation, and the need to consider motivational direction as separate from affective valence in conceptual models of emotional space is illustrated.
Abstract: We review conceptual arguments and research on the role of asymmetric frontal cortical activity in emotional and motivational processes. The current article organizes and reviews research on asymmetrical frontal cortical activity by focusing on research that has measured trait (baseline) frontal asymmetry and related it to other individual differences measures related to motivation (e.g., anger, bipolar disorder). The review also covers research that has measured state frontal asymmetry in response to situational manipulations of motivation and emotion and as an intervening variable in motivation-cognition interactions. This review concludes that much research supports the view that greater left than right frontal cortical activity is associated with greater positively or negatively valenced approach motivation. The view that greater right than left frontal cortical activity is associated with withdrawal motivation, although supported, has received less empirical attention. In addition to reviewing research on the emotive functions of asymmetric frontal cortical activity, the reviewed research illustrates the need to consider motivational direction as separate from affective valence in conceptual models of emotional space.
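As background on how "greater left than right frontal cortical activity" is typically quantified in this literature, the sketch below computes a conventional frontal alpha asymmetry index, ln(right alpha power) minus ln(left alpha power) at homologous frontal sites, where alpha power is taken as inversely related to cortical activity. The channel pair (F3/F4), sampling rate, and band limits are illustrative assumptions, not details from the review.

```python
# Illustrative frontal alpha asymmetry index from two toy EEG traces.
import numpy as np
from scipy.signal import welch

fs = 250.0                                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
f3, f4 = rng.standard_normal(5000), rng.standard_normal(5000)  # toy left/right frontal traces

def alpha_power(x, fs, band=(8.0, 13.0)):
    """Integrated power spectral density in the alpha band."""
    freqs, psd = welch(x, fs=fs, nperseg=512)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

asymmetry = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal alpha asymmetry (ln F4 - ln F3): {asymmetry:+.3f}")
```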

Journal ArticleDOI
TL;DR: In this paper, a comprehensive analysis of TXS 0506+056 during its flaring state is presented, using newly collected Swift, NuSTAR, and X-shooter data together with Fermi observations and numerical models to constrain the blazar's particle acceleration processes and multimessenger (electromagnetic and high-energy neutrino) emissions.
Abstract: Detection of the IceCube-170922A neutrino coincident with the flaring blazar TXS 0506+056, the first and only ∼3σ high-energy neutrino source association to date, offers a potential breakthrough in our understanding of high-energy cosmic particles and blazar physics. We present a comprehensive analysis of TXS 0506+056 during its flaring state, using newly collected Swift, NuSTAR, and X-shooter data with Fermi observations and numerical models to constrain the blazar’s particle acceleration processes and multimessenger (electromagnetic (EM) and high-energy neutrino) emissions. Accounting properly for EM cascades in the emission region, we find a physically consistent picture only within a hybrid leptonic scenario, with γ-rays produced by external inverse-Compton processes and high-energy neutrinos via a radiatively subdominant hadronic component. We derive robust constraints on the blazar’s neutrino and cosmic-ray emissions and demonstrate that, because of cascade effects, the 0.1–100 keV emissions of TXS 0506+056 serve as a better probe of its hadronic acceleration and high-energy neutrino production processes than its GeV–TeV emissions. If the IceCube neutrino association holds, physical conditions in the TXS 0506+056 jet must be close to optimal for high-energy neutrino production, and are not favorable for ultrahigh-energy cosmic-ray acceleration. Alternatively, the challenges we identify in generating a significant rate of IceCube neutrino detections from TXS 0506+056 may disfavor single-zone models, in which γ-rays and high-energy neutrinos are produced in a single emission region. In concert with continued operations of the high-energy neutrino observatories, we advocate regular X-ray monitoring of TXS 0506+056 and other blazars in order to test single-zone blazar emission models, clarify the nature and extent of their hadronic acceleration processes, and carry out the most sensitive possible search for additional multimessenger sources.

Journal ArticleDOI
TL;DR: These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.
Abstract: A search for a heavy neutral lepton N of Majorana nature decaying into a W boson and a charged lepton is performed using the CMS detector at the LHC. The targeted signature consists of three prompt charged leptons in any flavor combination of electrons and muons. The data were collected in proton-proton collisions at a center-of-mass energy of 13 TeV, with an integrated luminosity of 35.9 fb^(−1). The search is performed in the N mass range between 1 GeV and 1.2 TeV. The data are found to be consistent with the expected standard model background. Upper limits are set on the values of |V_(eN)|^2 and |V_(μN)|^2, where V_(lN) is the matrix element describing the mixing of N with the standard model neutrino of flavor l. These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.

Journal ArticleDOI
TL;DR: Results from a search for neutrinoless double-beta decay (0νββ) of ^(136)Xe are presented using the first year of data taken with the upgraded EXO-200 detector, with no statistically significant evidence for 0νββ observed.
Abstract: Results from a search for neutrinoless double-beta decay (0νββ) of ^(136)Xe are presented using the first year of data taken with the upgraded EXO-200 detector. Relative to previous searches by EXO-200, the energy resolution of the detector has been improved to σ/E = 1.23%, the electric field in the drift region has been raised by 50%, and a system to suppress radon in the volume between the cryostat and lead shielding has been implemented. In addition, analysis techniques that improve topological discrimination between 0νββ and background events have been developed. Incorporating these hardware and analysis improvements, the median 90% confidence level 0νββ half-life sensitivity after combining with the full data set acquired before the upgrade has increased twofold to 3.7 × 10^(25) yr. No statistically significant evidence for 0νββ is observed, leading to a lower limit on the 0νββ half-life of 1.8 × 10^(25) yr at the 90% confidence level.

Journal ArticleDOI
28 Sep 2018-Science
TL;DR: The findings indicate that this Lowland Maya society was a regionally interconnected network of densely populated and defended cities, which were sustained by an array of agricultural practices that optimized land productivity and the interactions between rural and urban communities.
Abstract: INTRODUCTION Lowland Maya civilization flourished from 1000 BCE to 1500 CE in and around the Yucatan Peninsula. Known for its sophistication in writing, art, architecture, astronomy, and mathematics, this civilization is still obscured by inaccessible forest, and many questions remain about its makeup. In 2016, the Pacunam Lidar Initiative (PLI) undertook the largest lidar survey to date of the Maya region, mapping 2144 km² of the Maya Biosphere Reserve in Guatemala. The PLI data have made it possible to characterize ancient settlement and infrastructure over an extensive, varied, and representative swath of the central Maya Lowlands.

RATIONALE Scholars first applied modern lidar technology to the lowland Maya area in 2009, focusing analysis on the immediate surroundings of individual sites. The PLI covers twice the area of any previous survey and involves a consortium of scholars conducting collaborative and complementary analyses of the entire survey region. This cooperation among scholars has provided a unique regional perspective revealing substantial ancient population as well as complex previously unrecognized landscape modifications at a grand scale throughout the central lowlands in the Yucatan Peninsula.

RESULTS Analysis identified 61,480 ancient structures in the survey region, resulting in a density of 29 structures/km². Controlling for a number of complex variables, we estimate an average density of ~80 to 120 persons/km² at the height of the Late Classic period (650 to 800 CE). Extrapolation of this settlement density to the entire 95,000 km² of the central lowlands produces a population range of 7 million to 11 million. Settlement distribution is not homogeneous, however; we found evidence of (i) rural areas with low overall density, (ii) periurban zones with small urban centers and dispersed populations, and (iii) urban zones where a single, large city integrated a wider population. The PLI survey revealed a landscape heavily modified for intensive agriculture, necessary to sustain populations on this scale. Lidar shows field systems in the low-lying wetlands and terraces in the upland areas. The scale of wetland systems and their association with dense populations suggest centralized planning, whereas upland terraces cluster around residences, implying local management. Analysis identified 362 km² of deliberately modified agricultural terrain and another 952 km² of unmodified uplands for potential swidden use. Approximately 106 km of causeways within and between sites constitute evidence of inter- and intracommunity connectivity. In contrast, sizable defensive features point to societal disconnection and large-scale conflict.

CONCLUSION The 2144 km² of lidar data acquired by the PLI alter interpretations of the ancient Maya at a regional scale. An ancient population in the millions was unevenly distributed across the central lowlands, with varying degrees of urbanization. Agricultural systems found in lidar indicate how these populations were supported, although an irregular distribution suggests the existence of a regional agricultural economy of great complexity. Substantial infrastructural investment in integrative features (causeways) and conflictive features (defensive systems) highlights the interconnectivity of the ancient lowland Maya landscape.
These perspectives on the ancient Maya generate new questions, refine targets for fieldwork, elicit regional study across continuous landscapes, and advance Maya archaeology into a bold era of research and exploration.
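For readers tracking the numbers, the abstract's density and population figures follow from simple arithmetic on the quantities it reports:

```latex
% Structure density and population extrapolation as reported in the abstract above.
\frac{61{,}480\ \text{structures}}{2144\ \text{km}^2} \approx 28.7 \approx 29\ \text{structures/km}^2
\qquad
95{,}000\ \text{km}^2 \times (80\text{--}120)\ \text{persons/km}^2 \approx 7.6\text{--}11.4 \times 10^{6}\ \text{persons}
```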

Journal ArticleDOI
TL;DR: In this article, a search for hypervelocity runaway white dwarfs (WDs), the surviving companions of double WD binaries that underwent unstable mass transfer, has been performed using Gaia's second data release and followed up with ground-based instruments.
Abstract: Double detonations in double white dwarf (WD) binaries undergoing unstable mass transfer have emerged in recent years as one of the most promising Type Ia supernova (SN Ia) progenitor scenarios. One potential outcome of this "dynamically driven double-degenerate double-detonation" (D6) scenario is that the companion WD survives the explosion and is flung away with a velocity equal to its >1000 km s^(−1) pre-SN orbital velocity. We perform a search for these hypervelocity runaway WDs using Gaia's second data release. In this paper, we discuss seven candidates followed up with ground-based instruments. Three sources are likely to be some of the fastest known stars in the Milky Way, with total Galactocentric velocities between 1000 and 3000 km s^(−1), and are consistent with having previously been companion WDs in pre-SN Ia systems. However, although the radial velocity of one of the stars is >1000 km s^(−1), the radial velocities of the other two stars are puzzlingly consistent with 0. The combined five-parameter astrometric solutions from Gaia and radial velocities from follow-up spectra yield tentative 6D confirmation of the D6 scenario. The past position of one of these stars places it within a faint, old SN remnant, further strengthening the interpretation of these candidates as hypervelocity runaways from binary systems that underwent SNe Ia.

Journal ArticleDOI
TL;DR: In this article, the authors revisited simulations of naked C/O WD detonations and found that a median-brightness SN Ia is produced by the detonation of a 1.0 Msol WD instead of a more massive and rarer 1.1 Msol WD.
Abstract: The detonation of a sub-Chandrasekhar-mass white dwarf (WD) has emerged as one of the most promising Type Ia supernova (SN Ia) progenitor scenarios. Recent studies have suggested that the rapid transfer of a very small amount of helium from one WD to another is sufficient to ignite a helium shell detonation that subsequently triggers a carbon core detonation, yielding a "dynamically-driven double degenerate double detonation" SN Ia. Because the helium shell that surrounds the core explosion is so minimal, this scenario approaches the limiting case of a bare C/O WD detonation. Motivated by discrepancies in previous literature and by a recent need for detailed nucleosynthetic data, we revisit simulations of naked C/O WD detonations in this paper. We disagree to some extent with the nucleosynthetic results of previous work on sub-Chandrasekhar-mass bare C/O WD detonations; e.g., we find that a median-brightness SN Ia is produced by the detonation of a 1.0 Msol WD instead of a more massive and rarer 1.1 Msol WD. The neutron-rich nucleosynthesis in our simulations agrees broadly with some observational constraints, although tensions remain with others. There are also discrepancies related to the velocities of the outer ejecta and light curve shapes, but overall our synthetic light curves and spectra are roughly consistent with observations. We are hopeful that future multi-dimensional simulations will resolve these issues and further bolster the dynamically-driven double degenerate double detonation scenario's potential to explain most SNe Ia.

Journal ArticleDOI
TL;DR: In this article, the authors explored consumers' intention to choose organic menu items at restaurants and their intention to visit restaurants featuring organic items using the theory of planned behavior and the norm activation model.

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2357 more · 197 institutions
TL;DR: In this article, a low-mass search for resonances decaying into pairs of jets is performed using proton-proton collision data collected at √s = 13 TeV corresponding to an integrated luminosity of up to 36 fb^(−1).
Abstract: Searches for resonances decaying into pairs of jets are performed using proton-proton collision data collected at √s = 13 TeV corresponding to an integrated luminosity of up to 36 fb^(−1). A low-mass search, for resonances with masses between 0.6 and 1.6 TeV, is performed based on events with dijets reconstructed at the trigger level from calorimeter information. A high-mass search, for resonances with masses above 1.6 TeV, is performed using dijets reconstructed offline with a particle-flow algorithm. The dijet mass spectrum is well described by a smooth parameterization and no evidence for the production of new particles is observed. Upper limits at 95% confidence level are reported on the production cross section for narrow resonances with masses above 0.6 TeV. In the context of specific models, the limits exclude string resonances with masses below 7.7 TeV, scalar diquarks below 7.2 TeV, axigluons and colorons below 6.1 TeV, excited quarks below 6.0 TeV, color-octet scalars below 3.4 TeV, W′ bosons below 3.3 TeV, Z′ bosons below 2.7 TeV, Randall-Sundrum gravitons below 1.8 TeV and in the range 1.9 to 2.5 TeV, and dark matter mediators below 2.6 TeV. The limits on both vector and axial-vector mediators, in a simplified model of interactions between quarks and dark matter particles, are presented as functions of dark matter particle mass and coupling to quarks. Searches are also presented for broad resonances, including for the first time spin-1 resonances with intrinsic widths as large as 30% of the resonance mass. The broad resonance search improves and extends the exclusions of a dark matter mediator to larger values of its mass and coupling to quarks.
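The "smooth parameterization" of the dijet mass spectrum mentioned above is typically an empirical form widely used in dijet searches; the version below is shown for orientation only, and the paper should be consulted for the exact function and fitted parameters.

```latex
% Empirical function commonly used to describe the smoothly falling dijet mass spectrum,
% with x = m_jj / sqrt(s) and free parameters p_0 ... p_3.
\frac{d\sigma}{dm_{jj}} = \frac{p_0\,(1-x)^{p_1}}{x^{\,p_2 + p_3 \ln x}},
\qquad x = \frac{m_{jj}}{\sqrt{s}}
```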

Journal ArticleDOI
04 Apr 2018-Nature
TL;DR: It is concluded that the interaction of human alterations to the Mississippi River system with dynamical modes of climate variability has elevated the current flood hazard to levels that are unprecedented within the past five centuries.
Abstract: A suite of river discharge, tree-ring, sedimentary and climate data shows that the Mississippi’s flood magnitude has risen by about twenty per cent over the past half-century, largely owing to engineering works. Instrumental records of river discharge do not go far enough back in time to place recent flood activity in a longer-term context, making it difficult to understand how climate variability and human activity might have affected flooding. Now, Samuel Munoz and colleagues reconstruct the past flood frequency of the Mississippi River from a compilation of river-discharge, tree-ring, sedimentary and climate data. The results show that the magnitude of the 100-year flood has gone up by about 20 per cent over the past 500 years. Climate cycles account for most of the variability in flooding on multidecadal timescales, but engineering works account for about three-quarters of the long-term increase. Over the past century, many of the world’s major rivers have been modified for the purposes of flood mitigation, power generation and commercial navigation [1]. Engineering modifications to the Mississippi River system have altered the river’s sediment levels and channel morphology [2], but the influence of these modifications on flood hazard is debated [3,4,5]. Detecting and attributing changes in river discharge is challenging because instrumental streamflow records are often too short to evaluate the range of natural hydrological variability before the establishment of flood mitigation infrastructure. Here we show that multi-decadal trends of flood hazard on the lower Mississippi River are strongly modulated by dynamical modes of climate variability, particularly the El Niño–Southern Oscillation and the Atlantic Multidecadal Oscillation, but that the artificial channelization (confinement to a straightened channel) has greatly amplified flood magnitudes over the past century. Our results, based on a multi-proxy reconstruction of flood frequency and magnitude spanning the past 500 years, reveal that the magnitude of the 100-year flood (a flood with a 1 per cent chance of being exceeded in any year) has increased by 20 per cent over those five centuries, with about 75 per cent of this increase attributed to river engineering. We conclude that the interaction of human alterations to the Mississippi River system with dynamical modes of climate variability has elevated the current flood hazard to levels that are unprecedented within the past five centuries.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: Evaluation results confirm that the new adaptive solution can significantly improve the rate distortion for the lossy compression with fairly high compression ratios.
Abstract: Today’s scientific simulations require a significant reduction of the data size because of extremely large volumes of data they produce and the limitation of storage bandwidth and space. If the compression is set to reach a high compression ratio, however, the reconstructed data are often distorted too much to tolerate. In this paper, we explore a new compression strategy that can effectively control the data distortion when significantly reducing the data size. The contribution is threefold. (1) We propose an adaptive compression framework to select either our improved Lorenzo prediction method or our optimized linear regression method dynamically in different regions of the dataset. (2) We explore how to select them accurately based on the data features in each block to obtain the best compression quality. (3) We analyze the effectiveness of our solution in detail using four real-world scientific datasets with 100+ fields. Evaluation results confirm that our new adaptive solution can significantly improve the rate distortion for the lossy compression with fairly high compression ratios. The compression ratio of our compressor is 1.5X~8X as high as that of two other leading lossy compressors (SZ and ZFP) with the same peak signal-to-noise ratio (PSNR), in the high-compression cases. Parallel experiments with 8,192 cores and 24 TB of data show that our solution obtains 1.86X dumping performance and 1.95X loading performance, respectively, compared with the second-best lossy compressor.
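As a rough illustration of the prediction step behind the framework described above (this is not the authors' implementation or the SZ code), the sketch below applies the classic first-order 2-D Lorenzo predictor and an error-bounded quantization of the residuals; a real compressor would predict from already-decompressed neighbors and entropy-code the integer quantization codes.

```python
# Illustrative first-order 2-D Lorenzo predictor with error-bounded quantization.
import numpy as np

def lorenzo_residuals_2d(data: np.ndarray) -> np.ndarray:
    """Residuals of the Lorenzo prediction: value - (west + north - northwest)."""
    padded = np.pad(data, ((1, 0), (1, 0)))                      # zero border for first row/column
    pred = padded[1:, :-1] + padded[:-1, 1:] - padded[:-1, :-1]  # W + N - NW neighbors
    return data - pred

field = np.random.default_rng(2).standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth toy field
residuals = lorenzo_residuals_2d(field)

error_bound = 1e-2                               # absolute error bound
codes = np.round(residuals / (2 * error_bound))  # integer codes a compressor would entropy-code
max_error = np.abs(residuals - codes * 2 * error_bound).max()
print(f"max pointwise quantization error {max_error:.2e} <= {error_bound:.0e}")
```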

Journal ArticleDOI
05 Dec 2018-Polymer
TL;DR: In this paper, carbon black (CB)/thermoplastic polyurethane (TPU) nanocomposites with a range of nanoparticle loadings were prepared using a joint co-coagulation and hot-pressing technique.

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, Federico Ambrogi, +2314 more · 196 institutions
TL;DR: A statistical combination of several searches for the electroweak production of charginos and neutralinos is presented in this article, together with a targeted analysis requiring three or more charged leptons (electrons or muons), focusing on the challenging scenario in which the difference in mass between the two least massive neutralinos is approximately equal to the mass of the Z boson.
Abstract: A statistical combination of several searches for the electroweak production of charginos and neutralinos is presented. All searches use proton-proton collision data at √s = 13 TeV, recorded with the CMS detector at the LHC in 2016 and corresponding to an integrated luminosity of 35.9 fb^(−1). In addition to the combination of previous searches, a targeted analysis requiring three or more charged leptons (electrons or muons) is presented, focusing on the challenging scenario in which the difference in mass between the two least massive neutralinos is approximately equal to the mass of the Z boson. The results are interpreted in simplified models of chargino-neutralino or neutralino pair production. For chargino-neutralino production, in the case when the lightest neutralino is massless, the combination yields an observed (expected) limit at the 95% confidence level on the chargino mass of up to 650 (570) GeV, improving upon the individual analysis limits by up to 40 GeV. If the mass difference between the two least massive neutralinos is approximately equal to the mass of the Z boson in the chargino-neutralino model, the targeted search requiring three or more leptons obtains observed and expected exclusion limits of around 225 GeV on the second neutralino mass and 125 GeV on the lightest neutralino mass, improving the observed limit by about 60 GeV in both masses compared to the previous CMS result. In the neutralino pair production model, the combined observed (expected) exclusion limit on the neutralino mass extends up to 650–750 (550–750) GeV, depending on the branching fraction assumed. This extends the observed exclusion achieved in the individual analyses by up to 200 GeV. The combined result additionally excludes some intermediate gaps in the mass coverage of the individual analyses.