Showing papers by "University of Notre Dame", published in 2016
••
University of California, San Diego; University of Montana; Stanford University; Scripps Institution of Oceanography; National Autonomous University of Mexico; Salk Institute for Biological Studies; San Diego State University; Strathclyde Institute of Pharmacy and Biomedical Sciences; Lawrence Berkeley National Laboratory; Harvard University; University of Rennes; University of Minnesota; University of Lorraine; Technical University of Denmark; J. Craig Venter Institute; University of California, Los Angeles; University of Washington; ETH Zurich; University of Illinois at Chicago; National Sun Yat-sen University; Academia Sinica; University of Münster; Victoria University of Wellington; University of North Carolina at Chapel Hill; Indiana University; Smithsonian Tropical Research Institute; Federal University of Mato Grosso do Sul; University of São Paulo; University of Notre Dame; University of California, Santa Cruz; Oregon State University; University of California, Berkeley; Florida International University; University of Hawaii at Manoa; University of Geneva; Institut de Chimie des Substances Naturelles; Pacific Northwest National Laboratory; National Institutes of Health; Chinese Academy of Sciences
TL;DR: In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations and data-driven social-networking should facilitate identification of spectra and foster collaborations.
Abstract: The potential of the diverse chemistries present in natural products (NP) for biotechnology and medicine remains untapped because NP databases are not searchable with raw data and the NP community has no way to share data other than in published papers. Although mass spectrometry (MS) techniques are well-suited to high-throughput characterization of NP, there is a pressing need for an infrastructure to enable sharing and curation of data. We present Global Natural Products Social Molecular Networking (GNPS; http://gnps.ucsd.edu), an open-access knowledge base for community-wide organization and sharing of raw, processed or identified tandem mass spectrometry (MS/MS) data. In GNPS, crowdsourced curation of freely available community-wide reference MS libraries will underpin improved annotations. Data-driven social-networking should facilitate identification of spectra and foster collaborations. We also introduce the concept of 'living data' through continuous reanalysis of deposited data.
2,365 citations
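The molecular networking that GNPS builds on clusters MS/MS spectra by spectral similarity, typically a cosine score over matched fragment peaks. The sketch below is a minimal, illustrative version of that idea (greedy peak matching within a fixed m/z tolerance, square-root intensity scaling, both assumptions of this sketch), not the actual GNPS scoring code:

```python
import math

def cosine_score(spec_a, spec_b, tol=0.01):
    """Cosine similarity between two MS/MS peak lists.

    Each spectrum is a list of (mz, intensity) pairs. Peaks are
    greedily matched within `tol` Da; intensities are sqrt-scaled,
    a common choice to damp dominant peaks (an assumption here,
    not necessarily the GNPS scoring).
    """
    a = sorted(spec_a)
    b = sorted(spec_b)
    # sqrt-scaled vector norms: each component is sqrt(intensity)
    norm_a = math.sqrt(sum(i for _, i in a))
    norm_b = math.sqrt(sum(i for _, i in b))
    score = 0.0
    used = set()
    for mz_a, int_a in a:
        # closest not-yet-matched peak in b within tolerance
        best, best_d = None, tol
        for j, (mz_b, int_b) in enumerate(b):
            d = abs(mz_b - mz_a)
            if d <= best_d and j not in used:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            score += math.sqrt(int_a) * math.sqrt(b[best][1])
    return score / (norm_a * norm_b) if norm_a and norm_b else 0.0

spec = [(100.0, 50.0), (150.0, 100.0)]
print(cosine_score(spec, spec))  # identical spectra score ~1.0
```

In a molecular network, every pair of spectra above a score threshold becomes an edge, so related-but-unidentified compounds cluster next to library hits.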
••
University of Manchester; KEK; CERN; Complutense University of Madrid; SLAC National Accelerator Laboratory; Toyama College; Lebedev Physical Institute; Fermilab; University of Paris-Sud; Lawrence Livermore National Laboratory; National Research Nuclear University MEPhI; Queen's University Belfast; Korea Institute of Science and Technology Information; Istituto Nazionale di Fisica Nucleare; Northeastern University; University of Seville; National University of Cordoba; Saint Joseph University; Joint Institute for Nuclear Research; Illawarra Health & Medical Research Institute; University of Wollongong; Hampton University; TRIUMF; ETH Zurich; Centre national de la recherche scientifique; University of Bordeaux; University of Helsinki; National Technical University of Athens; Johns Hopkins University School of Medicine; University of Notre Dame; Ashikaga Institute of Technology; Kobe University; Intelligence and National Security Alliance; University of Trieste; University of Warwick; University of Belgrade; Instituto Superior Técnico; European Space Agency; Varian Medical Systems; George Washington University; Ritsumeikan University; Ton Duc Thang University; Université Paris-Saclay; Idaho State University; Naruto University of Education
01 Nov 2016 - Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
TL;DR: Geant4 is a software toolkit for the simulation of the passage of particles through matter, used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection.
Abstract: Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.
2,260 citations
••
TL;DR: This review is a critical account of the interrelation between MHP electronic structure, absorption, emission, carrier dynamics and transport, and other relevant photophysical processes that have propelled these materials to the forefront of modern optoelectronics research.
Abstract: A new chapter in the long and distinguished history of perovskites is being written with the breakthrough success of metal halide perovskites (MHPs) as solution-processed photovoltaic (PV) absorbers. The current surge in MHP research has largely arisen out of their rapid progress in PV devices; however, these materials are potentially suitable for a diverse array of optoelectronic applications. Like oxide perovskites, MHPs have ABX3 stoichiometry, where A and B are cations and X is a halide anion. Here, the underlying physical and photophysical properties of inorganic (A = inorganic) and hybrid organic–inorganic (A = organic) MHPs are reviewed with an eye toward their potential application in emerging optoelectronic technologies. Significant attention is given to the prototypical compound methylammonium lead iodide (CH3NH3PbI3) due to the preponderance of experimental and theoretical studies surrounding this material. We also discuss other salient MHP systems, including 2-dimensional compounds, where rele...
1,125 citations
••
Vardan Khachatryan, Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, +2,283 more; Institutions (141)
TL;DR: Combined fits to CMS UE proton–proton data at √s = 7 TeV and to UE proton–antiproton data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV.
Abstract: New sets of parameters ("tunes") for the underlying-event (UE) modeling of the PYTHIA8, PYTHIA6 and HERWIG++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE data at √s = 7 TeV and to UE data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, providing thereby improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (qq̄ → Z/γ* → lepton–antilepton + jets) observables at 7 and 8 TeV are presented, as well as predictions of MB and UE observables at 13 TeV.
686 citations
••
TL;DR: This work outlines a framework for understanding the ecology of eDNA, including the origin, state, transport, and fate of extraorganismal genetic material, and identifies the frontiers of conservation-focused eDNA application with the most potential for growth.
Abstract: Environmental DNA (eDNA) refers to the genetic material that can be extracted from bulk environmental samples such as soil, water, and even air. The rapidly expanding study of eDNA has generated unprecedented ability to detect species and conduct genetic analyses for conservation, management, and research, particularly in scenarios where collection of whole organisms is impractical or impossible. While the number of studies demonstrating successful eDNA detection has increased rapidly in recent years, less research has explored the “ecology” of eDNA—myriad interactions between extraorganismal genetic material and its environment—and its influence on eDNA detection, quantification, analysis, and application to conservation and research. Here, we outline a framework for understanding the ecology of eDNA, including the origin, state, transport, and fate of extraorganismal genetic material. Using this framework, we review and synthesize the findings of eDNA studies from diverse environments, taxa, and fields of study to highlight important concepts and knowledge gaps in eDNA study and application. Additionally, we identify frontiers of conservation-focused eDNA application where we see the most potential for growth, including the use of eDNA for estimating population size, population genetic and genomic analyses via eDNA, inclusion of other indicator biomolecules such as environmental RNA or proteins, automated sample collection and analysis, and consideration of an expanded array of creative environmental samples. We discuss how a more complete understanding of the ecology of eDNA is integral to advancing these frontiers and maximizing the potential of future eDNA applications in conservation and research.
672 citations
••
Washington State University; University of Notre Dame; University of Toledo; University of Copenhagen; University of Wyoming; United States Geological Survey; Central Michigan University; Engineer Research and Development Center; University of Idaho; Shimane University; University of Grenoble
TL;DR: A synthesis of current knowledge is presented for applying this new and powerful detection method, which can reduce impacts on sensitive species and increase the power of field surveys for rare and elusive species.
Abstract: Species detection using environmental DNA (eDNA) has tremendous potential for contributing to the understanding of the ecology and conservation of aquatic species. Detecting species using eDNA methods, rather than directly sampling the organisms, can reduce impacts on sensitive species and increase the power of field surveys for rare and elusive species. The sensitivity of eDNA methods, however, requires a heightened awareness and attention to quality assurance and quality control protocols. Additionally, the interpretation of eDNA data demands careful consideration of multiple factors. As eDNA methods have grown in application, diverse approaches have been implemented to address these issues. With interest in eDNA continuing to expand, supportive guidelines for undertaking eDNA studies are greatly needed.
Environmental DNA researchers from around the world have collaborated to produce this set of guidelines and considerations for implementing eDNA methods to detect aquatic macroorganisms.
Critical considerations for study design include preventing contamination in the field and the laboratory, choosing appropriate sample analysis methods, validating assays, testing for sample inhibition and following minimum reporting guidelines. Critical considerations for inference include temporal and spatial processes, limits of correlation of eDNA with abundance, uncertainty of positive and negative results, and potential sources of allochthonous DNA.
We present a synthesis of current knowledge for the application of this new and powerful detection method.
634 citations
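One of the inference issues the guidelines raise, the uncertainty of negative results, is often quantified through a per-sample detection probability. Assuming independent replicates with detection probability p (a simplification of this sketch; real field replicates are often correlated), the chance of at least one positive among n samples is 1 − (1 − p)^n:

```python
def prob_detect(p, n):
    """P(at least one positive among n replicate samples),
    assuming independent replicates with per-sample detection
    probability p (a simplifying assumption)."""
    return 1.0 - (1.0 - p) ** n

# Illustrative only: with an assumed p = 0.3 per water sample,
# how many samples give >= 95% chance of detecting a species
# that is actually present?
n = 1
while prob_detect(0.3, n) < 0.95:
    n += 1
print(n)  # -> 9, since 1 - 0.7**9 ~= 0.96
```

A negative survey with fewer replicates than this leaves a substantial chance of a false negative, which is exactly the kind of uncertainty the guidelines ask authors to report.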
••
TL;DR: This survey describes the nuances of the method and, from the perspective of users of textual analysis, some of the tripwires in implementation; it also reviews the contemporary textual analysis literature to highlight areas of future research.
Abstract: Relative to quantitative methods traditionally used in accounting and finance, textual analysis is substantially less precise. Thus, understanding the art is of equal importance to understanding the science. In this survey we describe the nuances of the method and, as users of textual analysis, some of the tripwires in implementation. We also review the contemporary textual analysis literature and highlight areas of future research.
567 citations
••
TL;DR: Experimental and theory together reveal that the Cu sites respond sensitively to exposure conditions, and in particular that Cu species are solvated and mobilized by NH3 under SCR conditions.
Abstract: The relationships among the macroscopic compositional parameters of a Cu-exchanged SSZ-13 zeolite catalyst, the types and numbers of Cu active sites, and activity for the selective catalytic reduction (SCR) of NOx with NH3 are established through experimental interrogation and computational analysis of materials across the catalyst composition space. Density functional theory, stochastic models, and experimental characterizations demonstrate that within the synthesis protocols applied here and across Si:Al ratios, the volumetric density of six-membered-rings (6MR) containing two Al (2Al sites) is consistent with a random Al siting in the SSZ-13 lattice subject to Lowenstein’s rule. Further, exchanged CuII ions first populate these 2Al sites before populating remaining unpaired, or 1Al, sites as CuIIOH. These sites are distinguished and enumerated ex situ through vibrational and X-ray absorption spectroscopies (XAS) and chemical titrations. In situ and operando XAS follow Cu oxidation state and coordinatio...
522 citations
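The "random Al siting subject to Löwenstein's rule" argument can be illustrated with a toy Monte Carlo: place Al on a lattice at random, reject configurations with adjacent Al, and count how often a six-site ring contains exactly two Al. The one-dimensional periodic lattice below is purely illustrative and is not the three-dimensional SSZ-13 topology or the stochastic model used in the paper:

```python
import random

def sample_lattice(n_sites, n_al, rng, max_tries=10_000):
    """Randomly place n_al Al atoms on a periodic 1-D lattice of
    T-sites, rejecting configurations with adjacent Al (a toy
    stand-in for Lowenstein's rule)."""
    for _ in range(max_tries):
        occ = set(rng.sample(range(n_sites), n_al))
        if all((s + 1) % n_sites not in occ for s in occ):
            return occ
    raise RuntimeError("could not satisfy the adjacency constraint")

def frac_2al_windows(n_sites=240, si_al=15, trials=200, seed=0):
    """Fraction of 6-site windows containing exactly two Al,
    averaged over random Lowenstein-compliant sitings."""
    rng = random.Random(seed)
    n_al = n_sites // (si_al + 1)   # Al fraction set by Si:Al
    total = hits = 0
    for _ in range(trials):
        occ = sample_lattice(n_sites, n_al, rng)
        for start in range(n_sites):
            window = [(start + k) % n_sites for k in range(6)]
            total += 1
            hits += sum(1 for s in window if s in occ) == 2
    return hits / total

print(frac_2al_windows())
```

Sweeping `si_al` in such a model shows how the density of 2Al sites falls with increasing Si:Al ratio, the qualitative trend the paper quantifies for the real lattice.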
••
TL;DR: The authors use both hypothetical examples and data from the 2012 European Social Survey to show why the ordered logit model can be inadequate and what substantive insights are gained from gologit/ppo models, which can be less restrictive than proportional odds models and more parsimonious than methods that ignore the ordering of categories altogether.
Abstract: When outcome variables are ordinal rather than continuous, the ordered logit model, aka the proportional odds model (ologit/po), is a popular analytical method. However, generalized ordered logit/partial proportional odds models (gologit/ppo) are often a superior alternative. Gologit/ppo models can be less restrictive than proportional odds models and more parsimonious than methods that ignore the ordering of categories altogether. However, the use of gologit/ppo models has itself been problematic or at least sub-optimal. Researchers typically note that such models fit better but fail to explain why the ordered logit model was inadequate or the substantive insights gained by using the gologit alternative. This paper uses both hypothetical examples and data from the 2012 European Social Survey to address these shortcomings.
511 citations
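The core idea behind gologit/ppo models can be sketched by fitting the J − 1 cumulative splits P(Y > j) as separate binary logits, each with its own slope; under proportional odds the slopes coincide, and estimates that diverge across splits flag where the assumption fails. The example below uses synthetic data (not the ESS data) generated from a genuine ordered-logit process, so the three slopes should all land near the true value of 1.0:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Maximum-likelihood binary logistic regression via
    Newton-Raphson (IRLS); X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    return beta

# Synthetic 4-category ordinal outcome from an ordered-logit
# process: one latent index with slope 1.0, cutpoints (-1, 0.5, 2).
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = np.digitize(x + rng.logistic(size=n), [-1.0, 0.5, 2.0]) + 1  # 1..4

# gologit-style diagnostic: one binary logit per cumulative split.
Xd = np.column_stack([np.ones(n), x])
for j in (1, 2, 3):
    b = logit_fit(Xd, (y > j).astype(float))
    print(f"P(y > {j}): intercept = {b[0]:+.2f}, slope = {b[1]:.2f}")
# Proportional odds holds here by construction, so the slopes
# agree; on real data, split-specific slopes that differ point
# to the categories where the po constraint should be relaxed.
```

A full gologit/ppo fit constrains some coefficients to be equal across splits and frees others, but this split-by-split view is the underlying intuition.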
••
TL;DR: Many groups of closely related species have reticulate phylogenies, indicating that some sexual compatibility may exist among them; introgression and reticulation can thereby affect all parts of the tree of life, not just the recent species at the tips.
Abstract: Many groups of closely related species have reticulate phylogenies. Recent genomic analyses are showing this in many insects and vertebrates, as well as in microbes and plants. In microbes, lateral gene transfer is the dominant process that spoils strictly tree-like phylogenies, but in multicellular eukaryotes hybridization and introgression among related species is probably more important. Because many species, including the ancestors of ancient major lineages, seem to evolve rapidly in adaptive radiations, some sexual compatibility may exist among them. Introgression and reticulation can thereby affect all parts of the tree of life, not just the recent species at the tips. Our understanding of adaptive evolution, speciation, phylogenetics, and comparative biology must adapt to these mostly recent findings. Introgression has important practical implications as well, not least for the management of genetically modified organisms in pest and disease control.
436 citations
••
Purdue University; University of Nevada, Reno; Monsanto; Old Dominion University; North Carolina State University; University College London; Spanish National Research Council; Oklahoma State University–Stillwater; National Institutes of Health; University of Cambridge; Wellcome Trust; J. Craig Venter Institute; Leidos; Broad Institute; University of Nevada, Las Vegas; University of Notre Dame; University of Barcelona; Carlos III Health Institute; University of Massachusetts Medical School; University of Connecticut; University of Oxford; University of Lausanne; Virginia Tech; West Virginia University; Indiana University; University of Maryland, Baltimore; Texas A&M University; Kansas State University; University of Minnesota; University of Manchester; National University of Singapore; University of California, San Francisco; Iowa State University; Colorado State University; Pennsylvania State University; Max Planck Society; University of California, Riverside; ANSES; University of Santiago de Compostela; Pompeu Fabra University; California State Polytechnic University, Pomona; University of Queensland; University of the Sunshine Coast; Swiss Institute of Bioinformatics; University of Geneva; University of Copenhagen; University of Tennessee Health Science Center; University of Vigo; Wellcome Trust Sanger Institute; University of Illinois at Urbana–Champaign; Quinnipiac University; International Livestock Research Institute
TL;DR: Insights from genome analyses into parasitic processes unique to ticks, including host 'questing', prolonged feeding, cuticle synthesis, blood meal concentration, novel methods of haemoglobin digestion, haem detoxification, vitellogenesis and prolonged off-host survival are reported.
Abstract: Ticks transmit more pathogens to humans and animals than any other arthropod. We describe the 2.1 Gbp nuclear genome of the tick, Ixodes scapularis (Say), which vectors pathogens that cause Lyme disease, human granulocytic anaplasmosis, babesiosis and other diseases. The large genome reflects accumulation of repetitive DNA, new lineages of retro-transposons, and gene architecture patterns resembling ancient metazoans rather than pancrustaceans. Annotation of scaffolds representing ∼57% of the genome reveals 20,486 protein-coding genes and expansions of gene families associated with tick-host interactions. We report insights from genome analyses into parasitic processes unique to ticks, including host 'questing', prolonged feeding, cuticle synthesis, blood meal concentration, novel methods of haemoglobin digestion, haem detoxification, vitellogenesis and prolonged off-host survival. We identify proteins associated with the agent of human granulocytic anaplasmosis, an emerging disease, and the encephalitis-causing Langat virus, and a population structure correlated to life-history traits and transmission of the Lyme disease agent.
••
Wesley C. Van Voorhis, John H. Adams, Roberto Adelfio, +197 more; Institutions (62)
TL;DR: The results reveal the immense potential for translating the dispersed expertise in biological assays involving human pathogens into drug discovery starting points, by providing open access to new families of molecules, and emphasize how a small additional investment made to help acquire and distribute compounds, and sharing the data, can catalyze drug discovery for dozens of different indications.
Abstract: A major cause of the paucity of new starting points for drug discovery is the lack of interaction between academia and industry. Much of the global resource in biology is present in universities, whereas the focus of medicinal chemistry is still largely within industry. Open source drug discovery, with sharing of information, is clearly a first step towards overcoming this gap. But the interface could especially be bridged through a scale-up of open sharing of physical compounds, which would accelerate the finding of new starting points for drug discovery. The Medicines for Malaria Venture Malaria Box is a collection of over 400 compounds representing families of structures identified in phenotypic screens of pharmaceutical and academic libraries against the Plasmodium falciparum malaria parasite. The set has now been distributed to almost 200 research groups globally in the last two years, with the only stipulation that information from the screens is deposited in the public domain. This paper reports for the first time on 236 screens that have been carried out against the Malaria Box and compares these results with 55 assays that were previously published, in a format that allows a meta-analysis of the combined dataset. The combined biochemical and cellular assays presented here suggest mechanisms of action for 135 (34%) of the compounds active in killing multiple life-cycle stages of the malaria parasite, including asexual blood, liver, gametocyte, gametes and insect ookinete stages. In addition, many compounds demonstrated activity against other pathogens, showing hits in assays with 16 protozoa, 7 helminths, 9 bacterial and mycobacterial species, the dengue fever mosquito vector, and the NCI60 human cancer cell line panel of 60 human tumor cell lines. 
Toxicological, pharmacokinetic and metabolic properties were collected on all the compounds, assisting in the selection of the most promising candidates for murine proof-of-concept experiments and medicinal chemistry programs. The data for all of these assays are presented and analyzed to show how outstanding leads for many indications can be selected. These results reveal the immense potential for translating the dispersed expertise in biological assays involving human pathogens into drug discovery starting points, by providing open access to new families of molecules, and emphasize how a small additional investment made to help acquire and distribute compounds, and sharing the data, can catalyze drug discovery for dozens of different indications. Another lesson is that when multiple screens from different groups are run on the same library, results can be integrated quickly to select the most valuable starting points for subsequent medicinal chemistry efforts.
••
TL;DR: Contrary to the long-standing view that working memory depends on sustained, elevated activity, evidence is presented suggesting that humans can hold information in working memory via “activity-silent” synaptic mechanisms and the results support a synaptic theory of working memory.
Abstract: The ability to hold information in working memory is fundamental for cognition. Contrary to the long-standing view that working memory depends on sustained, elevated activity, we present evidence suggesting that humans can hold information in working memory via “activity-silent” synaptic mechanisms. Using multivariate pattern analyses to decode brain activity patterns, we found that the active representation of an item in working memory drops to baseline when attention shifts away. A targeted pulse of transcranial magnetic stimulation produced a brief reemergence of the item in concurrently measured brain activity. This reactivation effect occurred and influenced memory performance only when the item was potentially relevant later in the trial, which suggests that the representation is dynamic and modifiable via cognitive control. The results support a synaptic theory of working memory.
••
TL;DR: This article reviews recent sensitivity studies of the rapid neutron capture (r-)process, summarizing the extent of such studies and highlighting how they facilitate new insight into r-process nucleosynthesis.
••
TL;DR: The results illustrate the potential for eDNA sampling and metabarcoding approaches to improve quantification of aquatic species diversity in natural environments and point the way towards using eDNA metabarcoding as an index of macrofaunal species abundance.
Abstract: Freshwater fauna are particularly sensitive to environmental change and disturbance. Management agencies frequently use fish and amphibian biodiversity as indicators of ecosystem health and a way to prioritize and assess management strategies. Traditional aquatic bioassessment that relies on capture of organisms via nets, traps and electrofishing gear typically has low detection probabilities for rare species and can injure individuals of protected species. Our objective was to determine whether environmental DNA (eDNA) sampling and metabarcoding analysis can be used to accurately measure species diversity in aquatic assemblages with differing structures. We manipulated the density and relative abundance of eight fish and one amphibian species in replicated 206-L mesocosms. Environmental DNA was filtered from water samples, and six mitochondrial gene fragments were Illumina-sequenced to measure species diversity in each mesocosm. Metabarcoding detected all nine species in all treatment replicates. Additionally, we found a modest, but positive relationship between species abundance and sequencing read abundance. Our results illustrate the potential for eDNA sampling and metabarcoding approaches to improve quantification of aquatic species diversity in natural environments and point the way towards using eDNA metabarcoding as an index of macrofaunal species abundance.
••
TL;DR: Light-induced phase segregation of halide ions in mixed halide perovskite films is tracked through excited-state behavior using emission and transient absorption spectroscopy tools, establishing the time scale with which such separation occurs under laser irradiation (405 nm, 25 mW/cm2 to 1.7 W/cm2) as well as that of the dark recovery.
Abstract: Mixed halide lead perovskites (e.g., CH3NH3PbBrxI3–x) undergo phase segregation creating iodide-rich and bromide-rich domains when subjected to visible irradiation. This intriguing aspect of halide ion movement in mixed halide films is now being tracked through excited-state behavior using emission and transient absorption spectroscopy tools. These transient experiments have allowed us to establish the time scale with which such separation occurs under laser irradiation (405 nm, 25 mW/cm2 to 1.7 W/cm2) as well as dark recovery. While the phase separation occurs with a rate constant of 0.1–0.3 s–1, the recovery occurs over a time period of several minutes to an hour. The relative photoluminescence quantum yield observed for Br-rich regions (em. max 530 nm) is nearly 2 orders of magnitude lower than that of I-rich regions (em. max 760 nm) and arises from the fact that I-rich regions serve as sinks for photogenerated charge carriers. Understanding such cascading charge transfer to localized sites could furth...
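A segregation rate constant of 0.1–0.3 s−1 implies simple first-order growth of the I-rich emission, A(t) = A∞(1 − e^(−kt)). The sketch below generates hypothetical noisy data with k = 0.2 s−1 and recovers k by a linear fit of ln(1 − A/A∞) against t; the data are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# First-order kinetics for light-induced halide segregation:
#   A(t) = A_inf * (1 - exp(-k t)),
# with k in the 0.1-0.3 s^-1 range reported above.
rng = np.random.default_rng(1)
k_true, A_inf = 0.2, 1.0           # s^-1; arbitrary units (assumed)
t = np.linspace(0.5, 15.0, 30)     # s, under illumination
A = A_inf * (1 - np.exp(-k_true * t)) + rng.normal(0, 0.005, t.size)

# Linearize: ln(1 - A/A_inf) = -k t, then fit the slope.
yl = np.log(np.clip(1 - A / A_inf, 1e-6, None))
k_fit = -np.polyfit(t, yl, 1)[0]
print(f"fitted k = {k_fit:.2f} s^-1")   # close to the assumed 0.2
```

The same fit applied to the dark-recovery transient would return a rate constant orders of magnitude smaller, matching the minutes-to-hour recovery described above.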
••
TL;DR: This ultrafast vectorial charge-transfer process illustrates the potential of utilizing compositional gradients to direct charge flow in perovskite-based photovoltaics.
Abstract: All-inorganic cesium lead halide (CsPbX3, X = Br–, I–) perovskites could potentially provide comparable photovoltaic performance with enhanced stability compared to organic–inorganic lead halide species. However, small-bandgap cubic CsPbI3 has been difficult to study due to challenges forming CsPbI3 in the cubic phase. Here, a low-temperature procedure to form cubic CsPbI3 has been developed through a halide exchange reaction using films of sintered CsPbBr3 nanocrystals. The reaction was found to be strongly dependent upon temperature, featuring an Arrhenius relationship. Additionally, film thickness played a significant role in determining internal film structure at intermediate reaction times. Thin films (50 nm) showed only a small distribution of CsPbBrxI3–x species, while thicker films (350 nm) exhibited much broader distributions. Furthermore, internal film structure was ordered, featuring a compositional gradient within film. Transient absorption spectroscopy showed the influence of halide exchange ...
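An Arrhenius dependence like the one reported for the halide-exchange reaction can be extracted from rate constants measured at several temperatures by a linear fit of ln k against 1/T, since ln k = ln A − Ea/(RT). The rate constants below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Arrhenius law: k(T) = A * exp(-Ea / (R T))
R = 8.314                                        # J mol^-1 K^-1
T = np.array([300.0, 320.0, 340.0, 360.0])       # K
k = np.array([1.2e-4, 6.1e-4, 2.5e-3, 8.8e-3])   # s^-1, hypothetical

# Linear fit of ln k vs 1/T: slope = -Ea/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor, s^-1
print(f"Ea = {Ea / 1000:.0f} kJ/mol")   # ~64 kJ/mol for these numbers
```

The strong temperature dependence this implies is why the exchange reaction can be switched on at modest heating while remaining slow at room temperature.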
••
TL;DR: This paper provides an overview of the causes of all major oil price fluctuations between 1973 and 2014 and discusses why price fluctuations remain difficult to predict, despite economists' improved understanding of oil markets.
Abstract: It has been forty years since the oil crisis of 1973/74. This crisis has been one of the defining economic events of the 1970s and has shaped how many economists think about oil price shocks. In recent years, a large literature on the economic determinants of oil price fluctuations has emerged. Drawing on this literature, we first provide an overview of the causes of all major oil price fluctuations between 1973 and 2014. We then discuss why oil price fluctuations remain difficult to predict, despite economists’ improved understanding of oil markets. Unexpected oil price fluctuations are commonly referred to as oil price shocks. We document that, in practice, consumers, policymakers, financial market participants and economists may have different oil price expectations, and that, what may be surprising to some, need not be equally surprising to others.
••
TL;DR: The meta distribution of the SIR, i.e., the distribution of the conditional success probability given the point process, is derived for Poisson bipolar and cellular networks with Rayleigh fading, and bounds, an exact analytical expression, and a simple approximation for it are provided.
Abstract: The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability $P_{\rm s}$ given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as "What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?" Interestingly, in the bipolar model, if the transmit probability $p$ is reduced while increasing the network density $\lambda$ such that the density of concurrent transmitters $\lambda p$ stays constant as $p \rightarrow 0$, $P_{\rm s}$ degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability $p$, the variance of $P_{\rm s}$ approaches a non-zero constant when $p$ is reduced to 0 while keeping the mean success probability constant.
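For the Poisson bipolar model with Rayleigh fading and ALOHA, the conditional success probability given the interferer distances d_i has the closed form P_s = ∏_i [1 − p + p/(1 + θ(r/d_i)^α)], so the meta distribution can be estimated by Monte Carlo over network realizations. The parameter values in this sketch are illustrative, not taken from the paper:

```python
import numpy as np

# Monte Carlo estimate of the SIR meta distribution in a Poisson
# bipolar network with Rayleigh fading and ALOHA.
rng = np.random.default_rng(2)
lam, p, r, alpha = 1.0, 0.25, 0.5, 4.0  # density, ALOHA prob, link length, path-loss exponent (assumed)
theta = 10 ** (5 / 10)                  # SIR threshold: 5 dB
R = 20.0                                # simulation window radius

def cond_success(n_real=4000):
    """Conditional success probability P_s of the typical link,
    one value per realization of the interferer point process."""
    ps = np.empty(n_real)
    area = np.pi * R * R
    for i in range(n_real):
        n = rng.poisson(lam * area)
        # radial distances of uniform points in a disk of radius R
        d = R * np.sqrt(rng.random(n))
        # closed form: average over fading and ALOHA given distances
        ps[i] = np.prod(1 - p + p / (1 + theta * (r / d) ** alpha))
    return ps

ps = cond_success()
print("fraction of links with >= 90% reliability:", np.mean(ps >= 0.9))
```

The printed fraction is exactly the kind of quantity the abstract's "What fraction of users achieve 90% link reliability at 5 dB?" question asks for; the standard success probability is just the mean of `ps` and hides this spread.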
••
Yale University; University of Hertfordshire; Queen Mary University of London; Carnegie Institution for Science; Saint Petersburg State University; University of Chicago; University of Copenhagen; Leibniz Institute for Astrophysics Potsdam; Physical Research Laboratory; University of Notre Dame; Pennsylvania State University; University of Colorado Boulder; National Institute of Standards and Technology; University of Geneva; Harvard University; University of Texas at Austin; University of Porto; New York University; University of Washington; Massachusetts Institute of Technology; Ohio State University; University of British Columbia; Aarhus University; Institut d'Astrophysique de Paris; Aix-Marseille University; Spanish National Research Council; University of California, Santa Cruz; Cornell University; Missouri State University; Lowell Observatory; Heidelberg University; University of Göttingen; INAF; Space Telescope Science Institute; University of Southern Queensland; University of New South Wales
TL;DR: The Second Workshop on Extreme Precision Radial Velocities defined circa 2015 the state of the art Doppler precision and identified the critical path challenges for reaching 10 cm s−1 measurement precision as mentioned in this paper.
Abstract: The Second Workshop on Extreme Precision Radial Velocities defined circa 2015 the state of the art Doppler precision and identified the critical path challenges for reaching 10 cm s^(−1) measurement precision. The presentations and discussion of key issues for instrumentation and data analysis and the workshop recommendations for achieving this bold precision are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions. 
The future of exoplanet science has very different trajectories depending on the precision that can ultimately be achieved with Doppler measurements.
••
TL;DR: In this article, the authors examined intrinsic motivation, creative self-efficacy, and prosocial motivation as distinct motivational mechanisms underlying creativity and found that the three motivational mechanisms functioned differently as mediators between contextual and personal factors and creativity.
••
University of Arizona1, University of California, Santa Cruz2, California Institute of Technology3, Ames Research Center4, University of California, Berkeley5, Georgia State University6, University of Tokyo7, University of North Carolina at Chapel Hill8, Southern Connecticut State University9, Carnegie Learning10, San Diego State University11, Max Planck Society12, University of Sydney13, University of Hertfordshire14, University of California, Los Angeles15, Stockholm University16, Pontifical Catholic University of Chile17, Millennium Institute18, University of Grenoble19, Centre national de la recherche scientifique20, University of Notre Dame21, Stanford University22, University of Liège23
TL;DR: In this article, 197 planet candidates were discovered using data from the first year of the NASA K2 mission (Campaigns 0-4), along with the results of an intensive program of photometric analyses, stellar spectroscopy, high-resolution imaging, and statistical validation.
Abstract: We present 197 planet candidates discovered using data from the first year of the NASA K2 mission (Campaigns 0-4), along with the results of an intensive program of photometric analyses, stellar spectroscopy, high-resolution imaging, and statistical validation. We distill these candidates into sets of 104 validated planets (57 in multi-planet systems), false positives, and 63 remaining candidates. Our validated systems span a range of properties, with median values of RP = 2.3 R⊕, P = 8.6 days, Teff = 5300 K, and Kp = 12.7 mag. Stellar spectroscopy provides precise stellar and planetary parameters for most of these systems. We show that K2 has increased by 30% the number of small planets known to orbit moderately bright stars (1-4 R⊕, Kp = 9-13 mag). Of particular interest are planets smaller than 2 R⊕, orbiting stars brighter than Kp = 11.5 mag, 5 receiving Earth-like irradiation levels, and several multi-planet systems - including 4 planets orbiting the M dwarf K2-72 near mean-motion resonances. By quantifying the likelihood that each candidate is a planet, we demonstrate that our candidate sample has an overall false positive rate of 15%-30%, with rates substantially lower for small candidates (<2 R⊕) and higher for candidates with radii >8 R⊕ and/or with P < 3 days. Extrapolation of the current planetary yield suggests that K2 will discover between 500 and 1000 planets in its planned four-year mission, assuming sufficient follow-up resources are available. Efficient observing and analysis, together with an organized and coherent follow-up strategy, are essential for maximizing the efficacy of planet-validation efforts for K2, TESS, and future large-scale surveys.
••
TL;DR: In this article, the impact of female board representation on firm-level strategic behavior within the domain of mergers and acquisitions (M&A) was examined and it was shown that greater female representation on a firm's board will be negatively associated with both the number of acquisitions the firm engages in and, conditional on doing a deal, acquisition size.
Abstract: This study examines the impact of female board representation on firm-level strategic behavior within the domain of mergers and acquisitions (M&A). We build on social identity theory to predict that greater female representation on a firm's board will be negatively associated with both the number of acquisitions the firm engages in and, conditional on doing a deal, acquisition size. Using a comprehensive, multiyear sample of U.S. public firms, we find strong support for our hypotheses. We demonstrate the robustness of our findings through the use of a difference-in-differences analysis on a subsample of firms that experienced exogenous changes in board gender composition as a result of director deaths.
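The difference-in-differences check described above compares pre-to-post changes at firms that experienced exogenous board changes against changes at unaffected firms over the same window. A minimal sketch of the estimator on toy data (all numbers hypothetical, not the study's sample):

```python
import numpy as np

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate: the treated group's pre-to-post
    change net of the control group's change over the same period."""
    return (np.mean(treat_post) - np.mean(treat_pre)) - \
           (np.mean(ctrl_post) - np.mean(ctrl_pre))

# Toy example: acquisition counts per firm-year (hypothetical numbers).
# "Treated" firms saw an exogenous change in board gender composition.
treat_pre, treat_post = [4, 5, 6], [3, 4, 5]   # treated: average drop of 1
ctrl_pre, ctrl_post = [4, 5, 6], [4, 5, 6]     # controls: no change

effect = diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post)  # -1.0
```

Netting out the control group's trend is what lets the design attribute the remaining change to the board-composition shock rather than to market-wide shifts in M&A activity.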
••
TL;DR: In this article, the implications of 3D simulations of shell burning in supernova progenitors for the perturbations-aided neutrino-driven mechanism are discussed.
Abstract: Models of neutrino-driven core-collapse supernova explosions have matured considerably in recent years. Explosions of low-mass progenitors can routinely be simulated in 1D, 2D, and 3D. Nucleosynthesis calculations indicate that these supernovae could be contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars remains under investigation, although first 3D models of neutrino-driven explosions employing multi-group neutrino transport have become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added/improved before simulations can robustly explain supernova explosions over a wide range of progenitors. Solutions recently suggested in the literature include uncertainties in the neutrino rates, rotation, and seed perturbations from convective shell burning. We review the implications of 3D simulations of shell burning in supernova progenitors for the ‘perturbations-aided neutrino-driven mechanism,’ whose efficacy is illustrated by the first successful multi-group neutrino hydrodynamics simulation of an 18 solar mass progenitor with 3D initial conditions. We conclude with speculations about the impact of 3D effects on the structure of massive stars through convective boundary mixing.
••
TL;DR: In this article, the authors report the results of the statistical analysis of planetary signals discovered in MOA-II microlensing survey alert system events from 2007 to 2012, and determine the survey sensitivity as a function of planet star mass ratio, q, and projected planet star separation, s, in Einstein radius units.
Abstract: We report the results of the statistical analysis of planetary signals discovered in MOA-II microlensing survey alert system events from 2007 to 2012. We determine the survey sensitivity as a function of planet star mass ratio, q, and projected planet star separation, s, in Einstein radius units. We find that the mass-ratio function is not a single power law, but has a change in slope at q approx.10(exp -4), corresponding to approx. 20 Stellar Mass for the median host-star mass of approx. 0.6 M. We find significant planetary signals in 23 of the 1474 alert events that are well-characterized by the MOA-II survey data alone. Data from other groups are used only to characterize planetary signals that have been identified in the MOA data alone. The distribution of mass ratios and separations of the planets found in our sample are well fit by a broken power-law model. We also combine this analysis with the previous analyses of Gould et al. and Cassan et al., bringing the total sample to 30 planets. The unbroken power-law model is disfavored with a p-value of 0.0022, which corresponds to a Bayes factor of 27 favoring the broken power-law model. These results imply that cold Neptunes are likely to be the most common type of planets beyond the snow line.
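The broken power-law mass-ratio function described above can be sketched as a simple piecewise model. The break position matches the abstract (q ≈ 10⁻⁴), but the normalization and slopes below are placeholder values, not the paper's fitted MOA-II parameters:

```python
import numpy as np

Q_BREAK = 1e-4  # break in the mass-ratio function, per the abstract

def mass_ratio_function(q, A=0.6, n=-0.9, m=0.7):
    """Broken power law for dN/d(log q): slope n above the break and slope m
    below it. A, n, m are illustrative placeholders, not fitted values."""
    q = np.asarray(q, dtype=float)
    return np.where(q >= Q_BREAK,
                    A * (q / Q_BREAK) ** n,   # declines toward giant planets
                    A * (q / Q_BREAK) ** m)   # declines toward low-mass planets
```

Because the function falls off on both sides of the break, planets with mass ratios near q ≈ 10⁻⁴ (cold Neptunes, for typical host masses) come out as the most common population beyond the snow line, which is the qualitative conclusion of the abstract.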
••
TL;DR: It is argued that the problem with PDSA is the oversimplification of the method as it has been translated into healthcare and the failure to invest in a rigorous and tailored application of the approach.
Abstract: Quality improvement (QI) methods have been introduced to healthcare to support the delivery of care that is safe, timely, effective, efficient, equitable and cost effective. Of the many QI tools and methods, the Plan-Do-Study-Act (PDSA) cycle is one of the few that focuses on the crux of change, the translation of ideas and intentions into action. As such, the PDSA cycle and the concept of iterative tests of change are central to many QI approaches, including the model for improvement,1 lean,2 six sigma3 and total quality management.4
PDSA provides a structured experimental learning approach to testing changes. Previously, concerns have been raised regarding the fidelity of application of the PDSA method, which may undermine learning efforts,5 the complexity of its use in practice5,6 and the appropriateness of the PDSA method to address the significant challenges of healthcare improvement.7
This article presents our reflections on the full potential of using PDSA in healthcare, but in doing so we explore the inherent complexity and multiple challenges of executing PDSA well. Ultimately, we argue that the problem with PDSA is the oversimplification of the method as it has been translated into healthcare and the failure to invest in a rigorous and tailored application of the approach.
The purpose of the PDSA method lies in learning as quickly as possible whether an intervention works in a particular setting and in making adjustments accordingly to increase the chances of delivering and sustaining the desired improvement. In contrast to controlled trials, PDSAs allow new learning to be built into this experimental process. If problems are identified with the original plan, then the theory can be revised to build on this learning and a subsequent experiment conducted to see if it has resolved the problem, and …
••
TL;DR: High levels of starch in human diets have led to increased levels of glycogen in the vaginal tract, which, in turn, promotes the proliferation of lactobacilli, which may have paved the way for a novel, protective microbiome in human vaginal tracts.
Abstract: The human vaginal microbiome is dominated by bacteria from the genus Lactobacillus, which create an acidic environment thought to protect women against sexually transmitted pathogens and opportunistic infections. Strikingly, lactobacilli dominance appears to be unique to humans; while the relative abundance of lactobacilli in the human vagina is typically >70%, in other mammals lactobacilli rarely comprise more than 1% of vaginal microbiota. Several hypotheses have been proposed to explain humans' unique vaginal microbiota, including humans' distinct reproductive physiology, high risk of STDs, and high risk of microbial complications linked to pregnancy and birth. Here, we test these hypotheses using comparative data on vaginal pH and the relative abundance of lactobacilli in 26 mammalian species and 50 studies (N=21 mammals for pH and 14 mammals for lactobacilli abundance). We found that non-human mammals, like humans, exhibit the lowest vaginal pH during the period of highest estrogen. However, the vaginal pH of non-human mammals is never as low as is typical for humans (median vaginal pH in humans = 4.5; range of pH across all 21 non-human mammals = 5.4 to 7.8). Contrary to disease and obstetric risk hypotheses, we found no significant relationship between vaginal pH or lactobacilli abundance and multiple metrics of STD or birth injury risk (P-values ranged from 0.13 to 0.99). Given the lack of evidence for these hypotheses, we discuss two alternative explanations: the common function hypothesis and a novel hypothesis related to the diet of agricultural humans. Specifically, with regard to diet we propose that high levels of starch in human diets have led to increased levels of glycogen in the vaginal tract, which, in turn, promotes the proliferation of lactobacilli. If true, human diet may have paved the way for a novel, protective microbiome in human vaginal tracts. 
Overall, our results highlight the need for continuing research on non-human vaginal microbial communities and the importance of investigating both the physiological mechanisms and the broad evolutionary processes underlying human lactobacilli dominance.
••
TL;DR: Bovy et al. as mentioned in this paper used data on 14,699 red-clump stars from the APOGEE survey, covering 4 kpc ≲ R ≲ 15 kpc, to determine the structure of mono-abundance populations (MAPs) - stars in narrow bins in [α/Fe] and [Fe/H] - accounting for the complex effects of the selection function and the spatially variable dust obscuration.
Abstract: Author(s): Bovy, J.; Rix, H.-W.; Schlafly, E.F.; Nidever, D.L.; Holtzman, J.A.; Shetrone, M.; Beers, T.C. | The spatial structure of stellar populations with different chemical abundances in the Milky Way (MW) contains a wealth of information on Galactic evolution over cosmic time. We use data on 14,699 red-clump stars from the APOGEE survey, covering 4 kpc ≲ R ≲ 15 kpc, to determine the structure of mono-abundance populations (MAPs) - stars in narrow bins in [α/Fe] and [Fe/H] - accounting for the complex effects of the APOGEE selection function and the spatially variable dust obscuration. We determine that all MAPs with enhanced [α/Fe] are centrally concentrated and are well described as exponentials with a scale length of 2.2 ± 0.2 kpc over the whole radial range of the disk. We discover that the surface-density profiles of low-[α/Fe] MAPs are complex: they do not monotonically decrease outwards, but rather display a peak radius ranging from ≈5 to ≈13 kpc at low [Fe/H]. The extensive radial coverage of the data allows us to measure radial trends in the thickness of each MAP. While high-[α/Fe] MAPs have constant scale heights, low-[α/Fe] MAPs flare. We confirm, now with high-precision abundances, previous results that each MAP contains only a single vertical scale height and that low-[Fe/H], low-[α/Fe] and high-[Fe/H], high-[α/Fe] MAPs have intermediate (hZ ≈ 300-600 pc) scale heights that smoothly bridge the traditional thin- and thick-disk divide. That the high-[α/Fe], thick-disk components do not flare is strong evidence against their thickness being caused by radial migration. The correspondence between the radial structure and chemical-enrichment age of stellar populations is clear confirmation of the inside-out growth of galactic disks. The details of these relations will constrain the variety of physical conditions under which stars form throughout the MW disk.
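The exponential radial profile quoted above for the high-[α/Fe] MAPs can be written out explicitly; Σ₀ is an arbitrary normalization, and the scale length is the value from the abstract:

```latex
% Surface-density profile of the centrally concentrated, alpha-enhanced MAPs:
\Sigma(R) = \Sigma_0 \, e^{-R/h_R}, \qquad h_R = 2.2 \pm 0.2~\mathrm{kpc}
```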
••
TL;DR: Using single-molecule measurements and biochemical assays, it is demonstrated that HDAC6 catalytic domain 2 deacetylated α-tubulin lysine 40 in the lumen of microtubules, but that its preferred substrate was unpolymerized tubulin.
Abstract: We report crystal structures of zebrafish histone deacetylase 6 (HDAC6) catalytic domains in tandem or as single domains in complex with the (R) and (S) enantiomers of trichostatin A (TSA) or with the HDAC6-specific inhibitor nexturastat A. The tandem domains formed, together with the inter-domain linker, an ellipsoid-shaped complex with pseudo-twofold symmetry. We identified important active site differences between both catalytic domains and revealed the binding mode of HDAC6 selective inhibitors. HDAC inhibition assays with (R)- and (S)-TSA showed that (R)-TSA was a broad-range inhibitor, whereas (S)-TSA had moderate selectivity for HDAC6. We identified a uniquely positioned α-helix and a flexible tryptophan residue in the loop joining α-helices H20 to H21 as critical for deacetylation of the physiologic substrate tubulin. Using single-molecule measurements and biochemical assays we demonstrated that HDAC6 catalytic domain 2 deacetylated α-tubulin lysine 40 in the lumen of microtubules, but that its preferred substrate was unpolymerized tubulin.
••
TL;DR: This work investigates the design of acoustic metasurfaces to control elastic guided waves in thin-walled structural elements and shows that anomalous refraction can be achieved on transmitted antisymmetric modes (A_{0}) either when using a symmetric (S_{0}) or antisymmetric (A_{0}) incident wave, the former clearly involving mode conversion.
Abstract: The concept of a metasurface opens new exciting directions to engineer the refraction properties in both optical and acoustic media. Metasurfaces are typically designed by assembling arrays of subwavelength anisotropic scatterers able to mold incoming wave fronts in rather unconventional ways. The concept of a metasurface was pioneered in photonics and later extended to acoustics while its application to the propagation of elastic waves in solids is still relatively unexplored. We investigate the design of acoustic metasurfaces to control elastic guided waves in thin-walled structural elements. These engineered discontinuities enable the anomalous refraction of guided wave modes according to the generalized Snell's law. The metasurfaces are made out of locally resonant toruslike tapers enabling an accurate phase shift of the incoming wave, which ultimately affects the refraction properties. We show that anomalous refraction can be achieved on transmitted antisymmetric modes (A_{0}) either when using a symmetric (S_{0}) or antisymmetric (A_{0}) incident wave, the former clearly involving mode conversion. The same metasurface design also allows achieving structure embedded planar focal lenses and phase masks for nonparaxial propagation.
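For reference, the generalized Snell's law invoked above relates the incident and transmitted angles to the phase gradient imposed by the metasurface; this is the standard form with generic symbols, not necessarily the paper's notation:

```latex
% Generalized Snell's law for a metasurface imposing a phase profile phi(x)
% along the interface. k_i and k_t are the incident and transmitted
% wavenumbers, which differ when mode conversion (e.g. S0 -> A0) occurs.
k_t \sin\theta_t - k_i \sin\theta_i = \frac{d\phi}{dx}
```

With an ordinary interface (dφ/dx = 0) this reduces to Snell's law; a nonzero, engineered phase gradient is what produces the anomalous refraction and the planar focal lenses described in the abstract.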