Showing papers by "University of Notre Dame", published in 2011
••
23 Oct 2011
TL;DR: The degree distribution, two-point correlations, and clustering are the topological properties studied, and an evolution of networks is examined to shed light on the influence the dynamics has on the network topology.
Abstract: Networks have become a general concept to model the structure of arbitrary relationships among entities. The framework of a network introduces a fundamentally new approach, distinct from the ‘classical’ structures found in physics, to model the topology of a system. In the context of networks, fundamentally new topological effects can emerge and lead to a class of topologies termed ‘complex networks’. The concept of a network successfully models the static topology of empirical systems, arbitrary models, and physical systems. Generally, a network serves as a host for some dynamics running on it in order to fulfill a function. The reciprocal relationship between a dynamical process on a network and the network's topology is the subject of this Thesis, and it is studied in both directions: the network topology constrains or enhances the dynamics running on it, while the reverse influence is of equal importance. Networks are commonly the result of an evolutionary process, e.g. protein interaction networks in biology. Within such an evolution, the dynamics shapes the underlying network topology toward optimal achievement of the function to be performed. Answering the question of what influence a particular topological property has on a dynamics requires accurate control over the topological properties in question. In this Thesis, the degree distribution, two-point correlations, and clustering are the topological properties studied, motivated by their ubiquitous presence and importance in almost all empirical networks. As a first step, an analytical framework to measure and control such quantities is developed, along with numerical algorithms to generate networks with prescribed values. Networks with the examined topological properties are then used to reveal their impact on two rather general dynamics on networks.
Finally, an evolution of networks is studied to shed light on the influence the dynamics has on the network topology.
2,720 citations
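The topological properties named in the thesis abstract above (degree distribution, clustering) are directly computable from an adjacency structure. A minimal pure-Python sketch on a hypothetical toy graph; the function names and example graph are illustrative, not taken from the thesis:

```python
from collections import Counter

def degree_distribution(adj):
    """Return a Counter mapping degree k -> number of nodes with degree k."""
    return Counter(len(neigh) for neigh in adj.values())

def average_clustering(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for node, neigh in adj.items():
        k = len(neigh)
        if k < 2:
            continue  # local clustering is taken as zero for degree < 2
        # count edges among the node's neighbors (each pair once, u < v)
        links = sum(1 for u in neigh for v in neigh if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# Toy graph: a triangle (nodes 0, 1, 2) with a pendant node 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Normalizing the Counter by the number of nodes gives the empirical degree distribution P(k); two-point (degree-degree) correlations can be accumulated analogously by iterating over edges.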
••
Space Telescope Science Institute1, University of California, Santa Cruz2, Johns Hopkins University3, Rutgers University4, Durham University5, University of Nottingham6, Harvard University7, University of Innsbruck8, University of Michigan9, DSM10, University of Edinburgh11, University of Massachusetts Amherst12, California Institute of Technology13, UK Astronomy Technology Centre14, University of California, Irvine15, Swinburne University of Technology16, University of Arizona17, Goddard Space Flight Center18, The Catholic University of America19, Hebrew University of Jerusalem20, University of Victoria21, University of California, Berkeley22, Texas A&M University23, University of Notre Dame24, Carnegie Institution for Science25, Smithsonian Institution26, Yale University27, University of Missouri–Kansas City28, University of California, Riverside29, Max Planck Society30, University of Pittsburgh31, Inter-University Centre for Astronomy and Astrophysics32, University of Barcelona33, European Southern Observatory34, University of Minnesota35, National Research Council36, Western Kentucky University37, Stanford University38, Atacama Large Millimeter Submillimeter Array39, University of Missouri40
TL;DR: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) as discussed by the authors was designed to document the first third of galactic evolution, from z ~ 8 to 1.5, and to find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology.
Abstract: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) is designed to document the first third of galactic evolution, from z ~ 8 to 1.5. It will image > 250,000 distant galaxies using three separate cameras on the Hubble Space Telescope, from the mid-UV to near-IR, and will find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology. Five premier multi-wavelength sky regions are selected, each with extensive ancillary data. The use of five widely separated fields mitigates cosmic variance and yields statistically robust and complete samples of galaxies down to a stellar mass of 10^9 solar masses to z ~ 2, reaching the knee of the UV luminosity function of galaxies to z ~ 8. The survey covers approximately 800 square arcminutes and is divided into two parts. The CANDELS/Deep survey (5σ point-source limit H = 27.7 mag) covers ~125 square arcminutes within GOODS-N and GOODS-S. The CANDELS/Wide survey includes GOODS and three additional fields (EGS, COSMOS, and UDS) and covers the full area to a 5σ point-source limit of H ≈ 27.0 mag. Together with the Hubble Ultradeep Fields, the strategy creates a three-tiered "wedding cake" approach that has proven efficient for extragalactic surveys. Data from the survey are non-proprietary and are useful for a wide variety of science investigations. In this paper, we describe the basic motivations for the survey, the CANDELS team science goals and the resulting observational requirements, the field selection and geometry, and the observing design.
2,088 citations
••
Space Telescope Science Institute1, University of California, Santa Cruz2, Johns Hopkins University3, Western Kentucky University4, University of Massachusetts Amherst5, Carnegie Institution for Science6, European Southern Observatory7, Ohio State University8, Rutgers University9, Durham University10, University of Nottingham11, Max Planck Society12, University of Innsbruck13, University of Michigan14, French Alternative Energies and Atomic Energy Commission15, University of Edinburgh16, Harvard University17, California Institute of Technology18, University of California, Irvine19, Swinburne University of Technology20, University of Arizona21, Goddard Space Flight Center22, Hebrew University of Jerusalem23, Victoria University, Australia24, DSM25, University of California, Berkeley26, Texas A&M University27, University of Notre Dame28, Smithsonian Institution29, Yale University30, University of Missouri–Kansas City31, University of California, Riverside32, Imperial College London33, University of Pittsburgh34, Inter-University Centre for Astronomy and Astrophysics35, National Research Council36, Stanford University37
TL;DR: In this paper, the authors describe the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS).
Abstract: This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ~ 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ~125 arcmin² within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ~800 arcmin² across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey is presented in a companion paper.
2,011 citations
••
TL;DR: This Review highlights mechanisms that have evolved in microorganisms to allow them to successfully enter and exit a dormant state, and discusses the implications of microbial seed banks for evolutionary dynamics, population persistence, maintenance of biodiversity, and the stability of ecosystem processes.
Abstract: Dormancy is a bet-hedging strategy used by a wide range of taxa, including microorganisms. It refers to an organism's ability to enter a reversible state of low metabolic activity when faced with unfavourable environmental conditions. Dormant microorganisms generate a seed bank, which comprises individuals that are capable of being resuscitated following environmental change. In this Review, we highlight mechanisms that have evolved in microorganisms to allow them to successfully enter and exit a dormant state, and discuss the implications of microbial seed banks for evolutionary dynamics, population persistence, maintenance of biodiversity, and the stability of ecosystem processes.
1,399 citations
••
TL;DR: A new freshwater lake phylogeny constructed from all published 16S rRNA gene sequences from lake epilimnia is presented and a unifying vocabulary to discuss freshwater taxa is proposed, providing a coherent framework for future studies.
Abstract: Freshwater bacteria are at the hub of biogeochemical cycles and control water quality in lakes. Despite this, little is known about the identity and ecology of functionally significant lake bacteria. Molecular studies have identified many abundant lake bacteria, but there is a large variation in the taxonomic or phylogenetic breadths among the methods used for this exploration. Because of this, an inconsistent and overlapping naming structure has developed for freshwater bacteria, creating a significant obstacle to identifying coherent ecological traits among these groups. A discourse that unites the field is sorely needed. Here we present a new freshwater lake phylogeny constructed from all published 16S rRNA gene sequences from lake epilimnia and propose a unifying vocabulary to discuss freshwater taxa. With this new vocabulary in place, we review the current information on the ecology, ecophysiology, and distribution of lake bacteria and highlight newly identified phylotypes. In the second part of our review, we conduct meta-analyses on the compiled data, identifying distribution patterns for bacterial phylotypes among biomes and across environmental gradients in lakes. We conclude by emphasizing the role that this review can play in providing a coherent framework for future studies.
1,230 citations
••
01 Jan 2011
TL;DR: This article explains what adjusted predictions and marginal effects are and how they can contribute to the interpretation of results, and shows how the marginsplot command provides a graphical and often much easier means for presenting and understanding the results from margins.
Abstract: As Long & Freese show, it can often be helpful to compute predicted/expected values for hypothetical or prototypical cases. Stata 11 introduced new tools for making such calculations – factor variables and the margins command. These can do many of the things that were previously done by Stata’s own adjust and mfx commands, as well as Long & Freese’s spost9 commands like prvalue. Unfortunately, the complexity of the margins syntax, the daunting 50-page reference manual entry that describes it, and a lack of understanding about what margins offers over older commands may have dissuaded researchers from using it. This paper therefore shows how margins can easily replicate analyses done by older commands. It demonstrates how margins provides a superior means for dealing with interdependent variables (e.g. X and X^2; X1, X2, and X1 * X2; multiple dummies created from a single categorical variable), and is also superior for data that are svyset. The paper explains how the new asobserved option works and the substantive reasons for preferring it over the atmeans approach used by older commands. The paper primarily focuses on the computation of adjusted predictions but also shows how margins has the same advantages for computing marginal effects.
1,228 citations
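The atmeans-versus-asobserved distinction discussed in the abstract above hinges on the nonlinearity of the link function: predicting at the covariate means is not the same as averaging per-observation predictions. A hedged illustration in Python rather than Stata; the logistic model, coefficients, and data below are hypothetical:

```python
import math

def logistic(z):
    """Standard logistic (inverse-logit) function."""
    return 1.0 / (1.0 + math.exp(-z))

def atmeans_prediction(b0, b1, xs):
    """Adjusted prediction with the covariate held at its mean (atmeans)."""
    xbar = sum(xs) / len(xs)
    return logistic(b0 + b1 * xbar)

def asobserved_prediction(b0, b1, xs):
    """Average of per-observation predictions (asobserved): predict for
    each case at its observed covariate value, then average."""
    return sum(logistic(b0 + b1 * x) for x in xs) / len(xs)

xs = [0.0, 1.0, 4.0]   # hypothetical covariate values
b0, b1 = 0.0, 1.0      # hypothetical fitted coefficients
```

For this skewed sample the two quantities differ noticeably (here atmeans ≈ 0.841 versus asobserved ≈ 0.738), which is the Jensen's-inequality effect underlying the paper's preference discussion; for a linear model they would coincide exactly.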
••
1,222 citations
••
Indiana University1, Utah State University2, University of Notre Dame3, University of New Hampshire4, University of California, Santa Barbara5, University of Tokyo6, United States Department of Energy7, Ludwig Maximilian University of Munich8, J. Craig Venter Institute9, National Institutes of Health10, University of Illinois at Urbana–Champaign11, Hebrew University of Jerusalem12, University of North Texas13, Harvard University14, Research Institute of Molecular Pathology15, University of Geneva16, Oregon State University17, Utrecht University18, University of California, Davis19, Hoffmann-La Roche20, University of Iowa21, University of Strasbourg22, University of Washington23, University of Texas at Arlington24, University of California, Santa Cruz25, Life Technologies26, New York University27, University of Guelph28, Imperial College London29, University of California, Berkeley30
TL;DR: The Daphnia genome reveals a multitude of genes and shows adaptation through gene family expansions, and the coexpansion of gene families interacting within metabolic pathways suggests that the maintenance of duplicated genes is not random.
Abstract: We describe the draft genome of the microcrustacean Daphnia pulex, which is only 200 megabases and contains at least 30,907 genes. The high gene count is a consequence of an elevated rate of gene duplication resulting in tandem gene clusters. More than a third of Daphnia's genes have no detectable homologs in any other available proteome, and the most amplified gene families are specific to the Daphnia lineage. The coexpansion of gene families interacting within metabolic pathways suggests that the maintenance of duplicated genes is not random, and the analysis of gene expression under different environmental conditions reveals that numerous paralogs acquire divergent expression patterns soon after duplication. Daphnia-specific genes, including many additional loci within sequenced regions that are otherwise devoid of annotations, are the most responsive genes to ecological challenges.
1,204 citations
••
TL;DR: A distributed event-triggering scheme is proposed in which a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold, allowing each subsystem to make broadcast decisions using its locally sampled data.
Abstract: This paper examines event-triggered data transmission in distributed networked control systems with packet loss and transmission delays. We propose a distributed event-triggering scheme, where a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold. In this scheme, a subsystem is able to make broadcast decisions using its locally sampled data. It can also locally predict the maximal allowable number of successive data dropouts (MANSD) and the state-based deadlines for transmission delays. Moreover, the designer's selection of the local event for a subsystem only requires information on that individual subsystem. Our analysis applies to both linear and nonlinear subsystems. Designing local events for a nonlinear subsystem requires us to find a controller that ensures that subsystem to be input-to-state stable. For linear subsystems, the design problem becomes a linear matrix inequality feasibility problem. With the assumption that the number of each subsystem's successive data dropouts is less than its MANSD, we show that if the transmission delays are zero, the resulting system is finite-gain Lp stable. If the delays are bounded by given deadlines, the system is asymptotically stable. We also show that those state-based deadlines for transmission delays are always greater than a positive constant.
1,134 citations
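The broadcast rule described in the abstract above (transmit only when the local state error exceeds a threshold) can be illustrated with a minimal scalar simulation. This is a hedged sketch under assumed plant dynamics, gains, and threshold; it is not the paper's actual scheme, Lp-stability analysis, or MANSD computation:

```python
def simulate_event_triggered(a=1.0, b=1.0, k=3.0, eps=0.05,
                             x0=1.0, dt=0.001, steps=5000):
    """Scalar plant dx/dt = a*x + b*u with feedback u = -k*xhat, where
    xhat is the last broadcast state.  A broadcast is triggered only when
    the local state error |x - xhat| exceeds the threshold eps.
    All parameter values are hypothetical."""
    x, xhat, broadcasts = x0, x0, 0
    for _ in range(steps):
        if abs(x - xhat) > eps:   # event condition: state error too large
            xhat = x              # broadcast current state to neighbors
            broadcasts += 1
        u = -k * xhat
        x += dt * (a * x + b * u)  # forward-Euler integration step
    return x, broadcasts
```

With these values the state settles into a small neighborhood of the origin while the number of broadcasts stays far below the number of integration steps, illustrating the communication savings of event-triggering relative to periodic transmission.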
••
TL;DR: Quantitative comparisons with traditional fisheries surveillance tools illustrate the greater sensitivity of eDNA and reveal that the risk of invasion to the Laurentian Great Lakes is imminent.
Abstract: Effective management of rare species, including endangered native species and recently introduced nonindigenous species, requires the detection of populations at low density. For endangered species, detecting the localized distribution makes it possible to identify and protect critical habitat to enhance survival or reproductive success. Similarly, early detection of an incipient invasion by a harmful species increases the feasibility of rapid responses to eradicate the species or contain its spread. Here we demonstrate the efficacy of environmental DNA (eDNA) as a detection tool in freshwater environments. Specifically, we delimit the invasion fronts of two species of Asian carps in Chicago, Illinois, USA area canals and waterways. Quantitative comparisons with traditional fisheries surveillance tools illustrate the greater sensitivity of eDNA and reveal that the risk of invasion to the Laurentian Great Lakes is imminent.
965 citations
••
Indiana University1, Pasteur Institute2, Washington University in St. Louis3, University of British Columbia4, Cubist Pharmaceuticals5, University of Turku6, Lahey Hospital & Medical Center7, Harvard University8, University of Medicine and Dentistry of New Jersey9, The Evergreen State College10, Wayne State University11, Tufts University12, Northeastern University13, University of California, Los Angeles14, University of Notre Dame15, University of Birmingham16, MedImmune17, Rockefeller University18, Université catholique de Louvain19, Cardiff University20, Cold Spring Harbor Laboratory21, Robert Koch Institute22, McMaster University23, University of Oklahoma24
TL;DR: To explore how the problem of antibiotic resistance might best be addressed, a group of 30 scientists from academia and industry gathered at the Banbury Conference Centre in Cold Spring Harbor, New York, USA, from 16 to 18 May 2011.
Abstract: The development and spread of antibiotic resistance in bacteria is a universal threat to both humans and animals that is generally not preventable but can nevertheless be controlled, and it must be tackled in the most effective ways possible. To explore how the problem of antibiotic resistance might best be addressed, a group of 30 scientists from academia and industry gathered at the Banbury Conference Centre in Cold Spring Harbor, New York, USA, from 16 to 18 May 2011. From these discussions there emerged a priority list of steps that need to be taken to resolve this global crisis.
••
TL;DR: Cross-sectional analyses can imply the existence of a substantial indirect effect even when the true longitudinal indirect effect is zero, and a variable that is found to be a strong mediator in a cross-sectional analysis may not be a mediator at all in a longitudinal analysis.
Abstract: Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can also occur with partial mediation. In particular, cross-sectional analyses can imply the existence of a substantial indirect effect even when the true longitudinal indirect effect is zero. Thus, a variable that is found to be a strong mediator in a cross-sectional analysis may not be a mediator at all in a longitudinal analysis. In addition, we show that very different combinations of longitudinal parameter values can lead to essentially identical cross-sectional correlations, raising serious questions about the interpretability of cross-sectional mediation data. More generally, researchers are encouraged to consider a wide variety of possible mediation models beyond simple cross-sectional models, including but not restricted to autoregressive models of change.
••
TL;DR: In this article, the transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution.
Abstract: Measurements of the jet energy calibration and transverse momentum resolution in CMS are presented, performed with a data sample collected in proton-proton collisions at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 36 pb−1. The transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution. The results are presented for three different methods to reconstruct jets: a calorimeter-based approach, the "Jet-Plus-Track" approach, which improves the measurement of calorimeter jets by exploiting the associated tracks, and the "Particle Flow" approach, which attempts to reconstruct each particle in the event individually, prior to the jet clustering, based on information from all relevant subdetectors.
••
TL;DR: In this paper, the authors developed a bid-ask spread estimator from daily high and low prices, which can be applied in a variety of research areas, and generally outperforms other low-frequency estimators.
Abstract: We develop a bid-ask spread estimator from daily high and low prices. Daily high (low) prices are almost always buy (sell) trades. Hence, the high-low ratio reflects both the stock’s variance and its bid-ask spread. While the variance component of the high-low ratio is proportional to the return interval, the spread component is not. This allows us to derive a spread estimator as a function of high-low ratios over one-day and two-day intervals. The estimator is easy to calculate, can be applied in a variety of research areas, and generally outperforms other low-frequency estimators.
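The estimator described above combines squared log high-low ratios over one-day and two-day intervals. A hedged Python sketch of the basic closed-form version of this high-low estimator; it omits the authors' overnight-return adjustments and averaging refinements, and the floor of negative estimates at zero is one common convention, not necessarily the authors' exact treatment:

```python
import math

def high_low_spread(highs, lows):
    """Bid-ask spread estimate from daily high and low prices, built from
    consecutive two-day windows.  Variable names are illustrative."""
    k = 3 - 2 * math.sqrt(2)
    spreads = []
    for t in range(len(highs) - 1):
        # beta: sum of squared one-day log(high/low) over the two days
        beta = (math.log(highs[t] / lows[t]) ** 2
                + math.log(highs[t + 1] / lows[t + 1]) ** 2)
        # gamma: squared log(high/low) over the combined two-day window
        hi2 = max(highs[t], highs[t + 1])
        lo2 = min(lows[t], lows[t + 1])
        gamma = math.log(hi2 / lo2) ** 2
        alpha = (math.sqrt(2 * beta) - math.sqrt(beta)) / k \
                - math.sqrt(gamma / k)
        s = 2 * (math.exp(alpha) - 1) / (1 + math.exp(alpha))
        spreads.append(max(s, 0.0))  # floor negative estimates at zero
    return sum(spreads) / len(spreads)
```

The key idea from the abstract is visible in the algebra: the variance contribution scales with the return interval while the spread contribution does not, so differencing the one-day (beta) and two-day (gamma) ratios isolates the spread.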
••
University of Barcelona1, University of Melbourne2, University of Valle3, University of Ghana4, University of Notre Dame5, University of Bamako6, University of London7, National Institutes of Health8, University of Maryland, Baltimore9, World Health Organization10, Imperial College London11, Centers for Disease Control and Prevention12, Swiss Tropical and Public Health Institute13
TL;DR: The Malaria Eradication Research Agenda initiative and the set of articles published in this PLoS Medicine Supplement that distill the research questions key to malaria eradication are introduced.
Abstract: The interruption of malaria transmission worldwide is one of the greatest challenges for international health and development communities. The current expert view suggests that, by aggressively scaling up control with currently available tools and strategies, much greater gains could be achieved against malaria, including elimination from a number of countries and regions; however, even with maximal effort we will fall short of global eradication. The Malaria Eradication Research Agenda (malERA) complements the current research agenda—primarily directed towards reducing morbidity and mortality—with one that aims to identify key knowledge gaps and define the strategies and tools that will result in reducing the basic reproduction rate to less than 1, with the ultimate aim of eradication of the parasite from the human population. Sustained commitment from local communities, civil society, policy leaders, and the scientific community, together with a massive effort to build a strong base of researchers from the endemic areas will be critical factors in the success of this new agenda.
••
TL;DR: In this article, the authors studied the dijet transverse momentum imbalance in PbPb collisions at the LHC as a function of collision centrality, using a data sample of 6.7 inverse microbarns.
Abstract: Jet production in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV was studied with the CMS detector at the LHC, using a data sample corresponding to an integrated luminosity of 6.7 inverse microbarns. Jets are reconstructed using the energy deposited in the CMS calorimeters and studied as a function of collision centrality. With increasing collision centrality, a striking imbalance in dijet transverse momentum is observed, consistent with jet quenching. The observed effect extends from the lower cut-off used in this study (jet transverse momentum = 120 GeV/c) up to the statistical limit of the available data sample (jet transverse momentum approximately 210 GeV/c). Correlations of charged particle tracks with jets indicate that the momentum imbalance is accompanied by a softening of the fragmentation pattern of the second most energetic, away-side jet. The dijet momentum balance is recovered when integrating low transverse momentum particles distributed over a wide angular range relative to the direction of the away-side jet.
••
TL;DR: Graphene-based assemblies are gaining attention as a viable alternative to boost the efficiency of various catalytic and storage reactions in energy conversion applications as discussed by the authors, and the use of reduced graphene oxide has already proved useful in collecting and transporting charge in photoelectrochemical solar cells, photocatalysis, and electrocatalysis.
Abstract: Graphene-based assemblies are gaining attention as a viable alternative to boost the efficiency of various catalytic and storage reactions in energy conversion applications. The use of reduced graphene oxide has already proved useful in collecting and transporting charge in photoelectrochemical solar cells, photocatalysis, and electrocatalysis. In many of these applications, the flat carbon serves as a scaffold to anchor metal and semiconductor nanoparticles and assists in promoting selectivity and efficiency of the catalytic process. Covalent and noncovalent interaction with organic molecules is another area that is expected to provide new frontiers in graphene research. Recent advances in manipulating graphene-based two-dimensional carbon architecture for energy conversion are described.
••
TL;DR: Interestingly, the films which exhibited the fastest electron transfer rates were not the same as those which showed the highest photocurrent, suggesting that, in addition to electron transfer at the quantum dot-metal oxide interface, other electron transfer reactions play key roles in the determination of overall device efficiency.
Abstract: Quantum dot-metal oxide junctions are an integral part of next-generation solar cells, light emitting diodes, and nanostructured electronic arrays. Here we present a comprehensive examination of electron transfer at these junctions, using a series of CdSe quantum dot donors (sizes 2.8, 3.3, 4.0, and 4.2 nm in diameter) and metal oxide nanoparticle acceptors (SnO2, TiO2, and ZnO). Apparent electron transfer rate constants showed strong dependence on change in system free energy, exhibiting a sharp rise at small driving forces followed by a modest rise further away from the characteristic reorganization energy. The observed trend mimics the predicted behavior of electron transfer from a single quantum state to a continuum of electron accepting states, such as those present in the conduction band of a metal oxide nanoparticle. In contrast with dye-sensitized metal oxide electron transfer studies, our systems did not exhibit unthermalized hot-electron injection due to relatively large ratios of electron cooling rate to electron transfer rate. To investigate the implications of these findings in photovoltaic cells, quantum dot-metal oxide working electrodes were constructed in an identical fashion to the films used for the electron transfer portion of the study. Interestingly, the films which exhibited the fastest electron transfer rates (SnO2) were not the same as those which showed the highest photocurrent (TiO2). These findings suggest that, in addition to electron transfer at the quantum dot-metal oxide interface, other electron transfer reactions play key roles in the determination of overall device efficiency.
••
University of Washington1, University of Wisconsin-Madison2, Texas A&M University3, National Taiwan University4, University of Genoa5, University of Notre Dame6, TRIUMF7, Lawrence Berkeley National Laboratory8, Yale University9, Hebrew University of Jerusalem10, University of Naples Federico II11, Colorado School of Mines12, Weizmann Institute of Science13, Osaka University14, University of South Carolina15, Goethe University Frankfurt16, University of Pisa17, Argonne National Laboratory18, Sungkyunkwan University19, Old Dominion University20, Thomas Jefferson National Accelerator Facility21, University of Catania22, Ruhr University Bochum23, GSI Helmholtz Centre for Heavy Ion Research24, Technische Universität München25
TL;DR: The available data on nuclear fusion cross sections important to energy generation in the Sun and other hydrogen-burning stars and to solar neutrino production are summarized and critically evaluated in this article.
Abstract: The available data on nuclear fusion cross sections important to energy generation in the Sun and other hydrogen-burning stars and to solar neutrino production are summarized and critically evaluated. Recommended values and uncertainties are provided for key cross sections, and a recommended spectrum is given for {sup 8}B solar neutrinos. Opportunities for further increasing the precision of key rates are also discussed, including new facilities, new experimental techniques, and improvements in theory. This review, which summarizes the conclusions of a workshop held at the Institute for Nuclear Theory, Seattle, in January 2009, is intended as a 10-year update and supplement to the 1998 review, Rev. Mod. Phys. 70, 1265.
••
TL;DR: In this article, the authors acknowledge partial support from a National Science Foundation Graduate Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a research grant from King Abdullah University of Science and Technology (KAUST) and Stanford University.
Abstract: The first author acknowledges the partial support by a National Science Foundation Graduate Fellowship and the partial support by a National Defense Science and Engineering Graduate Fellowship. The second and third authors acknowledge the partial support by the Motor Sports Division of the Toyota Motor Corporation under Agreement Number 48737, and the partial support by a research grant from the Academic Excellence Alliance program between King Abdullah University of Science and Technology (KAUST) and Stanford University. All authors also acknowledge the constructive comments received during the review process.
••
University of Notre Dame1, Michigan State University2, University of New Hampshire3, University of Wyoming4, Oak Ridge National Laboratory5, University of Tennessee6, Marine Biological Laboratory7, Oregon State University8, University of Maryland, College Park9, University of New Mexico10, Kansas State University11, United States Department of Agriculture12, Montana State University13, Virginia Tech14, Central Washington University15, Ball State University16, Wright State University17, University of Georgia18, Indiana University19, University of Canterbury20, Arizona State University21, United States Geological Survey22, Washington State University Vancouver23
TL;DR: It is found that stream denitrification produces N2O at rates that increase with stream water nitrate (NO3−) concentrations, but that <1% of denitrified N is converted to N2O, and it is suggested that increased stream NO3− loading stimulates denitrification and concomitant N2O production but does not increase the N2O yield.
Abstract: Nitrous oxide (N2O) is a potent greenhouse gas that contributes to climate change and stratospheric ozone destruction. Anthropogenic nitrogen (N) loading to river networks is a potentially important source of N2O via microbial denitrification that converts N to N2O and dinitrogen (N2). The fraction of denitrified N that escapes as N2O rather than N2 (i.e., the N2O yield) is an important determinant of how much N2O is produced by river networks, but little is known about the N2O yield in flowing waters. Here, we present the results of whole-stream 15N-tracer additions conducted in 72 headwater streams draining multiple land-use types across the United States. We found that stream denitrification produces N2O at rates that increase with stream water nitrate (NO3−) concentrations, but that <1% of denitrified N is converted to N2O. Unlike some previous studies, we found no relationship between the N2O yield and stream water NO3−. We suggest that increased stream NO3− loading stimulates denitrification and concomitant N2O production, but does not increase the N2O yield. In our study, most streams were sources of N2O to the atmosphere and the highest emission rates were observed in streams draining urban basins. Using a global river network model, we estimate that microbial N transformations (e.g., denitrification and nitrification) convert at least 0.68 Tg·y−1 of anthropogenic N inputs to N2O in river networks, equivalent to 10% of the global anthropogenic N2O emission rate. This estimate of stream and river N2O emissions is three times greater than estimated by the Intergovernmental Panel on Climate Change.
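The headline figures in this abstract can be cross-checked with a line of arithmetic: if 0.68 Tg·y−1 is 10% of the global anthropogenic N2O emission rate, that global rate is about 6.8 Tg·y−1, and a river-network estimate three times the IPCC's implies an IPCC figure near 0.23 Tg·y−1. A minimal sketch (variable names are ours, not the paper's):

```python
# Back-of-envelope check of the abstract's figures (units: Tg N per year).
river_n2o = 0.68          # microbial N2O production in river networks
share_of_global = 0.10    # stated fraction of global anthropogenic N2O

# Implied global anthropogenic N2O emission rate
global_anthropogenic = river_n2o / share_of_global   # ~6.8 Tg/y

# The paper's river estimate is three times the IPCC estimate
ipcc_river_estimate = river_n2o / 3                  # ~0.23 Tg/y

print(f"global anthropogenic: {global_anthropogenic:.1f} Tg/y")
print(f"implied IPCC river estimate: {ipcc_river_estimate:.2f} Tg/y")
```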
••
TL;DR: Kim et al. present a non-nucleophilic electrolyte prepared from hexamethyldisilazide magnesium chloride and aluminium trichloride, and show its compatibility with a sulphur cathode.
Abstract: Magnesium is an ideal rechargeable battery anode material, but coupling it with a low-cost sulphur cathode requires a non-nucleophilic electrolyte. Kim et al. prepare a non-nucleophilic electrolyte from hexamethyldisilazide magnesium chloride and aluminium trichloride, and show its compatibility with a sulphur cathode.
••
Boston University1, University of Notre Dame2, Pontifical Catholic University of Chile3, University of Florida4, University of California, Berkeley5, University of Michigan6, Georgetown University7, Case Western Reserve University8, University of Texas at Austin9, Emory University10, Aarhus University11, Lund University12
TL;DR: In this article, the authors argue for an approach to conceptualize and measure regimes such that meaningful comparisons can be made through time and across countries, and review some of the payoffs such an approach might bring to the study of democracy.
Abstract: In the wake of the Cold War, democracy has gained the status of a mantra. Yet there is no consensus about how to conceptualize and measure regimes such that meaningful comparisons can be made through time and across countries. In this prescriptive article, we argue for a new approach to conceptualization and measurement. We first review some of the weaknesses among traditional approaches. We then lay out our approach, which may be characterized as historical, multidimensional, disaggregated, and transparent. We end by reviewing some of the payoffs such an approach might bring to the study of democracy.
••
TL;DR: The resulting integrated SWAN + ADCIRC system is highly scalable and allows for localized increases in resolution without the complexity or cost of nested meshes or global interpolation between heterogeneous meshes.
••
TL;DR: A wide range of promising laboratory and consumer biotechnological applications from microscale genetic and proteomic analysis kits, cell culture and manipulation platforms, biosensors, and pathogen detection systems to point-of-care diagnostic devices, high-throughput combinatorial drug screening platforms, schemes for targeted drug delivery and advanced therapeutics, and novel biomaterials synthesis for tissue engineering are reviewed.
Abstract: Harnessing the ability to precisely and reproducibly actuate fluids and manipulate bioparticles such as DNA, cells, and molecules at the microscale, microfluidics is a powerful tool that is currently revolutionizing chemical and biological analysis by replicating laboratory bench-top technology on a miniature chip-scale device, thus allowing assays to be carried out at a fraction of the time and cost while affording portability and field-use capability. Emerging from a decade of research and development in microfluidic technology are a wide range of promising laboratory and consumer biotechnological applications from microscale genetic and proteomic analysis kits, cell culture and manipulation platforms, biosensors, and pathogen detection systems to point-of-care diagnostic devices, high-throughput combinatorial drug screening platforms, schemes for targeted drug delivery and advanced therapeutics, and novel biomaterials synthesis for tissue engineering. The developments associated with these technological advances along with their respective applications to date are reviewed from a broad perspective and possible future directions that could arise from the current state of the art are discussed.
••
TL;DR: Reduced graphene oxide (RGO)-Cu2S composite has now succeeded in shuttling electrons through the RGO sheets and polysulfide-active Cu2S more efficiently than a Pt electrode, improving the fill factor by ∼75%.
Abstract: Polysulfide electrolyte that is employed as a redox electrolyte in quantum dot sensitized solar cells provides stability to the cadmium chalcogenide photoanode but introduces significant redox limitations at the counter electrode through undesirable surface reactions. By designing a reduced graphene oxide (RGO)-Cu2S composite, we have now succeeded in shuttling electrons through the RGO sheets and polysulfide-active Cu2S more efficiently than a Pt electrode, improving the fill factor by ∼75%. The composite material, characterized and optimized at different compositions, indicates that a Cu/RGO mass ratio of 4 provides the best electrochemical performance. A sandwich CdSe quantum dot sensitized solar cell constructed using the optimized RGO-Cu2S composite counter electrode exhibited an unsurpassed power conversion efficiency of 4.4%.
••
TL;DR: The goal is to provide a useful assessment of the obstacles associated with integrating DNA-based methods into aquatic invasive species management, and to offer recommendations for future efforts aimed at overcoming those obstacles.
••
TL;DR: In this paper, a structural VAR approach is proposed to identify news shocks about future technology, and the news shock is identified as the shock orthogonal to the innovation in current utilization-adjusted TFP that best explains variation in future TFP.
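The identification described here (a "max-share" scheme) can be sketched numerically: estimate a reduced-form VAR with TFP ordered first, then pick the orthogonalized shock with zero impact on current TFP that maximizes the implied forecast-error variance of TFP over a long horizon. The toy data-generating process, lag length, and horizon below are our own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate toy data from a known VAR(1): y_t = A y_{t-1} + u_t ---
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.1, 0.0, 0.7]])
T, n = 2000, 3
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.normal(size=n)

# --- estimate the reduced-form VAR(1) by OLS (TFP ordered first) ---
X, Y = y[:-1], y[1:]
A = np.linalg.lstsq(X, Y, rcond=None)[0].T      # n x n coefficient matrix
U = Y - X @ A.T                                 # reduced-form residuals
Sigma = U.T @ U / len(U)                        # residual covariance
B = np.linalg.cholesky(Sigma)                   # impact matrix, TFP first

# --- max-share identification of the news shock ---
# The contribution of a unit shock q to TFP's forecast-error variance
# over horizons 0..H is q' S q, with S = sum_h c_h c_h', c_h = B'(A^h)'e1.
H = 40
e1 = np.zeros(n)
e1[0] = 1.0
S = np.zeros((n, n))
Ah = np.eye(n)
for _ in range(H + 1):
    c = B.T @ Ah.T @ e1
    S += np.outer(c, c)
    Ah = Ah @ A

# Orthogonality to the current TFP innovation forces q[0] = 0; maximizing
# q' S q over unit vectors then picks the leading eigenvector of S[1:, 1:].
vals, vecs = np.linalg.eigh(S[1:, 1:])
q = np.concatenate(([0.0], vecs[:, -1]))        # news-shock rotation vector
news_shock = B @ q                              # impact responses to the news shock

print("impact response of TFP to news shock:", news_shock[0])  # zero by construction
```

Because B is lower triangular and q[0] = 0, the news shock moves current TFP by exactly zero on impact while loading as much of future TFP variation as possible onto a single rotation of the reduced-form innovations.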
••
TL;DR: Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted and ethical imperatives for psychologists' involvement are considered.
Abstract: Global climate change poses one of the greatest challenges facing humanity in this century. This article, which introduces the American Psychologist special issue on global climate change, follows from the report of the American Psychological Association Task Force on the Interface Between Psychology and Global Climate Change. In this article, we place psychological dimensions of climate change within the broader context of human dimensions of climate change by addressing (a) human causes of, consequences of, and responses (adaptation and mitigation) to climate change and (b) the links between these aspects of climate change and cognitive, affective, motivational, interpersonal, and organizational responses and processes. Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted. We also consider ethical imperatives for psychologists' involvement and provide suggestions for ways to increase psychologists' contribution to the science of climate change.
••
TL;DR: The resulting measure, the Multidimensional Experiential Avoidance Questionnaire, or MEAQ, exhibited good internal consistency, was substantially associated with other measures of avoidance, and demonstrated greater discrimination vis-à-vis neuroticism relative to preexisting measures of EA.
Abstract: Experiential avoidance (EA) has been conceptualized as the tendency to avoid negative internal experiences and is an important concept in numerous conceptualizations of psychopathology as well as theories of psychotherapy. Existing measures of EA have either been narrowly defined or demonstrated unsatisfactory internal consistency and/or evidence of poor discriminant validity vis-à-vis neuroticism. To help address these problems, we developed a reliable self-report questionnaire assessing a broad range of EA content that was distinguishable from higher order personality traits. An initial pool of 170 items was administered to a sample of undergraduates (N = 312) to help evaluate individual items and establish a structure via exploratory factor analyses. A revised set of items was then administered to another sample of undergraduates (N = 314) and a sample of psychiatric outpatients (N = 201). A second round of item evaluation was performed, resulting in a final 62-item measure consisting of 6 subscales. Cross-validation data were gathered in 3 new, independent samples (students, N = 363; patients, N = 265; community adults, N = 215). The resulting measure (the Multidimensional Experiential Avoidance Questionnaire, or MEAQ) exhibited good internal consistency, was substantially associated with other measures of avoidance, and demonstrated greater discrimination vis-à-vis neuroticism relative to preexisting measures of EA. Furthermore, the MEAQ was broadly associated with psychopathology and quality of life, even after controlling for the effects of neuroticism.
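Internal consistency of the kind reported for the MEAQ is conventionally quantified with Cronbach's alpha. A minimal sketch on synthetic item responses (the data, item count, and loadings are illustrative, not from the paper):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic example: 6 items driven by one latent trait plus noise,
# mimicking a single internally consistent subscale.
rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
items = trait + 0.5 * rng.normal(size=(300, 6))

print(f"alpha = {cronbach_alpha(items):.2f}")  # high when items share a common trait
```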