
Showing papers by "University of Notre Dame" published in 2011


BookDOI
23 Oct 2011
TL;DR: The degree distribution, two-point correlations, and clustering are the topological properties studied, and an evolution of networks is examined to shed light on the influence the dynamics has on the network topology.
Abstract: Networks have become a general concept for modeling the structure of arbitrary relationships among entities. The network framework introduces a fundamentally new approach, apart from the 'classical' structures found in physics, to modeling the topology of a system. In networks, fundamentally new topological effects can emerge, leading to a class of topologies termed 'complex networks'. The network concept successfully models the static topology of empirical systems, arbitrary models, and physical systems. Generally, networks serve as hosts for dynamics running on them in order to fulfill a function. The reciprocal relationship between a dynamical process on a network and the network's topology is the subject of this thesis, and it is studied in both directions: the network topology constrains or enhances the dynamics running on it, while the reverse influence is of equal importance. Networks are commonly the result of an evolutionary process, e.g. protein interaction networks in biology. Within such an evolution, the dynamics shapes the underlying network topology toward optimal performance of the function. Answering the question of what influence a particular topological property has on a dynamics requires accurate control over the topological properties in question. In this thesis the studied topological properties are the degree distribution, two-point correlations, and clustering, motivated by their ubiquity and importance in almost all empirical networks. In a first step, an analytical framework for measuring and controlling these quantities is developed, along with numerical algorithms for generating networks that realize them. Networks with the examined topological properties are then used to reveal their impact on two rather general dynamics on networks.
Finally, an evolution of networks is studied to shed light on the influence the dynamics has on the network topology.
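One standard numerical algorithm for generating a network with a prescribed degree sequence, the kind of topological control the thesis describes, is the configuration model. The sketch below is a simplified illustration, not the thesis's own (more elaborate) algorithms: self-loops and duplicate edges are simply discarded.

```python
import random

def configuration_model(degrees, seed=0):
    """Pair up edge 'stubs' to build a random graph with the given degree
    sequence. Self-loops and duplicate edges are discarded, so realized
    degrees may fall slightly short of the requested ones."""
    if sum(degrees) % 2:
        raise ValueError("degree sequence must have an even sum")
    rng = random.Random(seed)
    # One stub per unit of degree: node i appears degrees[i] times.
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    edges = set()
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:  # drop self-loops; the set() drops duplicates
            edges.add((min(u, v), max(u, v)))
    return edges

# Example: a 5-node network where node 0 has (up to) degree 3.
edges = configuration_model([3, 2, 2, 2, 1])
```

Rejecting rather than rewiring defective stub pairs biases the sampled ensemble slightly; controlled studies like those in the thesis typically use rewiring schemes that preserve the degree sequence exactly.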

2,720 citations


Journal ArticleDOI
Norman A. Grogin1, Dale D. Kocevski2, Sandra M. Faber2, Henry C. Ferguson1, Anton M. Koekemoer1, Adam G. Riess3, Viviana Acquaviva4, David M. Alexander5, Omar Almaini6, Matthew L. N. Ashby7, Marco Barden8, Eric F. Bell9, Frédéric Bournaud10, Thomas M. Brown1, Karina Caputi11, Stefano Casertano1, Paolo Cassata12, Marco Castellano, Peter Challis7, Ranga-Ram Chary13, Edmond Cheung2, Michele Cirasuolo14, Christopher J. Conselice6, Asantha Cooray15, Darren J. Croton16, Emanuele Daddi10, Tomas Dahlen1, Romeel Davé17, Duilia F. de Mello18, Duilia F. de Mello19, Avishai Dekel20, Mark Dickinson, Timothy Dolch3, Jennifer L. Donley1, James Dunlop11, Aaron A. Dutton21, David Elbaz10, Giovanni G. Fazio7, Alexei V. Filippenko22, Steven L. Finkelstein23, Adriano Fontana, Jonathan P. Gardner18, Peter M. Garnavich24, Eric Gawiser4, Mauro Giavalisco12, Andrea Grazian, Yicheng Guo12, Nimish P. Hathi25, Boris Häussler6, Philip F. Hopkins22, Jiasheng Huang26, Kuang-Han Huang1, Kuang-Han Huang3, Saurabh Jha4, Jeyhan S. Kartaltepe, Robert P. Kirshner7, David C. Koo2, Kamson Lai2, Kyoung-Soo Lee27, Weidong Li22, Jennifer M. Lotz1, Ray A. Lucas1, Piero Madau2, Patrick J. McCarthy25, Elizabeth J. McGrath2, Daniel H. McIntosh28, Ross J. McLure11, Bahram Mobasher29, Leonidas A. Moustakas13, Mark Mozena2, Kirpal Nandra30, Jeffrey A. Newman31, Sami Niemi1, Kai G. Noeske1, Casey Papovich23, Laura Pentericci, Alexandra Pope12, Joel R. Primack2, Abhijith Rajan1, Swara Ravindranath32, Naveen A. Reddy29, Alvio Renzini, Hans-Walter Rix30, Aday R. Robaina33, Steven A. Rodney3, David J. Rosario30, Piero Rosati34, S. Salimbeni12, Claudia Scarlata35, Brian Siana29, Luc Simard36, Joseph Smidt15, Rachel S. Somerville4, Hyron Spinrad22, Amber Straughn18, Louis-Gregory Strolger37, Olivia Telford31, Harry I. Teplitz13, Jonathan R. Trump2, Arjen van der Wel30, Carolin Villforth1, Risa H. Wechsler38, Benjamin J. Weiner17, Tommy Wiklind39, Vivienne Wild11, Grant W. Wilson12, Stijn Wuyts30, Hao Jing Yan40, Min S. 
Yun12 
TL;DR: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS), as discussed by the authors, is designed to document the first third of galactic evolution, from z ≈ 8 to 1.5, and to find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology.
Abstract: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) is designed to document the first third of galactic evolution, from z ≈ 8 to 1.5. It will image > 250,000 distant galaxies using three separate cameras on the Hubble Space Telescope, from the mid-UV to the near-IR, and will find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology. Five premier multi-wavelength sky regions are selected, each with extensive ancillary data. The use of five widely separated fields mitigates cosmic variance and yields statistically robust and complete samples of galaxies down to a stellar mass of 10^9 solar masses to z ≈ 2, reaching the knee of the UV luminosity function of galaxies to z ≈ 8. The survey covers approximately 800 square arcminutes and is divided into two parts. The CANDELS/Deep survey (5σ point-source limit H = 27.7 mag) covers approximately 125 square arcminutes within GOODS-N and GOODS-S. The CANDELS/Wide survey includes GOODS and three additional fields (EGS, COSMOS, and UDS) and covers the full area to a 5σ point-source limit of H ≈ 27.0 mag. Together with the Hubble Ultra Deep Fields, the strategy creates a three-tiered "wedding cake" approach that has proven efficient for extragalactic surveys. Data from the survey are non-proprietary and are useful for a wide variety of science investigations. In this paper, we describe the basic motivations for the survey, the CANDELS team science goals and the resulting observational requirements, the field selection and geometry, and the observing design.

2,088 citations


Journal ArticleDOI
Anton M. Koekemoer1, Sandra M. Faber2, Henry C. Ferguson1, Norman A. Grogin1, Dale D. Kocevski2, David C. Koo2, Kamson Lai2, Jennifer M. Lotz1, Ray A. Lucas1, Elizabeth J. McGrath2, Sara Ogaz1, Abhijith Rajan1, Adam G. Riess3, S. Rodney3, L. G. Strolger4, Stefano Casertano1, Marco Castellano, Tomas Dahlen1, Mark Dickinson, Timothy Dolch3, Adriano Fontana, Mauro Giavalisco5, Andrea Grazian, Yicheng Guo5, Nimish P. Hathi6, Kuang-Han Huang3, Kuang-Han Huang1, Arjen van der Wel7, Hao Jing Yan8, Viviana Acquaviva9, David M. Alexander10, Omar Almaini11, Matthew L. N. Ashby12, Marco Barden13, Eric F. Bell14, Frédéric Bournaud15, Thomas M. Brown1, Karina Caputi16, Paolo Cassata5, Peter Challis17, Ranga-Ram Chary18, Edmond Cheung2, Michele Cirasuolo16, Christopher J. Conselice11, Asantha Cooray19, Darren J. Croton20, Emanuele Daddi15, Romeel Davé21, Duilia F. de Mello22, Loic de Ravel16, Avishai Dekel23, Jennifer L. Donley1, James Dunlop16, Aaron A. Dutton24, David Elbaz25, Giovanni Fazio12, Alexei V. Filippenko26, Steven L. Finkelstein27, Chris Frazer19, Jonathan P. Gardner22, Peter M. Garnavich28, Eric Gawiser9, Ruth Gruetzbauch11, Will G. Hartley11, B. Haussler11, Jessica Herrington14, Philip F. Hopkins26, J.-S. Huang29, Saurabh Jha9, Andrew Johnson2, Jeyhan S. Kartaltepe3, Ali Ahmad Khostovan19, Robert P. Kirshner12, Caterina Lani11, Kyoung-Soo Lee30, Weidong Li26, Piero Madau2, Patrick J. McCarthy6, Daniel H. McIntosh31, Ross J. McLure, Conor McPartland2, Bahram Mobasher32, Heidi Moreira9, Alice Mortlock11, Leonidas A. Moustakas18, Mark Mozena2, Kirpal Nandra33, Jeffrey A. Newman34, Jennifer L. Nielsen31, Sami Niemi1, Kai G. Noeske1, Casey Papovich27, Laura Pentericci, Alexandra Pope, Joel R. Primack2, Swara Ravindranath35, Naveen A. Reddy, Alvio Renzini, Hans Walter Rix7, Aday R. Robaina, David J. Rosario2, Piero Rosati7, S. Salimbeni5, Claudia Scarlata18, Brian Siana18, Luc Simard36, Joseph Smidt19, D. Snyder2, Rachel S. Somerville1, Hyron Spinrad26, Amber N. 
Straughn22, Olivia Telford34, Harry I. Teplitz18, Jonathan R. Trump2, Carlos J. Vargas9, Carolin Villforth1, C. Wagner31, P. Wandro2, Risa H. Wechsler37, Benjamin J. Weiner21, Tommy Wiklind1, Vivienne Wild, Grant W. Wilson5, Stijn Wuyts12, Min S. Yun5 
TL;DR: In this paper, the authors describe the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS).
Abstract: This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ≈ 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ~125 arcmin² within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ~800 arcmin² across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey is presented in a companion paper.

2,011 citations


Journal ArticleDOI
TL;DR: This Review highlights mechanisms that have evolved in microorganisms to allow them to successfully enter and exit a dormant state, and discusses the implications of microbial seed banks for evolutionary dynamics, population persistence, maintenance of biodiversity, and the stability of ecosystem processes.
Abstract: Dormancy is a bet-hedging strategy used by a wide range of taxa, including microorganisms. It refers to an organism's ability to enter a reversible state of low metabolic activity when faced with unfavourable environmental conditions. Dormant microorganisms generate a seed bank, which comprises individuals that are capable of being resuscitated following environmental change. In this Review, we highlight mechanisms that have evolved in microorganisms to allow them to successfully enter and exit a dormant state, and discuss the implications of microbial seed banks for evolutionary dynamics, population persistence, maintenance of biodiversity, and the stability of ecosystem processes.

1,399 citations


Journal ArticleDOI
TL;DR: A new freshwater lake phylogeny constructed from all published 16S rRNA gene sequences from lake epilimnia is presented and a unifying vocabulary to discuss freshwater taxa is proposed, providing a coherent framework for future studies.
Abstract: Freshwater bacteria are at the hub of biogeochemical cycles and control water quality in lakes. Despite this, little is known about the identity and ecology of functionally significant lake bacteria. Molecular studies have identified many abundant lake bacteria, but there is a large variation in the taxonomic or phylogenetic breadths among the methods used for this exploration. Because of this, an inconsistent and overlapping naming structure has developed for freshwater bacteria, creating a significant obstacle to identifying coherent ecological traits among these groups. A discourse that unites the field is sorely needed. Here we present a new freshwater lake phylogeny constructed from all published 16S rRNA gene sequences from lake epilimnia and propose a unifying vocabulary to discuss freshwater taxa. With this new vocabulary in place, we review the current information on the ecology, ecophysiology, and distribution of lake bacteria and highlight newly identified phylotypes. In the second part of our review, we conduct meta-analyses on the compiled data, identifying distribution patterns for bacterial phylotypes among biomes and across environmental gradients in lakes. We conclude by emphasizing the role that this review can play in providing a coherent framework for future studies.

1,230 citations


Posted Content
01 Jan 2011
TL;DR: This article explains what adjusted predictions and marginal effects are, and how they can contribute to the interpretation of results, and shows how the marginsplot command provides a graphical and often much easier means for presenting and understanding the results from margins.
Abstract: As Long & Freese show, it can often be helpful to compute predicted/expected values for hypothetical or prototypical cases. Stata 11 introduced new tools for making such calculations – factor variables and the margins command. These can do many of the things that were previously done by Stata’s own adjust and mfx commands, as well as Long & Freese’s spost9 commands like prvalue. Unfortunately, the complexity of the margins syntax, the daunting 50-page reference manual entry that describes it, and a lack of understanding about what margins offers over older commands may have dissuaded researchers from using it. This paper therefore shows how margins can easily replicate analyses done by older commands. It demonstrates how margins provides a superior means for dealing with interdependent variables (e.g. X and X^2; X1, X2, and X1 * X2; multiple dummies created from a single categorical variable), and is also superior for data that are svyset. The paper explains how the new asobserved option works and the substantive reasons for preferring it over the atmeans approach used by older commands. The paper primarily focuses on the computation of adjusted predictions but also shows how margins has the same advantages for computing marginal effects.
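The atmeans-versus-asobserved distinction the paper discusses is not Stata-specific. A minimal numpy sketch (hypothetical simulated data; logit fit by Newton-Raphson) contrasts the marginal effect evaluated at the covariate mean with the average marginal effect over the observed data:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression via Newton-Raphson (X includes an intercept column)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 2.0, n)  # wide spread makes the two conventions diverge
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))).astype(float)
b0, b1 = fit_logit(np.column_stack([np.ones(n), x]), y)

# Marginal effect at the mean (the older 'atmeans' convention) ...
p_mean = 1.0 / (1.0 + np.exp(-(b0 + b1 * x.mean())))
mem = p_mean * (1.0 - p_mean) * b1
# ... versus the average marginal effect over the observed data ('asobserved').
p_i = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
ame = np.mean(p_i * (1.0 - p_i)) * b1
```

In a nonlinear model the two quantities differ because the logistic density term p(1 − p) is averaged over different points; here the atmeans value overstates the typical effect, which is one substantive reason the paper gives for preferring the asobserved calculation.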

1,228 citations



Journal ArticleDOI
John K. Colbourne1, Michael E. Pfrender2, Michael E. Pfrender3, Donald L. Gilbert1, W. Kelley Thomas4, Abraham E. Tucker1, Abraham E. Tucker4, Todd H. Oakley5, Shin-ichi Tokishita6, Andrea Aerts7, Georg J. Arnold8, Malay Kumar Basu9, Malay Kumar Basu10, Darren J Bauer4, Carla E. Cáceres11, Liran Carmel10, Liran Carmel12, Claudio Casola1, Jeong Hyeon Choi1, John C. Detter7, Qunfeng Dong1, Qunfeng Dong13, Serge Dusheyko7, Brian D. Eads1, Thomas Fröhlich8, Kerry Geiler-Samerotte5, Kerry Geiler-Samerotte14, Daniel Gerlach15, Daniel Gerlach16, Phil Hatcher4, Sanjuro Jogdeo17, Sanjuro Jogdeo4, Jeroen Krijgsveld18, Evgenia V. Kriventseva16, Dietmar Kültz19, Christian Laforsch8, Erika Lindquist7, Jacqueline Lopez1, J. Robert Manak20, J. Robert Manak21, Jean Muller22, Jasmyn Pangilinan7, Rupali P Patwardhan1, Rupali P Patwardhan23, Samuel Pitluck7, Ellen J. Pritham24, Andreas Rechtsteiner1, Andreas Rechtsteiner25, Mina Rho1, Igor B. Rogozin10, Onur Sakarya26, Onur Sakarya5, Asaf Salamov7, Sarah Schaack24, Sarah Schaack1, Harris Shapiro7, Yasuhiro Shiga6, Courtney Skalitzky20, Zachary Smith1, Alexander Souvorov10, Way Sung4, Zuojian Tang1, Zuojian Tang27, Dai Tsuchiya1, Hank Tu7, Hank Tu26, Harmjan R. Vos18, Mei Wang7, Yuri I. Wolf10, Hideo Yamagata6, Takuji Yamada, Yuzhen Ye1, Joseph R. Shaw1, Justen Andrews1, Teresa J. Crease28, Haixu Tang1, Susan Lucas7, Hugh M. Robertson11, Peer Bork, Eugene V. Koonin10, Evgeny M. Zdobnov16, Evgeny M. Zdobnov29, Igor V. Grigoriev7, Michael Lynch1, Jeffrey L. Boore7, Jeffrey L. Boore30 
04 Feb 2011-Science
TL;DR: The Daphnia genome reveals a multitude of genes and shows adaptation through gene family expansions, and the coexpansion of gene families interacting within metabolic pathways suggests that the maintenance of duplicated genes is not random.
Abstract: We describe the draft genome of the microcrustacean Daphnia pulex, which is only 200 megabases and contains at least 30,907 genes. The high gene count is a consequence of an elevated rate of gene duplication resulting in tandem gene clusters. More than a third of Daphnia's genes have no detectable homologs in any other available proteome, and the most amplified gene families are specific to the Daphnia lineage. The coexpansion of gene families interacting within metabolic pathways suggests that the maintenance of duplicated genes is not random, and the analysis of gene expression under different environmental conditions reveals that numerous paralogs acquire divergent expression patterns soon after duplication. Daphnia-specific genes, including many additional loci within sequenced regions that are otherwise devoid of annotations, are the most responsive genes to ecological challenges.

1,204 citations


Journal ArticleDOI
TL;DR: A distributed event-triggering scheme is proposed in which a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold; each subsystem is able to make broadcast decisions using its locally sampled data.
Abstract: This paper examines event-triggered data transmission in distributed networked control systems with packet loss and transmission delays. We propose a distributed event-triggering scheme, where a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold. In this scheme, a subsystem is able to make broadcast decisions using its locally sampled data. It can also locally predict the maximal allowable number of successive data dropouts (MANSD) and the state-based deadlines for transmission delays. Moreover, the designer's selection of the local event for a subsystem only requires information on that individual subsystem. Our analysis applies to both linear and nonlinear subsystems. Designing local events for a nonlinear subsystem requires us to find a controller that ensures that subsystem to be input-to-state stable. For linear subsystems, the design problem becomes a linear matrix inequality feasibility problem. With the assumption that the number of each subsystem's successive data dropouts is less than its MANSD, we show that if the transmission delays are zero, the resulting system is finite-gain Lp stable. If the delays are bounded by given deadlines, the system is asymptotically stable. We also show that those state-based deadlines for transmission delays are always greater than a positive constant.
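The core triggering rule, broadcast only when the local state error since the last broadcast exceeds a threshold, can be sketched for a single scalar subsystem. This is an illustrative toy (all values hypothetical), not the paper's full setting with packet loss, delays, and LMI-based event design:

```python
def simulate_event_triggered(threshold=0.05, steps=200, dt=0.05):
    """Scalar subsystem x' = -x + u, where u is computed from the last
    *broadcast* state rather than the true state. A broadcast occurs only
    when the local error |x - x_broadcast| exceeds the threshold."""
    x = 1.0
    x_broadcast = x  # value the neighbors/controller currently hold
    broadcasts = 0
    for _ in range(steps):
        if abs(x - x_broadcast) > threshold:  # event: refresh broadcast value
            x_broadcast = x
            broadcasts += 1
        u = -x_broadcast        # feedback uses the sampled/broadcast state
        x += dt * (-x + u)      # Euler step of x' = -x + u
    return x, broadcasts

x_final, n_broadcasts = simulate_event_triggered()
```

The state still converges near the origin, but communication happens only at the (far fewer) event instants; raising the threshold trades control accuracy for fewer broadcasts, which is the design tension the paper's analysis quantifies.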

1,134 citations


Journal ArticleDOI
TL;DR: Quantitative comparisons with traditional fisheries surveillance tools illustrate the greater sensitivity of eDNA and reveal that the risk of invasion to the Laurentian Great Lakes is imminent.
Abstract: Effective management of rare species, including endangered native species and recently introduced nonindigenous species, requires the detection of populations at low density. For endangered species, detecting the localized distribution makes it possible to identify and protect critical habitat to enhance survival or reproductive success. Similarly, early detection of an incipient invasion by a harmful species increases the feasibility of rapid responses to eradicate the species or contain its spread. Here we demonstrate the efficacy of environmental DNA (eDNA) as a detection tool in freshwater environments. Specifically, we delimit the invasion fronts of two species of Asian carps in Chicago, Illinois, USA area canals and waterways. Quantitative comparisons with traditional fisheries surveillance tools illustrate the greater sensitivity of eDNA and reveal that the risk of invasion to the Laurentian Great Lakes is imminent.

965 citations


Journal ArticleDOI
TL;DR: To explore how the problem of antibiotic resistance might best be addressed, a group of 30 scientists from academia and industry gathered at the Banbury Conference Centre in Cold Spring Harbor, New York, USA, from 16 to 18 May 2011.
Abstract: The development and spread of antibiotic resistance in bacteria is a universal threat to both humans and animals that is generally not preventable but can nevertheless be controlled, and it must be tackled in the most effective ways possible. To explore how the problem of antibiotic resistance might best be addressed, a group of 30 scientists from academia and industry gathered at the Banbury Conference Centre in Cold Spring Harbor, New York, USA, from 16 to 18 May 2011. From these discussions there emerged a priority list of steps that need to be taken to resolve this global crisis.

Journal ArticleDOI
TL;DR: Cross-sectional analyses can imply the existence of a substantial indirect effect even when the true longitudinal indirect effect is zero, and a variable that is found to be a strong mediator in a cross-sectional analysis may not be a mediator at all in a longitudinal analysis.
Abstract: Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can also occur with partial mediation. In particular, cross-sectional analyses can imply the existence of a substantial indirect effect even when the true longitudinal indirect effect is zero. Thus, a variable that is found to be a strong mediator in a cross-sectional analysis may not be a mediator at all in a longitudinal analysis. In addition, we show that very different combinations of longitudinal parameter values can lead to essentially identical cross-sectional correlations, raising serious questions about the interpretability of cross-sectional mediation data. More generally, researchers are encouraged to consider a wide variety of possible mediation models beyond simple cross-sectional models, including but not restricted to autoregressive models of change.
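The cross-sectional artifact can be reproduced in a small simulation (hypothetical parameter values, not taken from the paper). Below, M never causes Y, so the true longitudinal indirect effect of X on Y through M is zero; yet the cross-sectional product-of-coefficients estimate a·b is clearly nonzero, because Y causes M and both variables are autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)
n, burn = 4000, 60

x = rng.normal(size=n)  # stable person-level trait
m = np.zeros(n)
y = np.zeros(n)
for _ in range(burn):   # run the true longitudinal system to steady state
    y_prev = y.copy()
    y = 0.5 * y + 0.4 * x + rng.normal(size=n)       # X -> Y, autocorrelated
    m = 0.5 * m + 0.4 * y_prev + rng.normal(size=n)  # Y -> M; M never causes Y

def ols(Z, target):
    """OLS coefficients for target ~ Z."""
    return np.linalg.lstsq(Z, target, rcond=None)[0]

one = np.ones(n)
a = ols(np.column_stack([one, x]), m)[1]     # M ~ X       (the 'a' path)
b = ols(np.column_stack([one, x, m]), y)[2]  # Y ~ X + M   (the 'b' path)
indirect_cross_sectional = a * b             # substantial, despite a zero truth
```

The residual of M (beyond X) carries information about Y's own past, so the partial coefficient b is positive even though M has no causal effect on Y, which is exactly the kind of spurious cross-sectional mediation the authors warn about.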

Journal ArticleDOI
S. Chatrchyan, Vardan Khachatryan, Albert M. Sirunyan, A. Tumasyan  +2268 moreInstitutions (158)
TL;DR: In this article, the transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution.
Abstract: Measurements of the jet energy calibration and transverse momentum resolution in CMS are presented, performed with a data sample collected in proton-proton collisions at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 36 pb−1. The transverse momentum balance in dijet and γ/Z+jets events is used to measure the jet energy response in the CMS detector, as well as the transverse momentum resolution. The results are presented for three different methods to reconstruct jets: a calorimeter-based approach, the "Jet-Plus-Track" approach, which improves the measurement of calorimeter jets by exploiting the associated tracks, and the "Particle Flow" approach, which attempts to reconstruct each particle in the event individually, prior to jet clustering, based on information from all relevant subdetectors.

Posted Content
TL;DR: In this paper, the authors developed a bid-ask spread estimator from daily high and low prices, which can be applied in a variety of research areas, and generally outperforms other low-frequency estimators.
Abstract: We develop a bid-ask spread estimator from daily high and low prices. Daily high (low) prices are almost always buy (sell) trades. Hence, the high-low ratio reflects both the stock’s variance and its bid-ask spread. While the variance component of the high-low ratio is proportional to the return interval, the spread component is not. This allows us to derive a spread estimator as a function of high-low ratios over one-day and two-day intervals. The estimator is easy to calculate, can be applied in a variety of research areas, and generally outperforms other low-frequency estimators.
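The closed-form version of this high-low estimator (published by Corwin and Schultz) can be sketched as follows. The formulas here are reconstructed from the published paper rather than quoted from it, so treat the exact expressions as an assumption:

```python
import math

def high_low_spread(h1, l1, h2, l2):
    """Spread estimate from two consecutive days' high (h) and low (l)
    prices. beta uses the two single-day ranges; gamma uses the range of
    the combined two-day interval. The variance part of the range scales
    with the interval length while the spread part does not, which is
    what lets the two components be separated."""
    beta = math.log(h1 / l1) ** 2 + math.log(h2 / l2) ** 2
    gamma = math.log(max(h1, h2) / min(l1, l2)) ** 2
    k = 3.0 - 2.0 * math.sqrt(2.0)
    alpha = (math.sqrt(2.0 * beta) - math.sqrt(beta)) / k - math.sqrt(gamma / k)
    alpha = max(alpha, 0.0)  # negative estimates are commonly floored at zero
    return 2.0 * (math.exp(alpha) - 1.0) / (1.0 + math.exp(alpha))

# Hypothetical prices: daily ranges that are wide relative to the two-day
# range imply part of the range is bid-ask bounce rather than variance.
s = high_low_spread(101.0, 99.0, 101.5, 99.5)
```

The return value is the proportional spread; in practice the estimator is averaged over many overlapping two-day windows to reduce noise.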

Journal ArticleDOI
TL;DR: The Malaria Eradication Research Agenda initiative and the set of articles published in this PLoS Medicine Supplement that distill the research questions key to malaria eradication are introduced.
Abstract: The interruption of malaria transmission worldwide is one of the greatest challenges for international health and development communities. The current expert view suggests that, by aggressively scaling up control with currently available tools and strategies, much greater gains could be achieved against malaria, including elimination from a number of countries and regions; however, even with maximal effort we will fall short of global eradication. The Malaria Eradication Research Agenda (malERA) complements the current research agenda—primarily directed towards reducing morbidity and mortality—with one that aims to identify key knowledge gaps and define the strategies and tools that will result in reducing the basic reproduction rate to less than 1, with the ultimate aim of eradication of the parasite from the human population. Sustained commitment from local communities, civil society, policy leaders, and the scientific community, together with a massive effort to build a strong base of researchers from the endemic areas will be critical factors in the success of this new agenda.

Journal ArticleDOI
TL;DR: In this article, the authors studied the effect of collision centrality on the dijet transverse momentum imbalance in PbPb collisions at the LHC, with a data sample of 6.7 inverse microbarns.
Abstract: Jet production in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV was studied with the CMS detector at the LHC, using a data sample corresponding to an integrated luminosity of 6.7 inverse microbarns. Jets are reconstructed using the energy deposited in the CMS calorimeters and studied as a function of collision centrality. With increasing collision centrality, a striking imbalance in dijet transverse momentum is observed, consistent with jet quenching. The observed effect extends from the lower cut-off used in this study (jet transverse momentum = 120 GeV/c) up to the statistical limit of the available data sample (jet transverse momentum approximately 210 GeV/c). Correlations of charged particle tracks with jets indicate that the momentum imbalance is accompanied by a softening of the fragmentation pattern of the second most energetic, away-side jet. The dijet momentum balance is recovered when integrating low transverse momentum particles distributed over a wide angular range relative to the direction of the away-side jet.
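The dijet momentum imbalance in such analyses is commonly quantified by the asymmetry A_J between the leading and subleading jet transverse momenta; this standard observable is stated here as background, not quoted from the paper:

```python
def dijet_asymmetry(pt_leading, pt_subleading):
    """A_J = (pT1 - pT2) / (pT1 + pT2): near 0 for balanced dijets,
    growing toward 1 as the away-side jet loses energy to the medium."""
    return (pt_leading - pt_subleading) / (pt_leading + pt_subleading)

# Hypothetical jet pairs passing a 120 GeV/c leading-jet cut:
balanced = dijet_asymmetry(150.0, 140.0)  # small A_J, pp-like event
quenched = dijet_asymmetry(150.0, 60.0)   # large A_J, strongly quenched event
```

A shift of the A_J distribution toward larger values in central collisions, relative to peripheral collisions and simulation, is the signature of jet quenching described in the abstract.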

Journal ArticleDOI
TL;DR: Graphene-based assemblies are gaining attention as a viable alternative to boost the efficiency of various catalytic and storage reactions in energy conversion applications as discussed by the authors, and the use of reduced graphene oxide has already proved useful in collecting and transporting charge in photoelectrochemical solar cells, photocatalysis, and electrocatalysis.
Abstract: Graphene-based assemblies are gaining attention as a viable alternative to boost the efficiency of various catalytic and storage reactions in energy conversion applications. The use of reduced graphene oxide has already proved useful in collecting and transporting charge in photoelectrochemical solar cells, photocatalysis, and electrocatalysis. In many of these applications, the flat carbon serves as a scaffold to anchor metal and semiconductor nanoparticles and assists in promoting selectivity and efficiency of the catalytic process. Covalent and noncovalent interaction with organic molecules is another area that is expected to provide new frontiers in graphene research. Recent advances in manipulating graphene-based two-dimensional carbon architecture for energy conversion are described.

Journal ArticleDOI
TL;DR: Interestingly, the films which exhibited the fastest electron transfer rates were not the same as those which showed the highest photocurrent, suggesting that, in addition to electron transfer at the quantum dot-metal oxide interface, other electron transfer reactions play key roles in the determination of overall device efficiency.
Abstract: Quantum dot-metal oxide junctions are an integral part of next-generation solar cells, light emitting diodes, and nanostructured electronic arrays. Here we present a comprehensive examination of electron transfer at these junctions, using a series of CdSe quantum dot donors (sizes 2.8, 3.3, 4.0, and 4.2 nm in diameter) and metal oxide nanoparticle acceptors (SnO2, TiO2, and ZnO). Apparent electron transfer rate constants showed strong dependence on change in system free energy, exhibiting a sharp rise at small driving forces followed by a modest rise further away from the characteristic reorganization energy. The observed trend mimics the predicted behavior of electron transfer from a single quantum state to a continuum of electron accepting states, such as those present in the conduction band of a metal oxide nanoparticle. In contrast with dye-sensitized metal oxide electron transfer studies, our systems did not exhibit unthermalized hot-electron injection due to relatively large ratios of electron cooling rate to electron transfer rate. To investigate the implications of these findings in photovoltaic cells, quantum dot-metal oxide working electrodes were constructed in an identical fashion to the films used for the electron transfer portion of the study. Interestingly, the films which exhibited the fastest electron transfer rates (SnO2) were not the same as those which showed the highest photocurrent (TiO2). These findings suggest that, in addition to electron transfer at the quantum dot-metal oxide interface, other electron transfer reactions play key roles in the determination of overall device efficiency.
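The behavior described, a sharp rise at small driving force that flattens beyond the reorganization energy, matches a Marcus-type rate summed over a continuum of acceptor states. The numerical sketch below is a generic illustration of that functional form; all parameter values are hypothetical, not taken from the paper:

```python
import math

def transfer_rate(driving_force, lam=0.3, kt=0.0257, band_width=1.0, n=2000):
    """Relative electron-transfer rate from a single donor level into a flat
    continuum of acceptor states extending band_width (eV) below the donor's
    resonant channel. Each channel carries a classical Marcus weight; the
    prefactor is dropped, so units are arbitrary. Energies in eV."""
    dx = band_width / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx            # acceptor state x eV above the band edge
        dg = driving_force - x        # per-channel driving force
        total += math.exp(-((lam - dg) ** 2) / (4.0 * lam * kt)) * dx
    return total

# Below the reorganization energy (lam = 0.3 eV) the rate climbs steeply;
# above it, additional driving force only opens already-weighted channels.
rates = [transfer_rate(f) for f in (0.1, 0.3, 0.6)]
```

Unlike the single-acceptor Marcus expression, this sum over band states never turns over into an inverted region, which is the qualitative trend the study reports for metal oxide acceptors.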

Journal ArticleDOI
TL;DR: The available data on nuclear fusion cross sections important to energy generation in the Sun and other hydrogen-burning stars and to solar neutrino production are summarized and critically evaluated in this article.
Abstract: The available data on nuclear fusion cross sections important to energy generation in the Sun and other hydrogen-burning stars and to solar neutrino production are summarized and critically evaluated. Recommended values and uncertainties are provided for key cross sections, and a recommended spectrum is given for ⁸B solar neutrinos. Opportunities for further increasing the precision of key rates are also discussed, including new facilities, new experimental techniques, and improvements in theory. This review, which summarizes the conclusions of a workshop held at the Institute for Nuclear Theory, Seattle, in January 2009, is intended as a 10-year update and supplement to the 1998 review, Rev. Mod. Phys. 70, 1265.

Journal ArticleDOI
TL;DR: In this article, the authors acknowledge the partial support of the National Science Foundation Graduate Fellowship and the National Defense Science and Engineering Graduate Fellowship for a research grant from King Abdullah University of Science and Technology (KAUST) and Stanford University.
Abstract: The first author acknowledges the partial support by a National Science Foundation Graduate Fellowship and the partial support by a National Defense Science and Engineering Graduate Fellowship. The second and third authors acknowledge the partial support by the Motor Sports Division of the Toyota Motor Corporation under Agreement Number 48737, and the partial support by a research grant from the Academic Excellence Alliance program between King Abdullah University of Science and Technology (KAUST) and Stanford University. All authors also acknowledge the constructive comments received during the review process.

Journal ArticleDOI
TL;DR: It is found that stream denitrification produces N2O at rates that increase with stream water nitrate (NO3−) concentrations, but that <1% of denitrified N is converted to N2O, and it is suggested that increased stream NO3− loading stimulates denitrification and concomitant N2O production, but does not increase the N2O yield.
Abstract: Nitrous oxide (N2O) is a potent greenhouse gas that contributes to climate change and stratospheric ozone destruction. Anthropogenic nitrogen (N) loading to river networks is a potentially important source of N2O via microbial denitrification that converts N to N2O and dinitrogen (N2). The fraction of denitrified N that escapes as N2O rather than N2 (i.e., the N2O yield) is an important determinant of how much N2O is produced by river networks, but little is known about the N2O yield in flowing waters. Here, we present the results of whole-stream 15N-tracer additions conducted in 72 headwater streams draining multiple land-use types across the United States. We found that stream denitrification produces N2O at rates that increase with stream water nitrate (NO3−) concentrations, but that <1% of denitrified N is converted to N2O. Unlike some previous studies, we found no relationship between the N2O yield and stream water NO3−. We suggest that increased stream NO3− loading stimulates denitrification and concomitant N2O production, but does not increase the N2O yield. In our study, most streams were sources of N2O to the atmosphere and the highest emission rates were observed in streams draining urban basins. Using a global river network model, we estimate that microbial N transformations (e.g., denitrification and nitrification) convert at least 0.68 Tg·y−1 of anthropogenic N inputs to N2O in river networks, equivalent to 10% of the global anthropogenic N2O emission rate. This estimate of stream and river N2O emissions is three times greater than estimated by the Intergovernmental Panel on Climate Change.
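The figures quoted above imply two numbers worth making explicit. A back-of-envelope check (the inputs are taken from the abstract; the outputs are inferred, not stated in the text):

```python
# Inputs quoted in the abstract:
river_n2o = 0.68          # Tg N per year converted to N2O, lower bound
share_of_global = 0.10    # fraction of global anthropogenic N2O emissions
ipcc_factor = 3           # this estimate vs. the IPCC's river estimate

# Inferred quantities:
implied_global = river_n2o / share_of_global   # implied global anthropogenic rate
implied_ipcc = river_n2o / ipcc_factor         # implied IPCC river estimate
print(implied_global, round(implied_ipcc, 2))  # ~6.8 and ~0.23 Tg per year
```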

Journal ArticleDOI
TL;DR: Kim et al. prepare a non-nucleophilic electrolyte from hexamethyldisilazide magnesium chloride and aluminium trichloride and show its compatibility with a sulphur cathode.
Abstract: Magnesium is an ideal rechargeable battery anode material, but coupling it with a low-cost sulphur cathode requires a non-nucleophilic electrolyte. Kim et al. prepare a non-nucleophilic electrolyte from hexamethyldisilazide magnesium chloride and aluminium trichloride, and show its compatibility with a sulphur cathode.

Journal ArticleDOI
TL;DR: In this article, the authors argue for an approach to conceptualize and measure regimes such that meaningful comparisons can be made through time and across countries, and review some of the payoffs such an approach might bring to the study of democracy.
Abstract: In the wake of the Cold War, democracy has gained the status of a mantra. Yet there is no consensus about how to conceptualize and measure regimes such that meaningful comparisons can be made through time and across countries. In this prescriptive article, we argue for a new approach to conceptualization and measurement. We first review some of the weaknesses among traditional approaches. We then lay out our approach, which may be characterized as historical, multidimensional, disaggregated, and transparent. We end by reviewing some of the payoffs such an approach might bring to the study of democracy.

Journal ArticleDOI
TL;DR: The resulting integrated SWAN + ADCIRC system is highly scalable and allows for localized increases in resolution without the complexity or cost of nested meshes or global interpolation between heterogeneous meshes.

Journal ArticleDOI
03 Jan 2011, Small
TL;DR: A wide range of promising laboratory and consumer biotechnological applications from microscale genetic and proteomic analysis kits, cell culture and manipulation platforms, biosensors, and pathogen detection systems to point-of-care diagnostic devices, high-throughput combinatorial drug screening platforms, schemes for targeted drug delivery and advanced therapeutics, and novel biomaterials synthesis for tissue engineering are reviewed.
Abstract: Harnessing the ability to precisely and reproducibly actuate fluids and manipulate bioparticles such as DNA, cells, and molecules at the microscale, microfluidics is a powerful tool that is currently revolutionizing chemical and biological analysis. By replicating laboratory bench-top technology on a miniature chip-scale device, it allows assays to be carried out at a fraction of the time and cost while affording portability and field-use capability. Emerging from a decade of research and development in microfluidic technology is a wide range of promising laboratory and consumer biotechnological applications, from microscale genetic and proteomic analysis kits, cell culture and manipulation platforms, biosensors, and pathogen detection systems to point-of-care diagnostic devices, high-throughput combinatorial drug screening platforms, schemes for targeted drug delivery and advanced therapeutics, and novel biomaterials synthesis for tissue engineering. The developments associated with these technological advances, along with their respective applications to date, are reviewed from a broad perspective, and possible future directions that could arise from the current state of the art are discussed.

Journal ArticleDOI
TL;DR: A reduced graphene oxide (RGO)-Cu2S composite shuttles electrons through the RGO sheets and the polysulfide-active Cu2S more efficiently than a Pt electrode, improving the fill factor by ∼75%.
Abstract: Polysulfide electrolyte, employed as a redox electrolyte in quantum dot sensitized solar cells, provides stability to the cadmium chalcogenide photoanode but introduces significant redox limitations at the counter electrode through undesirable surface reactions. By designing a reduced graphene oxide (RGO)-Cu2S composite, we have now succeeded in shuttling electrons through the RGO sheets and polysulfide-active Cu2S more efficiently than a Pt electrode, improving the fill factor by ∼75%. Characterization and optimization of the composite at different compositions indicate that a Cu/RGO mass ratio of 4 provides the best electrochemical performance. A sandwich CdSe quantum dot sensitized solar cell constructed using the optimized RGO-Cu2S composite counter electrode exhibited an unsurpassed power conversion efficiency of 4.4%.

Journal ArticleDOI
TL;DR: The goal is to provide a useful assessment of the obstacles associated with integrating DNA-based methods into aquatic invasive species management, and to offer recommendations for future efforts aimed at overcoming those obstacles.

Journal ArticleDOI
TL;DR: In this paper, a structural VAR approach is proposed to identify news shocks about future technology, and the news shock is identified as the shock orthogonal to the innovation in current utilization-adjusted TFP that best explains variation in future TFP.
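The identification described in this TL;DR can be sketched concretely: with TFP ordered first in a Cholesky factorization, the news shock is the linear combination of the remaining orthogonalized innovations that maximizes the forecast-error variance of TFP over a horizon, which reduces to a principal-eigenvector problem. A toy VAR(1) illustration (the matrices below are made up for illustration, not estimated from data):

```python
import numpy as np

# Illustrative 3-variable VAR(1) with TFP ordered first.
A = np.array([[0.90, 0.10, 0.05],
              [0.00, 0.50, 0.20],
              [0.00, 0.30, 0.60]])
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])
P = np.linalg.cholesky(Sigma)    # impact matrix of orthogonalized shocks

H = 40                           # truncation horizon for the variance sum
C = [np.linalg.matrix_power(A, h) for h in range(H + 1)]  # MA coefficients

# FEV of TFP (variable 0) attributable to a unit shock gamma over
# horizons 0..H is gamma' V gamma, with V accumulated below.
e1 = np.array([1.0, 0.0, 0.0])
V = np.zeros((3, 3))
for Ch in C:
    v = e1 @ Ch @ P
    V += np.outer(v, v)

# News shock: orthogonal to the TFP innovation (the first Cholesky
# shock), i.e. gamma[0] = 0; maximize gamma' V gamma over the rest.
w, Q = np.linalg.eigh(V[1:, 1:])
gamma = np.concatenate(([0.0], Q[:, -1]))   # principal eigenvector

impact = P @ gamma   # impact responses; TFP entry is zero by construction
print(impact)
```

The zero impact response of TFP enforces the orthogonality restriction, while future TFP still loads on the news shock through the off-diagonal VAR dynamics.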

Journal ArticleDOI
TL;DR: Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted and ethical imperatives for psychologists' involvement are considered.
Abstract: Global climate change poses one of the greatest challenges facing humanity in this century. This article, which introduces the American Psychologist special issue on global climate change, follows from the report of the American Psychological Association Task Force on the Interface Between Psychology and Global Climate Change. In this article, we place psychological dimensions of climate change within the broader context of human dimensions of climate change by addressing (a) human causes of, consequences of, and responses (adaptation and mitigation) to climate change and (b) the links between these aspects of climate change and cognitive, affective, motivational, interpersonal, and organizational responses and processes. Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted. We also consider ethical imperatives for psychologists' involvement and provide suggestions for ways to increase psychologists' contribution to the science of climate change.

Journal ArticleDOI
TL;DR: The resulting measure, the Multidimensional Experiential Avoidance Questionnaire, or MEAQ, exhibited good internal consistency, was substantially associated with other measures of avoidance, and demonstrated greater discrimination vis-à-vis neuroticism relative to preexisting measures of EA.
Abstract: Experiential avoidance (EA) has been conceptualized as the tendency to avoid negative internal experiences and is an important concept in numerous conceptualizations of psychopathology as well as theories of psychotherapy. Existing measures of EA have either been narrowly defined or demonstrated unsatisfactory internal consistency and/or evidence of poor discriminant validity vis-à-vis neuroticism. To help address these problems, we developed a reliable self-report questionnaire assessing a broad range of EA content that was distinguishable from higher order personality traits. An initial pool of 170 items was administered to a sample of undergraduates (N = 312) to help evaluate individual items and establish a structure via exploratory factor analyses. A revised set of items was then administered to another sample of undergraduates (N = 314) and a sample of psychiatric outpatients (N = 201). A second round of item evaluation was performed, resulting in a final 62-item measure consisting of 6 subscales. Cross-validation data were gathered in 3 new, independent samples (students, N = 363; patients, N = 265; community adults, N = 215). The resulting measure (the Multidimensional Experiential Avoidance Questionnaire, or MEAQ) exhibited good internal consistency, was substantially associated with other measures of avoidance, and demonstrated greater discrimination vis-à-vis neuroticism relative to preexisting measures of EA. Furthermore, the MEAQ was broadly associated with psychopathology and quality of life, even after controlling for the effects of neuroticism.
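Internal consistency of the kind reported for the MEAQ is conventionally summarized with Cronbach's alpha. A minimal sketch of that computation on synthetic item scores (the data below are simulated for illustration, not the MEAQ's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulate 8 items driven by one latent trait plus noise:
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
scores = latent + 0.5 * rng.normal(size=(300, 8))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # high alpha, as the items share one latent factor
```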