
Showing papers by "University of St Andrews", published in 2013


Journal ArticleDOI
S. Hong Lee1, Stephan Ripke2, Stephan Ripke3, Benjamin M. Neale2  +402 moreInstitutions (124)
TL;DR: Empirical evidence of shared genetic etiology for psychiatric disorders can inform nosology and encourages the investigation of common pathophysiologies for related disorders.
Abstract: Most psychiatric disorders are moderately to highly heritable. The degree to which genetic variation is unique to individual disorders or shared across disorders is unclear. To examine shared genetic etiology, we use genome-wide genotype data from the Psychiatric Genomics Consortium (PGC) for cases and controls in schizophrenia, bipolar disorder, major depressive disorder, autism spectrum disorders (ASD) and attention-deficit/hyperactivity disorder (ADHD). We apply univariate and bivariate methods for the estimation of genetic variation within and covariation between disorders. SNPs explained 17-29% of the variance in liability. The genetic correlation calculated using common SNPs was high between schizophrenia and bipolar disorder (0.68 ± 0.04 s.e.), moderate between schizophrenia and major depressive disorder (0.43 ± 0.06 s.e.), bipolar disorder and major depressive disorder (0.47 ± 0.06 s.e.), and ADHD and major depressive disorder (0.32 ± 0.07 s.e.), low between schizophrenia and ASD (0.16 ± 0.06 s.e.) and non-significant for other pairs of disorders as well as between psychiatric disorders and the negative control of Crohn's disease. This empirical evidence of shared genetic etiology for psychiatric disorders can inform nosology and encourages the investigation of common pathophysiologies for related disorders.
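The bivariate estimates above reduce to a simple quantity: the genetic correlation is the genetic covariance between two disorders scaled by the geometric mean of their genetic variances. A minimal sketch of that closed form, using illustrative variance components (not values taken from the PGC analysis):

```python
import math

def genetic_correlation(cov_g, var_g1, var_g2):
    """Genetic correlation between two disorders: genetic covariance
    divided by the geometric mean of the two genetic variances."""
    return cov_g / math.sqrt(var_g1 * var_g2)

# Hypothetical SNP-based variance components on the liability scale
# (illustrative only; not the PGC estimates).
var_scz = 0.23       # variance in liability, schizophrenia
var_bip = 0.25       # variance in liability, bipolar disorder
cov_scz_bip = 0.163  # genetic covariance between the two

print(round(genetic_correlation(cov_scz_bip, var_scz, var_bip), 2))  # → 0.68
```

With these made-up components the correlation lands near the 0.68 reported for schizophrenia and bipolar disorder; the published standard errors come from the bivariate REML fit itself, not from this closed form.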

2,058 citations


Journal ArticleDOI
TL;DR: A perspective on the context and evolutionary significance of hybridization during speciation is offered, highlighting issues of current interest and debate and suggesting that the Dobzhansky–Muller model of hybrid incompatibilities requires a broader interpretation.
Abstract: Hybridization has many and varied impacts on the process of speciation. Hybridization may slow or reverse differentiation by allowing gene flow and recombination. It may accelerate speciation via adaptive introgression or cause near-instantaneous speciation by allopolyploidization. It may have multiple effects at different stages and in different spatial contexts within a single speciation event. We offer a perspective on the context and evolutionary significance of hybridization during speciation, highlighting issues of current interest and debate. In secondary contact zones, it is uncertain if barriers to gene flow will be strengthened or broken down due to recombination and gene flow. Theory and empirical evidence suggest the latter is more likely, except within and around strongly selected genomic regions. Hybridization may contribute to speciation through the formation of new hybrid taxa, whereas introgression of a few loci may promote adaptive divergence and so facilitate speciation. Gene regulatory networks, epigenetic effects and the evolution of selfish genetic material in the genome suggest that the Dobzhansky-Muller model of hybrid incompatibilities requires a broader interpretation. Finally, although the incidence of reinforcement remains uncertain, this and other interactions in areas of sympatry may have knock-on effects on speciation both within and outside regions of hybridization.

1,715 citations


Journal ArticleDOI
TL;DR: Analyzing carbon cathodes, cycled in Li-O2 cells between 2 and 4 V, using acid treatment and Fenton's reagent, combined with differential electrochemical mass spectrometry and FTIR, demonstrates that carbon is relatively stable below 3.5 V but is unstable on charging above 3.5 V.
Abstract: Carbon has been used widely as the basis of porous cathodes for nonaqueous Li–O2 cells. However, the stability of carbon and the effect of carbon on electrolyte decomposition in such cells are complex and depend on the hydrophobicity/hydrophilicity of the carbon surface. Analyzing carbon cathodes, cycled in Li–O2 cells between 2 and 4 V, using acid treatment and Fenton’s reagent, and combined with differential electrochemical mass spectrometry and FTIR, demonstrates the following: Carbon is relatively stable below 3.5 V (vs Li/Li+) on discharge or charge, especially so for hydrophobic carbon, but is unstable on charging above 3.5 V (in the presence of Li2O2), oxidatively decomposing to form Li2CO3. Direct chemical reaction with Li2O2 accounts for only a small proportion of the total carbon decomposition on cycling. Carbon promotes electrolyte decomposition during discharge and charge in a Li–O2 cell, giving rise to Li2CO3 and Li carboxylates (DMSO and tetraglyme electrolytes). The Li2CO3 and Li carboxylates…

1,124 citations


Journal ArticleDOI
TL;DR: A number of metrics that allow the electronic properties of NHCs to be quantified and compared have been explored; this review discusses these metrics and what they can teach about the electronic properties of NHCs.
Abstract: The use of N-heterocyclic carbenes (NHCs) in chemistry has developed rapidly over the past twenty years. These interesting compounds are predominantly employed in organometallic chemistry as ligands for various metal centres, and as organocatalysts able to mediate an exciting range of reactions. However, the sheer number of NHCs known in the literature can make the appropriate choice of NHC for a given application difficult. A number of metrics have been explored that allow the electronic properties of NHCs to be quantified and compared. In this review, we discuss these various metrics and what they can teach about the electronic properties of NHCs. Data for approximately three hundred NHCs are presented, obtained from a detailed survey of the literature.

839 citations


Journal ArticleDOI
TL;DR: This review aims to provide a comprehensive overview of current scientific knowledge on FCR and to formulate recommendations for future research to stimulate the research and the development of targeted interventions for cancer survivors and their carers.
Abstract: Purpose: Fear of cancer recurrence (FCR) is among the most commonly reported problems and one of the most prevalent areas of unmet needs for cancer survivors and their carers. This review aims to provide a comprehensive overview of current scientific knowledge on FCR and to formulate recommendations for future research.

814 citations


Journal ArticleDOI
TL;DR: This work shows that incorporation of a redox mediator, tetrathiafulvalene (TTF), enables recharging at rates that are impossible for the cell in the absence of the mediator.
Abstract: The non-aqueous Li-air (O2) battery is receiving intense interest because its theoretical specific energy exceeds that of Li-ion batteries. Recharging the Li-O2 battery depends on oxidizing solid lithium peroxide (Li2O2), which is formed on discharge within the porous cathode. However, transporting charge between Li2O2 particles and the solid electrode surface is at best very difficult and leads to voltage polarization on charging, even at modest rates. This is a significant problem facing the non-aqueous Li-O2 battery. Here we show that incorporation of a redox mediator, tetrathiafulvalene (TTF), enables recharging at rates that are impossible for the cell in the absence of the mediator. On charging, TTF is oxidized to TTF(+) at the cathode surface; TTF(+) in turn oxidizes the solid Li2O2, which results in the regeneration of TTF. The mediator acts as an electron-hole transfer agent that permits efficient oxidation of solid Li2O2. The cell with the mediator demonstrated 100 charge/discharge cycles.

774 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify and isolate the concept of the "triple bottom line" (TBL) as a core and dominant idea that continues to pervade business reporting, and business engagement with sustainability.
Abstract: This paper offers a critique of sustainability reporting and, in particular, a critique of the modern disconnect between the practice of sustainability reporting and what we consider to be the urgent issue of our era: sustaining the life-supporting ecological systems on which humanity and other species depend. Tracing the history of such reporting developments, we identify and isolate the concept of the ‘triple bottom line’ (TBL) as a core and dominant idea that continues to pervade business reporting, and business engagement with sustainability. Incorporating an entity’s economic, environmental and social performance indicators into its management and reporting processes, we argue, has become synonymous with corporate sustainability; in the process, concern for ecology has become sidelined. Moreover, this process has become reinforced and institutionalised through SustainAbility’s biennial benchmarking reports, KPMG’s triennial surveys of practice, initiatives by the accountancy profession and, particularly, the Global Reporting Initiative (GRI)’s sustainability reporting guidelines. We argue that the TBL and the GRI are insufficient conditions for organizations contributing to the sustaining of the Earth’s ecology. Paradoxically, they may reinforce business-as-usual and greater levels of un-sustainability.

765 citations


Journal ArticleDOI
TL;DR: It is demonstrated that growing nano-size phases from perovskites can be controlled through judicious choice of composition, particularly by tuning deviations from the ideal ABO3 stoichiometry.
Abstract: Surfaces decorated with uniformly dispersed catalytically active nanoparticles play a key role in many fields, including renewable energy and catalysis. Typically, these structures are prepared by deposition techniques, but alternatively they could be made by growing the nanoparticles in situ directly from the (porous) backbone support. Here we demonstrate that growing nano-size phases from perovskites can be controlled through judicious choice of composition, particularly by tuning deviations from the ideal ABO3 stoichiometry. This non-stoichiometry facilitates a change in equilibrium position to make particle exsolution much more dynamic, enabling the preparation of compositionally diverse nanoparticles (that is, metallic, oxides or mixtures) and seems to afford unprecedented control over particle size, distribution and surface anchorage. The phenomenon is also shown to be influenced strongly by surface reorganization characteristics. The concept exemplified here may serve in the design and development of more sophisticated oxide materials with advanced functionality across a range of possible domains of application.

711 citations


Journal ArticleDOI
TL;DR: It is shown that a TiC-based cathode reduces greatly side reactions and exhibits better reversible formation/decomposition of Li2O2 even than nanoporous gold and is also four times lighter, of lower cost and easier to fabricate.
Abstract: Rechargeable lithium-air (O2) batteries are receiving intense interest because their high theoretical specific energy exceeds that of lithium-ion batteries. If the Li-O2 battery is ever to succeed, highly reversible formation/decomposition of Li2O2 must take place at the cathode on cycling. However, carbon, used ubiquitously as the basis of the cathode, decomposes during Li2O2 oxidation on charge and actively promotes electrolyte decomposition on cycling. Replacing carbon with a nanoporous gold cathode, when in contact with a dimethyl sulphoxide-based electrolyte, does seem to demonstrate better stability. However, nanoporous gold is not a suitable cathode; its high mass destroys the key advantage of Li-O2 over Li ion (specific energy), it is too expensive and too difficult to fabricate. Identifying a suitable cathode material for the Li-O2 cell is one of the greatest challenges at present. Here we show that a TiC-based cathode reduces greatly side reactions (arising from the electrolyte and electrode degradation) compared with carbon and exhibits better reversible formation/decomposition of Li2O2 even than nanoporous gold (>98% capacity retention after 100 cycles, compared with 95% for nanoporous gold); it is also four times lighter, of lower cost and easier to fabricate. The stability may originate from the presence of TiO2 (along with some TiOC) on the surface of TiC. In contrast to carbon or nanoporous gold, TiC seems to represent a more viable, stable, cathode for aprotic Li-O2 cells.

706 citations


Journal ArticleDOI
01 Jan 2013-BMJ Open
TL;DR: There are promising strategies for increasing recruitment to trials, but some methods, such as open-trial designs and opt-out strategies, must be considered carefully as their use may also present methodological or ethical challenges.
Abstract: Objective: To identify interventions designed to improve recruitment to randomised controlled trials, and to quantify their effect on trial participation. Design: Systematic review. Data sources: The Cochrane Methodology Review Group Specialised Register in the Cochrane Library, MEDLINE, EMBASE, ERIC, Science Citation Index, Social Sciences Citation Index, C2-SPECTR, the National Research Register and PubMed. Most searches were undertaken up to 2010; no language restrictions were applied. Study selection: Randomised and quasi-randomised controlled trials, including those recruiting to hypothetical studies. Studies on retention strategies, examining ways to increase questionnaire response or evaluating the use of incentives for clinicians were excluded. The study population included any potential trial participant (eg, patient, clinician and member of the public), or individual or group of individuals responsible for trial recruitment (eg, clinicians, researchers and recruitment sites). Two authors independently screened identified studies for eligibility. Results: 45 trials with over 43 000 participants were included. Some interventions were effective in increasing recruitment: telephone reminders to non-respondents (risk ratio (RR) 1.66, 95% CI 1.03 to 2.46; two studies, 1058 participants), use of opt-out rather than opt-in procedures for contacting potential participants (RR 1.39, 95% CI 1.06 to 1.84; one study, 152 participants) and open designs where participants know which treatment they are receiving in the trial (RR 1.22, 95% CI 1.09 to 1.36; two studies, 4833 participants). However, the effect of many other strategies is less clear, including the use of video to provide trial information and interventions aimed at recruiters.
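The effect sizes quoted above are risk ratios with Wald confidence intervals computed on the log scale. A hedged sketch of that calculation from a hypothetical 2×2 table (the counts below are invented for illustration, not the review's data):

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio (intervention vs control) with a Wald 95% CI
    computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical counts: 60/200 recruited with telephone reminders
# vs 40/220 without (illustrative values only).
rr, lower, upper = risk_ratio_ci(60, 200, 40, 220)
print(f"RR {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")  # → RR 1.65 (95% CI 1.16 to 2.34)
```

A CI that excludes 1.0, as here, is what marks strategies like telephone reminders as "effective" in the review's summary.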

546 citations


Journal ArticleDOI
TL;DR: In this paper, detrital zircons have been used to estimate that at least 60% to 70% of the present volume of the continental crust had been generated by 3 Ga, which may have been linked to the onset of significant crustal recycling through subduction at convergent plate margins.
Abstract: Continental crust is the archive of Earth history. The spatial and temporal distribution of Earth’s record of rock units and events is heterogeneous; for example, ages of igneous crystallization, metamorphism, continental margins, mineralization, and sea water and atmospheric proxies are distributed about a series of peaks and troughs. This distribution reflects the different preservation potential of rocks generated in different tectonic settings, rather than fundamental pulses of activity, and the peaks of ages are linked to the timing of supercontinent assembly. The physiochemical resilience of zircons and their derivation largely from felsic igneous rocks means that they are important indicators of the crustal record. Furthermore, detrital zircons, which sample a range of source rocks, provide a more representative record than direct analysis of grains in igneous rocks. Analysis of detrital zircons suggests that at least ~60%–70% of the present volume of the continental crust had been generated by 3 Ga. Such estimates seek to take account of the extent to which the old crustal material is underrepresented in the sedimentary record, and they imply that there were greater volumes of continental crust in the Archean than might be inferred from the compositions of detrital zircons and sediments. The growth of continental crust was a continuous rather than an episodic process, but there was a marked decrease in the rate of crustal growth at ca. 3 Ga, which may have been linked to the onset of significant crustal recycling, probably through subduction at convergent plate margins. The Hadean and Early Archean continental record is poorly preserved and characterized by a bimodal TTG (tonalites, trondhjemites, and granodiorites) and greenstone association that differs from the younger record that can be more directly related to a plate-tectonic regime.
The paucity of this early record has led to competing and equivocal models invoking plate-tectonic– and mantle-plume–dominated processes. The 60%–70% of the present volume of the continental crust estimated to have been present at 3 Ga contrasts markedly with the <10% of crust of that age apparently still preserved and requires ongoing destruction (recycling) of crust and subcontinental mantle lithosphere back into the mantle through processes such as subduction and delamination.

Journal ArticleDOI
01 Aug 2013-Geology
TL;DR: The South China craton first formed during the assembly of Rodinia at the end of the Mesoproterozoic and occupied a position adjacent to Western Australia and northern India in the early Neoproterozoic, as discussed by the authors.
Abstract: From the formation of Rodinia at the end of the Mesoproterozoic to the commencement of Pangea breakup at the end of the Paleozoic, the South China craton first formed and then occupied a position adjacent to Western Australia and northern India. Early Neoproterozoic suprasubduction zone magmatic arc-backarc assemblages in the craton range in age from ca. 1000 Ma to 820 Ma and display a sequential northwest decrease in age. These relations suggest formation and closure of arc systems through southeast-directed subduction, resulting in progressive northwestward accretion onto the periphery of an already assembled Rodinia. Siliciclastic units within an early Paleozoic succession that transgresses across the craton were derived from the southeast and include detritus from beyond the current limits of the craton. Detrital zircon age spectra require an East Gondwana source and are very similar to the Tethyan Himalaya and younger Paleozoic successions from Western Australia, suggesting derivation from a common source and by inference accumulation in linked basins along the northern margin of Gondwana, a situation that continued until rifting and breakup of the craton in the late Paleozoic.

Journal ArticleDOI
TL;DR: In this paper, the authors present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field, and provide a framework for acoustics-based density estimation, illustrated with real-world case studies.
Abstract: Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here.
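For fixed sensors, a standard cue-counting form of the density estimator divides the number of detected cues by the effective surveyed area, monitoring time and per-animal cue rate. A sketch of that estimator under assumed, purely hypothetical parameter values:

```python
import math

def cue_density(n_cues, k_sensors, w, p_detect, t_mon, cue_rate, false_pos=0.0):
    """Cue-counting density estimator for fixed acoustic sensors:
        D = n * (1 - c) / (K * pi * w**2 * P * T * r)
    n: cues detected, c: estimated false-positive proportion,
    K: number of sensors, w: truncation radius, P: mean detection
    probability within w, T: monitoring time, r: cue rate per animal.
    Units must be consistent (here km and hours)."""
    surveyed_area = k_sensors * math.pi * w ** 2
    return n_cues * (1 - false_pos) / (surveyed_area * p_detect * t_mon * cue_rate)

# Hypothetical survey: 1000 cues on 5 hydrophones, 2 km truncation,
# P = 0.4, 100 h of monitoring, 10 cues per animal per hour.
d = cue_density(1000, 5, 2.0, 0.4, 100, 10)
print(f"{d:.4f} animals per km^2")  # → 0.0398 animals per km^2
```

The fundamental-research needs flagged above map directly onto the arguments: `cue_rate` comes from vocalization-rate studies, `p_detect` from calibrated detection trials, and `false_pos` from evaluating the automated classifier.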

Journal ArticleDOI
TL;DR: The method combines the best features of existing standard methodologies, such as principal component and cluster analyses, to provide a geometric representation of complex data sets, finding subgroups that traditional methodologies fail to find.
Abstract: This paper applies topological methods to study complex high dimensional data sets by extracting shapes (patterns) and obtaining insights about them. Our method combines the best features of existing standard methodologies such as principal component and cluster analyses to provide a geometric representation of complex data sets. Through this hybrid method, we often find subgroups in data sets that traditional methodologies fail to find. Our method also permits the analysis of individual data sets as well as the analysis of relationships between related data sets. We illustrate the use of our method by applying it to three very different kinds of data, namely gene expression from breast tumors, voting data from the United States House of Representatives and player performance data from the NBA, in each case finding stratifications of the data which are more refined than those produced by standard methods.
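The construction the paper builds on (often called "Mapper") can be sketched in a few steps: cover the range of a filter function with overlapping intervals, cluster the points in each preimage, make each cluster a node, and connect nodes whose clusters share points. Below is a deliberately minimal one-dimensional sketch, assuming an identity filter and fixed-gap single-linkage clustering; the actual method works with arbitrary filters and clustering schemes:

```python
import numpy as np

def mapper_1d(points, filter_vals, n_intervals=5, overlap=0.3, gap=1.0):
    """Minimal Mapper-style graph for 1-D data: overlapping interval
    cover of the filter range, single-linkage clustering (split where
    consecutive points are more than `gap` apart) within each
    preimage, and edges between clusters that share points."""
    lo, hi = filter_vals.min(), filter_vals.max()
    length = (hi - lo) / (n_intervals * (1 - overlap) + overlap)
    nodes, edges = [], set()
    for i in range(n_intervals):
        start = lo + i * length * (1 - overlap)
        idx = np.where((filter_vals >= start) & (filter_vals <= start + length))[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(points[idx])]
        clusters, cluster = [], [order[0]]
        for a, b in zip(order, order[1:]):
            if points[b] - points[a] <= gap:
                cluster.append(b)
            else:
                clusters.append(cluster)
                cluster = [b]
        clusters.append(cluster)
        for c in clusters:
            new = set(c)
            # connect to earlier nodes sharing points (interval overlap)
            for j, old in enumerate(nodes):
                if new & old:
                    edges.add((j, len(nodes)))
            nodes.append(new)
    return nodes, edges

# Two well-separated clumps: the graph has two disconnected nodes.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
nodes, edges = mapper_1d(pts, pts, n_intervals=2, overlap=0.3)
print(len(nodes), len(edges))  # → 2 0
```

The "shape" the paper extracts is exactly this graph: loops and flares in it correspond to the subgroups that PCA or flat clustering alone tend to miss.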

Journal ArticleDOI
TL;DR: The authors survey recent theory and evidence on strategic thinking, illustrate the applications of level-k models in economics, and show that even when learning is possible and converges to equilibrium, such models allow better predictions of history-dependent limiting outcomes.
Abstract: Most applications of game theory assume equilibrium, justified by presuming either that learning will have converged to one, or that equilibrium approximates people’s strategic thinking even when a learning justification is implausible. Yet several recent experimental and empirical studies suggest that people’s initial responses to games often deviate systematically from equilibrium, and that structural nonequilibrium “level-k” or “cognitive hierarchy” models often out-predict equilibrium. Even when learning is possible and converges to equilibrium, such models allow better predictions of history-dependent limiting outcomes. This paper surveys recent theory and evidence on strategic thinking and illustrates the applications of level-k models in economics. (JEL C70, D03, D82, D83)
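A standard illustration of level-k reasoning is the p-beauty contest, in which players guess a number and the winner is whoever is closest to p times the average guess. Level-0 anchors on a salient guess (conventionally 50); each higher level best responds to the level below. A sketch with the common p = 2/3 parameterization (an illustrative convention, not data from the survey):

```python
def level_k_guesses(p=2/3, anchor=50.0, max_level=3):
    """Level-k guesses in the p-beauty contest: level-0 plays the
    anchor; level-k best responds to level-(k-1) by guessing p
    times its guess. The equilibrium (k -> infinity) is 0."""
    guesses = [anchor]
    for _ in range(max_level):
        guesses.append(p * guesses[-1])
    return guesses

print([round(g, 1) for g in level_k_guesses()])  # → [50.0, 33.3, 22.2, 14.8]
```

Experimental guesses typically cluster near the level-1 and level-2 predictions rather than the equilibrium of zero, which is the kind of systematic initial deviation from equilibrium the survey documents.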

Journal ArticleDOI
TL;DR: In this paper, the ultraviolet (UV) galaxy luminosity function (LF) at redshift z ≃ 7 and 8 was determined, along with a first estimate at z ≃ 9.
Abstract: We present a new determination of the ultraviolet (UV) galaxy luminosity function (LF) at redshift z ≃ 7 and 8, and a first estimate at z ≃ 9. An accurate determination of the form and evolution of the galaxy LF during this era is of key importance for improving our knowledge of the earliest phases of galaxy evolution and the process of cosmic reionization. Our analysis exploits to the full the new, deepest Wide Field Camera 3/infrared imaging from our Hubble Space Telescope (HST) Ultra-Deep Field 2012 (UDF12) campaign, with dynamic range provided by including a new and consistent analysis of all appropriate, shallower/wider area HST survey data. Our new measurement of the evolving LF at z ≃ 7 to 8 is based on a final catalogue of ≃600 galaxies, and involves a step-wise maximum-likelihood determination based on the photometric redshift probability distribution for each object; this approach makes full use of the 11-band imaging now available in the Hubble Ultra-Deep Field (HUDF), including the new UDF12 F140W data, and the latest Spitzer IRAC imaging. The final result is a determination of the z ≃ 7 LF extending down to UV absolute magnitudes M_1500 = −16.75 (AB mag) and the z ≃ 8 LF down to M_1500 = −17.00. Fitting a Schechter function, we find M*_1500 = −19.90^(+0.23)_(−0.28), log ϕ* = −2.96^(+0.18)_(−0.23) and a faint-end slope α = −1.90^(+0.14)_(−0.15) at z ≃ 7, and M*_1500 = −20.12^(+0.37)_(−0.48), log ϕ* = −3.35^(+0.28)_(−0.47) and α = −2.02^(+0.22)_(-0.23) at z ≃ 8. These results strengthen previous suggestions that the evolution at z > 7 appears more akin to ‘density evolution’ than the apparent ‘luminosity evolution’ seen at z ≃ 5 − 7. We also provide the first meaningful information on the LF at z ≃ 9, explore alternative extrapolations to higher redshifts, and consider the implications for the early evolution of UV luminosity density. 
Finally, we provide catalogues (including derived z_phot, M_1500 and photometry) for the most robust z ∼ 6.5-11.9 galaxies used in this analysis. We briefly discuss our results in the context of earlier work and the results derived from an independent analysis of the UDF12 data based on colour–colour selection.
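For readers wanting to reproduce the shape of the fitted LF, the Schechter function in its absolute-magnitude form can be evaluated directly from the quoted parameters. A sketch (the functional form is the standard one; treat the numerical output as illustrative):

```python
import math

def schechter_phi(M, M_star, log_phi_star, alpha):
    """Schechter luminosity function per unit magnitude:
        phi(M) = 0.4 ln(10) * phi_star * x**(alpha + 1) * exp(-x),
    with x = 10**(-0.4 * (M - M_star))."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10) * 10 ** log_phi_star * x ** (alpha + 1) * math.exp(-x)

# Best-fitting z ~ 7 parameters quoted in the abstract.
phi = schechter_phi(-18.0, M_star=-19.90, log_phi_star=-2.96, alpha=-1.90)
print(f"{phi:.1e} mag^-1 Mpc^-3")  # → 4.1e-03 mag^-1 Mpc^-3
```

The steep faint-end slope α ≈ −1.9 means the number density keeps rising toward the faint limit M_1500 = −16.75, which is why the faint galaxies probed by UDF12 dominate the integrated UV luminosity density relevant to reionization.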

Journal ArticleDOI
TL;DR: The dual syndrome hypothesis distinguishes between dopaminergically mediated fronto-striatal executive impairments and a dementia syndrome with distinctive prodromal visuospatial deficits in which cholinergic treatments offer some clinical benefit.
Abstract: Research into the heterogeneous nature of cognitive impairment documented in patients with Parkinson's disease (PD) has focused on disentangling deficits that vary between individuals, evolve and respond differentially to pharmacological treatments, and relate differentially to PD dementia (PDD). We summarise studies conducted in our laboratory over the last 2 decades, outlining the incremental development of our hypotheses, the starting point for which is our early work on executive deficits mirroring fronto-striatal dysfunction. We present subsequent findings linking these deficits to a model of dopaminergic function that conforms to an inverted curvilinear function. We review studies that investigated the range of dopamine-independent attentional and visuospatial memory deficits seen in PD, demonstrating that abnormalities in these domains more accurately predict PDD. We conclude with an exposition of the dual syndrome hypothesis, which distinguishes between dopaminergically mediated fronto-striatal executive impairments and a dementia syndrome with distinctive prodromal visuospatial deficits in which cholinergic treatments offer some clinical benefits.

Book
01 Jul 2013
TL;DR: This book classifies the maximal subgroups of the almost simple finite classical groups in dimension up to 12, and describes the maximal subgroups of the almost simple exceptional groups with socle one of Sz(q), G2(q), 2G2(q) or 3D4(q).
Abstract: This book classifies the maximal subgroups of the almost simple finite classical groups in dimension up to 12; it also describes the maximal subgroups of the almost simple finite exceptional groups with socle one of Sz(q), G2(q), 2G2(q) or 3D4(q). Theoretical and computational tools are used throughout, with downloadable Magma code provided. The exposition contains a wealth of information on the structure and action of the geometric subgroups of classical groups, but the reader will also encounter methods for analysing the structure and maximality of almost simple subgroups of almost simple groups. Additionally, this book contains detailed information on using Magma to calculate with representations over number fields and finite fields. Featured within are previously unseen results and over 80 tables describing the maximal subgroups, making this volume an essential reference for researchers. It also functions as a graduate-level textbook on finite simple groups, computational group theory and representation theory.

Journal ArticleDOI
26 Apr 2013-Science
TL;DR: Here, it is shown experimentally that wild vervet monkeys will abandon personal foraging preferences in favor of group norms new to them, demonstrating social learning to be a more potent force than hitherto recognized in shaping group differences among wild animals.
Abstract: Conformity to local behavioral norms reflects the pervading role of culture in human life. Laboratory experiments have begun to suggest a role for conformity in animal social learning, but evidence from the wild remains circumstantial. Here, we show experimentally that wild vervet monkeys will abandon personal foraging preferences in favor of group norms new to them. Groups first learned to avoid the bitter-tasting alternative of two foods. Presentations of these options untreated months later revealed that all new infants naive to the foods adopted maternal preferences. Males who migrated between groups where the alternative food was eaten switched to the new local norm. Such powerful effects of social learning represent a more potent force than hitherto recognized in shaping group differences among wild animals.

Journal ArticleDOI
15 Nov 2013-Science
TL;DR: The spread of tree pests and diseases as a result of globalization and climate change is reviewed, and the resulting damage to timber and fruit production, climate regulation, and parks and woodlands is analyzed.
Abstract: Background: Trees are major components of many terrestrial ecosystems and are grown in managed plantations and orchards to provide a variety of economically important products, including timber, pulp, fiber, and food. They are subject to a wide range of pests and diseases, of which the most important causative agents are viruses, bacteria, fungi, oomycetes, and insect herbivores. Research on tree pests and diseases has had a historical focus on trees of direct economic importance. However, some epidemics and infestations have damaged and killed common trees that are integral parts of natural ecosystems. These have harmed valuable landscapes and highlighted the wide-ranging consequences arising from tree pests and diseases. There is also growing concern that aspects of globalization (in particular, higher volumes and new forms of trade) may increase the risk of disease spread. Consider a forest providing numerous ecosystem services that is subject to a disease epidemic reducing the abundance of a dominant native species, resulting in a change in forest structure. Initially, a wide range of ecosystem services are harmed. But as trees grow to replace the lost species, some (perhaps carbon storage or water purification) are regained, whereas others (perhaps the biodiversity supported by the diseased tree species) are permanently disrupted. Policy measures can help both to prevent new diseases being introduced and to improve recovery through management practices or planting resistant trees.
Advances: We review the challenges in maintaining tree health in natural and managed ecosystems. It is argued that it is helpful to consider explicitly the consequences of pests and diseases for the full range of ecosystem services provided by trees. In addition to forest and orchard products, tree pests and diseases can affect the ability of forests to sequester and store carbon, reduce flood risk, and purify water. They can affect the biodiversity supported by trees and the recreational and cultural values accorded to woodland by people. Many of these benefits are uncosted and enjoyed by different classes of stakeholders, which raises difficult questions about who should be responsible for measures to protect tree health. Changes in the risk of pest and disease introduction, the increasing prevalence of genetic reassortment leading to novel disease threats, and the potential role of climate change are all highlighted.
Outlook: Modern pest and disease management is based on an extensive science base that is rapidly developing, spurred in particular by modern molecular technologies. A research priority is to build a better understanding of why certain pathogens and insects become major pests and diseases. This will involve a better understanding of the molecular basis of pathogenicity and herbivory, as will ecological insights into why some species reach epidemic prevalence or abundance. It will also help anticipate which species may become a problem if they are transported to new geographical regions, recombine with other organisms, or experience new climatic conditions. However, identifying all species that may become pests will be impossible, and the Review stresses the importance of risk management at the “pathway of introduction” level, especially when modern trade practices provide potential new routes of entry. Last, when ecosystem services are provided by woods and forests rather than individual tree species, we need to understand better the consequences of pests and diseases that attack or feed on particular species.

Journal ArticleDOI
15 Feb 2013-Science
TL;DR: It is found that the resistivity of the quantum critical metal Sr3Ru2O7 is also T-linear at the critical magnetic field of 7.9 T, and the scattering rate per kelvin is well approximated by the ratio of the Boltzmann constant to the Planck constant divided by 2π.
Abstract: Many exotic compounds, such as cuprate superconductors and heavy fermion materials, exhibit a linear in temperature (T) resistivity, the origin of which is not well understood. We found that the resistivity of the quantum critical metal Sr3Ru2O7 is also T-linear at the critical magnetic field of 7.9 T. Using the precise existing data for the Fermi surface topography and quasiparticle velocities of Sr3Ru2O7, we show that in the region of the T-linear resistivity, the scattering rate per kelvin is well approximated by the ratio of the Boltzmann constant to the Planck constant divided by 2π. Extending the analysis to a number of other materials reveals similar results in the T-linear region, in spite of large differences in the microscopic origins of the scattering.
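The quoted bound is easy to make concrete: the claim is that in the T-linear regime the scattering rate per kelvin approaches k_B/ℏ (the "Planckian" rate), equivalently a scattering time τ = ℏ/(k_B·T). A minimal numerical sketch, with CODATA constant values hard-coded for self-containment (the temperature chosen below is illustrative, not from the paper):

```python
# Planckian scattering bound: rate per kelvin ~ k_B / hbar.
# Constants are CODATA 2018 values, hard-coded to keep the sketch self-contained.
K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant (h / 2*pi), J*s

rate_per_kelvin = K_B / HBAR  # units: s^-1 K^-1, about 1.31e11
print(f"k_B / hbar = {rate_per_kelvin:.3e} s^-1 K^-1")

# A T-linear resistivity with this slope implies a scattering time
# tau(T) = hbar / (k_B * T); e.g. at an illustrative 10 K:
T = 10.0  # kelvin
tau = HBAR / (K_B * T)
print(f"tau at {T} K = {tau:.3e} s")
```

This is only unit bookkeeping, but it shows why the result is striking: a single combination of fundamental constants sets the slope across materials with very different microscopic physics.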

Journal ArticleDOI
TL;DR: GS:SFHS is a family-based genetic epidemiology study with DNA and socio-demographic and clinical data from about 24 000 volunteers across Scotland from February 2006 to March 2011 to maximize the power of the resource to identify, replicate or control for genetic factors associated with a wide spectrum of illnesses and risk factors.
Abstract: GS:SFHS is a family-based genetic epidemiology study with DNA and socio-demographic and clinical data from about 24 000 volunteers across Scotland, aged 18–98 years, recruited from February 2006 to March 2011. Biological samples and anonymized data form a resource for research on the genetics of health, disease and quantitative traits of current and projected public health importance. Specific and important features of GS:SFHS include the family-based recruitment, with the intent of obtaining family groups; the breadth and depth of phenotype information, including detailed data on cognitive function, personality traits and mental health; consent and mechanisms for linkage of all data to comprehensive routine health-care records; ‘broad’ consent from participants to use their data and samples for a wide range of medical research, including commercial research, and for re-contact for the potential collection of other data or samples, or for participation in related studies; and the design and review of the protocol in parallel with in-depth sociological research on (potential) participants and users of the research outcomes. These features were designed to maximize the power of the resource to identify, replicate or control for genetic factors associated with a wide spectrum of illnesses and risk factors, both now and in the future.

Journal ArticleDOI
TL;DR: For the first time, artificial ammonia synthesis bypassing N2 separation and H2 production stages is reported, and potentially this can provide an alternative route for the mass production of the basic chemical ammonia under mild conditions.
Abstract: The N≡N bond (225 kcal mol−1) in dinitrogen is one of the strongest bonds in chemistry; therefore, the artificial synthesis of ammonia under mild conditions is a significant challenge. Based on current knowledge, only bacteria and some plants can synthesise ammonia from air and water at ambient temperature and pressure. Here, for the first time, we report artificial ammonia synthesis bypassing the N2 separation and H2 production stages. A maximum ammonia production rate of 1.14 × 10−5 mol m−2 s−1 has been achieved when a voltage of 1.6 V was applied. Potentially, this can provide an alternative route for the mass production of the basic chemical ammonia under mild conditions. Considering climate change and the depletion of fossil fuels used for the synthesis of ammonia by conventional methods, this is a renewable and sustainable chemical synthesis process for the future.
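For electrochemical ammonia synthesis of this kind, Faraday's law links the reported molar production rate to an equivalent current density, assuming three electrons per NH3 (N2 + 6H+ + 6e− → 2NH3). The helper functions below and the 100% Faradaic efficiency figure are illustrative assumptions of mine, not values from the paper (which does not report Faradaic efficiency in the abstract):

```python
# Faraday's-law bookkeeping for electrochemical NH3 synthesis.
# Assumption: 3 electrons consumed per NH3 (N2 + 6 H+ + 6 e- -> 2 NH3).
F = 96485.332  # Faraday constant, C/mol

def nh3_rate(current_density, faradaic_eff):
    """Molar NH3 rate (mol m^-2 s^-1) from current density (A m^-2)."""
    return current_density * faradaic_eff / (3 * F)

def current_for_rate(rate, faradaic_eff):
    """Current density (A m^-2) needed to sustain a given NH3 rate."""
    return rate * 3 * F / faradaic_eff

# The abstract's peak rate, 1.14e-5 mol m^-2 s^-1, would correspond to
# roughly 3.3 A m^-2 if every electron went into NH3 (100% efficiency):
j = current_for_rate(1.14e-5, 1.0)
print(f"current density at 100% FE: {j:.2f} A m^-2")
```

Any real Faradaic efficiency below 100% scales the required current up proportionally, which is the usual practical constraint for this route.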

Journal ArticleDOI
TL;DR: A top-down strategy that involves the disassembly of a parent zeolite, UTL, and its reassembly into two zeolites with targeted topologies, I PC-2 and IPC-4 is reported, enabling the synthesis of materials with predetermined pore architectures.
Abstract: The properties of zeolites, and thus their suitability for different applications, are intimately connected with their structures. Synthesizing specific architectures is therefore important, but has remained challenging. Here we report a top-down strategy that involves the disassembly of a parent zeolite, UTL, and its reassembly into two zeolites with targeted topologies, IPC-2 and IPC-4. The three zeolites are closely related as they adopt the same layered structure, and they differ only in how the layers are connected. Choosing different linkers gives rise to different pore sizes, enabling the synthesis of materials with predetermined pore architectures. The structures of the resulting zeolites were characterized by interpreting the X-ray powder-diffraction patterns through models using computational methods; IPC-2 exhibits orthogonal 12- and ten-ring channels, and IPC-4 is a more complex zeolite that comprises orthogonal ten- and eight-ring channels. We describe how this method enables the preparation of functional materials and discuss its potential for targeting other new zeolites.


Journal ArticleDOI
TL;DR: In this paper, the authors consider how one of the oldest and most widely applied statistical methods, principal components analysis (PCA), is employed with spatial data, and identify four main methodologies, which are defined as (1) PCA applied to spatial objects, (2) PCAs applied to raster data, (3) atmospheric science PCA, and (4)PCA on flows.
Abstract: This article considers critically how one of the oldest and most widely applied statistical methods, principal components analysis (PCA), is employed with spatial data. We first provide a brief guide to how PCA works: This includes robust and compositional PCA variants, links to factor analysis, latent variable modeling, and multilevel PCA. We then present two different approaches to using PCA with spatial data. First we look at the nonspatial approach, which avoids challenges posed by spatial data by using a standard PCA on attribute space only. Within this approach we identify four main methodologies, which we define as (1) PCA applied to spatial objects, (2) PCA applied to raster data, (3) atmospheric science PCA, and (4) PCA on flows. In the second approach, we look at PCA adapted for effects in geographical space by looking at PCA methods adapted for first-order nonstationary effects (spatial heterogeneity) and second-order stationary effects (spatial autocorrelation). We also describe how PCA can be...
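As a concrete illustration of the "nonspatial approach" the authors describe, a standard PCA applied to attribute space only, ignoring coordinates, here is a minimal from-scratch sketch for two attributes. The data table and all variable names are invented for illustration; they are not from the article:

```python
import math

# Toy attribute table: rows are spatial objects (e.g. census zones),
# columns are two attributes. Coordinates are deliberately ignored,
# which is exactly the "nonspatial" use of PCA described above.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Sample covariance matrix of the centered attributes.
sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Eigenvalues of the 2x2 symmetric covariance matrix, in closed form.
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc  # lam1 >= lam2

share = lam1 / (lam1 + lam2)
print(f"first component explains {share:.1%} of the variance")
```

The spatially adapted variants the article surveys (for heterogeneity and autocorrelation) modify this pipeline, for example by localizing the covariance estimate, rather than the eigen-decomposition itself.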

Journal ArticleDOI
26 Apr 2013-Science
TL;DR: Network-based diffusion analysis is used to reveal the cultural spread of a naturally occurring foraging innovation, lobtail feeding, through a population of humpback whales over a period of 27 years, strengthening the case that cetaceans represent a peak in the evolution of nonhuman culture, independent of the primate lineage.
Abstract: We used network-based diffusion analysis to reveal the cultural spread of a naturally occurring foraging innovation, lobtail feeding, through a population of humpback whales (Megaptera novaeangliae) over a period of 27 years. Support for models with a social transmission component was 6 to 23 orders of magnitude greater than for models without. The spatial and temporal distribution of sand lance, a prey species, was also important in predicting the rate of acquisition. Our results, coupled with existing knowledge about song traditions, show that this species can maintain multiple independently evolving traditions in its populations. These insights strengthen the case that cetaceans represent a peak in the evolution of nonhuman culture, independent of the primate lineage.
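The core assumption of network-based diffusion analysis is that a naive individual's chance of acquiring the behaviour scales with its network associations to informed individuals, on top of a baseline asocial learning rate. The toy forward simulation below illustrates that assumption; the network weights, rates, and population are invented, and this is not the authors' whale data or their model-fitting code:

```python
import random

random.seed(42)

# Symmetric association network among 6 individuals (invented weights).
assoc = {
    (0, 1): 1.0, (0, 2): 0.5, (1, 2): 0.8,
    (2, 3): 0.6, (3, 4): 1.0, (4, 5): 0.7,
}
def w(i, j):
    return assoc.get((min(i, j), max(i, j)), 0.0)

n, base_rate, social_rate = 6, 0.01, 0.5
informed = {0}  # individual 0 innovates the behaviour
order = [0]

# Discrete-time simulation: each step, a naive individual's acquisition
# probability combines asocial learning with social transmission weighted
# by its connections to informed individuals (the NBDA assumption).
for step in range(200):
    for i in range(n):
        if i in informed:
            continue
        hazard = base_rate + social_rate * sum(w(i, j) for j in informed)
        if random.random() < min(hazard, 1.0):
            informed.add(i)
            order.append(i)
    if len(informed) == n:
        break

print("acquisition order:", order)
```

Fitting NBDA to real data runs this logic in reverse: it compares the likelihood of the observed acquisition order under models with and without the social term, which is where the "orders of magnitude" support in the abstract comes from.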

Journal ArticleDOI
TL;DR: This work uses nanoscale photocurrent mapping, ultrafast fluorescence and exciton diffusion to observe the detailed morphology of a high-performance blend of PTB7:PC71BM, and shows that optimized blends consist of elongated fullerene-rich and polymer-rich fibre-like domains.
Abstract: The morphology of bulk heterojunction organic photovoltaic cells controls many of the performance characteristics of devices. However, measuring this morphology is challenging because of the small length-scales and low contrast between organic materials. Here we use nanoscale photocurrent mapping, ultrafast fluorescence and exciton diffusion to observe the detailed morphology of a high-performance blend of PTB7:PC71BM. We show that optimized blends consist of elongated fullerene-rich and polymer-rich fibre-like domains, which are 10–50 nm wide and 200–400 nm long. These elongated domains provide a concentration gradient for directional charge diffusion that helps in the extraction of charge pairs with 80% efficiency. In contrast, blends with agglomerated fullerene domains show a much lower charge extraction efficiency of ~45%, which is attributed to poor electron and hole transport. Our results show that the formation of narrow and elongated domains is desirable for efficient bulk heterojunction solar cells.

Journal ArticleDOI
TL;DR: This year is the 50th anniversary of Tinbergen's article 'On aims and methods of ethology', where he first outlined the four 'major problems of biology', and it would seem a suitable opportunity to reflect on the four questions and evaluate the scientific work that they encourage.
Abstract: This year is the 50th anniversary of Tinbergen’s (1963) article ‘On aims and methods of ethology’, where he first outlined the four ‘major problems of biology’. The classification of the four problems, or questions, is one of Tinbergen’s most enduring legacies, and it remains as valuable today as 50 years ago in highlighting the value of a comprehensive, multifaceted understanding of a characteristic, with answers to each question providing complementary insights. Nonetheless, much has changed in the intervening years, and new data call for a more nuanced application of Tinbergen’s framework. The anniversary would seem a suitable opportunity to reflect on the four questions and evaluate the scientific work that they encourage.

Journal ArticleDOI
TL;DR: This paper describes the implementation of Idris, a new dependently typed functional programming language, and presents a tactic-based method for elaborating concrete high-level syntax with implicit arguments and type classes into a fully explicit type theory.
Abstract: Many components of a dependently-typed programming language are by now well understood, for example the underlying type theory, type checking, unification and evaluation. How to combine these components into a realistic and usable high-level language is, however, folklore, discovered anew by successive language implementors. In this paper, I describe the implementation of IDRIS, a new dependently-typed functional programming language. IDRIS is intended to be a general purpose programming language and as such provides high-level concepts such as implicit syntax, type classes and do notation. I describe the high-level language and the underlying type theory, and present a tactic-based method for elaborating concrete high-level syntax with implicit arguments and type classes into a fully explicit type theory. Furthermore, I show how this method facilitates the implementation of new high-level language constructs.