
Showing papers by "University of Cambridge" published in 2008


Journal ArticleDOI
TL;DR: Mercury as discussed by the authors is a crystal structure visualization tool that allows highly customizable searching of structural databases for intermolecular interaction motifs and packing patterns, as well as the ability to perform packing similarity calculations between structures containing the same compound.
Abstract: The program Mercury, developed by the Cambridge Crystallographic Data Centre, is designed primarily as a crystal structure visualization tool. A new module of functionality has been produced, called the Materials Module, which allows highly customizable searching of structural databases for intermolecular interaction motifs and packing patterns. This new module also includes the ability to perform packing similarity calculations between structures containing the same compound. In addition to the Materials Module, a range of further enhancements to Mercury has been added in this latest release, including void visualization and links to ConQuest, Mogul and IsoStar.

7,879 citations
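The packing similarity calculation mentioned above boils down to overlaying clusters of molecules from two structures and scoring the overlay. A minimal sketch of the scoring step only (the `rmsd` helper below is hypothetical; Mercury's actual algorithm also matches molecules between clusters and accounts for symmetry):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) atom positions, assumed already overlaid on each other."""
    assert len(coords_a) == len(coords_b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(total / len(coords_a))
```

A low RMSD over a cluster of molecules (rather than a single molecule) is what signals that two structures share the same packing pattern.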


Journal ArticleDOI
TL;DR: Graphene dispersions with concentrations up to approximately 0.01 mg ml⁻¹, produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone, are demonstrated.
Abstract: Fully exploiting the properties of graphene will require a method for the mass production of this remarkable material. Two main routes are possible: large-scale growth or large-scale exfoliation. Here, we demonstrate graphene dispersions with concentrations up to approximately 0.01 mg ml⁻¹, produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone. This is possible because the energy required to exfoliate graphene is balanced by the solvent-graphene interaction for solvents whose surface energies match that of graphene. We confirm the presence of individual graphene sheets by Raman spectroscopy, transmission electron microscopy and electron diffraction. Our method results in a monolayer yield of approximately 1 wt%, which could potentially be improved to 7-12 wt% with further processing. The absence of defects or oxides is confirmed by X-ray photoelectron, infrared and Raman spectroscopies. We are able to produce semi-transparent conducting films and conducting composites. Solution processing of graphene opens up a range of potential large-area applications, from device and sensor fabrication to liquid-phase chemistry.

5,600 citations


Journal ArticleDOI
TL;DR: cclib, as discussed by the authors, is a platform for the development of package-independent computational chemistry algorithms; it automatically detects and parses output files and converts the extracted information into a standard internal representation.
Abstract: There are now a wide variety of packages for electronic structure calculations, each of which differs in the algorithms implemented and the output format. Many computational chemistry algorithms are only available to users of a particular package despite being generally applicable to the results of calculations by any package. Here we present cclib, a platform for the development of package-independent computational chemistry algorithms. Files from several versions of multiple electronic structure packages are automatically detected, parsed, and the extracted information converted to a standard internal representation. A number of population analysis algorithms have been implemented as a proof of principle. In addition, cclib is currently used as an input filter for two GUI applications that analyze output files: PyMOlyze and GaussSum. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2008

4,451 citations
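The detect/parse/convert flow described in the cclib abstract can be sketched as a small dispatcher. This is a hypothetical miniature, not cclib's real API, and the marker strings are invented for illustration:

```python
def detect_package(text):
    """Guess which electronic-structure package wrote this output.
    The marker strings here are illustrative, not cclib's real ones."""
    if "Gaussian, Inc." in text:
        return "gaussian"
    if "GAMESS" in text:
        return "gamess"
    return None

def parse_to_standard(text):
    """Dispatch to a package-specific parser and return one shared
    representation, so downstream algorithms stay package-independent."""
    parsers = {
        "gaussian": lambda t: {"package": "gaussian"},
        "gamess": lambda t: {"package": "gamess"},
    }
    package = detect_package(text)
    if package is None:
        raise ValueError("unrecognized electronic-structure output")
    return parsers[package](text)
```

cclib itself goes much further, exposing the parsed quantities (geometries, orbital energies, populations) as attributes of the standard representation that algorithms such as population analysis then consume.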


Journal ArticleDOI
06 Nov 2008-Nature
TL;DR: An approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost is reported, effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications.
Abstract: DNA sequence information underpins genetic research, enabling discoveries of important biological or medical benefit. Sequencing projects have traditionally used long (400-800 base pair) reads, but the existence of reference sequences for the human and many other genomes makes it possible to develop new, fast approaches to re-sequencing, whereby shorter reads are compared to a reference to identify intraspecies genetic variation. Here we report an approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost. Single molecules of DNA are attached to a flat surface, amplified in situ and used as templates for synthetic sequencing with fluorescent reversible terminator deoxyribonucleotides. Images of the surface are analysed to generate high-quality sequence. We demonstrate application of this approach to human genome sequencing on flow-sorted X chromosomes and then scale the approach to determine the genome sequence of a male Yoruba from Ibadan, Nigeria. We build an accurate consensus sequence from >30x average depth of paired 35-base reads. We characterize four million single-nucleotide polymorphisms and four hundred thousand structural variants, many of which were previously unknown. Our approach is effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications.

3,802 citations
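Building a consensus from >30× depth of aligned short reads is, at its core, a per-position vote. A toy majority-vote version, assuming reads already aligned to the same reference window (real pipelines weight votes by base quality and handle paired reads):

```python
from collections import Counter

def consensus(aligned_reads):
    """Majority-vote consensus over reads aligned to one reference window;
    '-' marks positions a read does not cover, 'N' means no coverage."""
    length = max(len(r) for r in aligned_reads)
    calls = []
    for i in range(length):
        bases = Counter(r[i] for r in aligned_reads
                        if i < len(r) and r[i] != '-')
        calls.append(bases.most_common(1)[0][0] if bases else 'N')
    return "".join(calls)
```

At 30× average depth, a sequencing error in any single read is easily outvoted, which is why deep short-read coverage can yield an accurate consensus.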


Journal ArticleDOI
TL;DR: This work demonstrates a top-gated graphene transistor that is able to reach doping levels of up to 5 × 10¹³ cm⁻², which is much higher than those previously reported.
Abstract: The recent discovery of graphene has led to many advances in two-dimensional physics and devices. The graphene devices fabricated so far have relied on $SiO_2$ back gating. Electrochemical top gating is widely used for polymer transistors, and has also been successfully applied to carbon nanotubes. Here we demonstrate a top-gated graphene transistor that is able to reach doping levels of up to $5\times 10^{13} cm^{-2}$, which is much higher than those previously reported. Such high doping levels are possible because the nanometre-thick Debye layer in the solid polymer electrolyte gate provides a much higher gate capacitance than the commonly used $SiO_2$ back gate, which is usually about 300 nm thick. In situ Raman measurements monitor the doping. The G peak stiffens and sharpens for both electron and hole doping, but the 2D peak shows a different response to holes and electrons. The ratio of the intensities of the G and 2D peaks shows a strong dependence on doping, making it a sensitive parameter to monitor the doping.

3,254 citations
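The capacitance argument in the abstract follows from the parallel-plate formula C/A = ε₀εᵣ/d: a nanometre-thick Debye layer beats a 300 nm oxide by roughly the ratio of thicknesses. A quick check with assumed numbers (the εᵣ values and the 2 nm Debye length below are illustrative, not taken from the paper):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_area(eps_r, thickness_m):
    """Parallel-plate capacitance per unit area: C/A = eps0 * eps_r / d."""
    return EPS0 * eps_r / thickness_m

# 300 nm SiO2 back gate (eps_r ~ 3.9) versus a ~2 nm Debye layer
# (eps_r ~ 5 is an assumed, illustrative value for the electrolyte)
c_sio2 = capacitance_per_area(3.9, 300e-9)   # F/m^2
c_debye = capacitance_per_area(5.0, 2e-9)    # F/m^2
ratio = c_debye / c_sio2                     # roughly two orders of magnitude
```

The roughly hundredfold larger gate capacitance is what lets the electrolyte gate reach doping levels far beyond an oxide back gate at comparable voltages.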


Book
01 Jan 2008
TL;DR: The authors provide both scientists and engineers with all the information they need to understand composite materials, covering their underlying science and technological usage, including surface coatings, highly porous materials, bio-composites and nano-composites, as well as thoroughly revised chapters on fibres and matrices.
Abstract: This fully expanded and updated edition provides both scientists and engineers with all the information they need to understand composite materials, covering their underlying science and technological usage. It includes four completely new chapters on surface coatings, highly porous materials, bio-composites and nano-composites, as well as thoroughly revised chapters on fibres and matrices, the design, fabrication and production of composites, mechanical and thermal properties, and industry applications. Extensively expanded referencing engages readers with the latest research and industrial developments in the field, and increased coverage of essential background science makes this a valuable self-contained text. A comprehensive set of homework questions, with model answers available online, explains how calculations associated with the properties of composite materials should be tackled, and educational software accompanying the book is available online. An invaluable text for final-year undergraduates in materials science and engineering, and graduate students and researchers in academia and industry.

2,746 citations
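Stiffness calculations of the kind such a textbook covers often start from the Voigt and Reuss bounds. A sketch using the standard rule-of-mixtures formulas (the fibre/matrix moduli below are typical illustrative values, not taken from the book):

```python
def rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    """Axial (Voigt) stiffness estimate: E = Vf*Ef + (1 - Vf)*Em."""
    return v_fibre * e_fibre + (1.0 - v_fibre) * e_matrix

def inverse_rule_of_mixtures(e_fibre, e_matrix, v_fibre):
    """Transverse (Reuss) estimate: 1/E = Vf/Ef + (1 - Vf)/Em."""
    return 1.0 / (v_fibre / e_fibre + (1.0 - v_fibre) / e_matrix)

# Illustrative values: carbon fibre (~230 GPa) in epoxy (~3 GPa), Vf = 0.6
axial = rule_of_mixtures(230.0, 3.0, 0.6)
transverse = inverse_rule_of_mixtures(230.0, 3.0, 0.6)
```

The large gap between the axial and transverse estimates is the basic reason laminates stack plies at several orientations.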


Journal ArticleDOI
TL;DR: The results strongly confirm 11 previously reported loci and provide genome-wide significant evidence for 21 additional loci, including the regions containing STAT3, JAK2, ICOSLG, CDKAL1 and ITLN1, which offer promise for informed therapeutic development.
Abstract: Several risk factors for Crohn's disease have been identified in recent genome-wide association studies. To advance gene discovery further, we combined data from three studies on Crohn's disease (a total of 3,230 cases and 4,829 controls) and carried out replication in 3,664 independent cases with a mixture of population-based and family-based controls. The results strongly confirm 11 previously reported loci and provide genome-wide significant evidence for 21 additional loci, including the regions containing STAT3, JAK2, ICOSLG, CDKAL1 and ITLN1. The expanded molecular understanding of the basis of this disease offers promise for informed therapeutic development.

2,584 citations


Book ChapterDOI
30 May 2008
TL;DR: Comparisons can now be made between the kinetics of the ionic conductances as described by Hodgkin & Huxley, and the steady-state distribution and kinetic changes of the charged controlling particles, which should lead to useful conclusions about the intramolecular organization of the sodium channels and the conformational changes that take place under the influence of the electric field.
Abstract: The ionic channels in excitable membranes are of two classes: those that open and close when the membrane potential alters and those that respond to the release of an appropriate chemical transmitter. The former are responsible for the conduction of impulses in nerve and muscle fibres and the latter for synaptic transmission. It is now clear that the sodium and potassium channels in electrically excitable membranes are functionally distinct, since each can be blocked without affecting the behaviour of the other. It has recently proved possible to study, in the voltage-clamped squid giant axon, the movements of the mobile charges or dipoles that form the voltage-sensitive portion of the sodium channels, which give rise to the so-called 'gating' current. Detailed comparisons can now be made between the kinetics of the ionic conductances as described by Hodgkin & Huxley, and the steady-state distribution and kinetics of the charged controlling particles, which should lead to useful conclusions about the intramolecular organization of the sodium channels and the conformational changes that take place under the influence of the electric field. There is as yet little information about the chemical nature of the electrically excitable channels, but significant progress has been made towards the isolation and characterization of the acetylcholine receptors in muscle and electric organ.

2,489 citations
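For reference, the Hodgkin–Huxley kinetics mentioned above describe each conductance through first-order gating variables; in their standard notation (a textbook summary, not taken from this chapter):

```latex
\frac{dn}{dt} = \alpha_n(V)\,(1 - n) - \beta_n(V)\,n,
\qquad g_{K} = \bar{g}_{K}\,n^{4},
\qquad g_{Na} = \bar{g}_{Na}\,m^{3}h
```

The gating currents discussed in the chapter probe the movement of the charges whose redistribution sets the voltage dependence of the rate constants α and β.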


Journal ArticleDOI
TL;DR: How noise affects neuronal networks and the principles the nervous system applies to counter detrimental effects of noise are highlighted, and noise's potential benefits are discussed.
Abstract: Noise — random disturbances of signals — poses a fundamental problem for information processing and affects all aspects of nervous-system function. However, the nature, amount and impact of noise in the nervous system have only recently been addressed in a quantitative manner. Experimental and computational methods have shown that multiple noise sources contribute to cellular and behavioural trial-to-trial variability. We review the sources of noise in the nervous system, from the molecular to the behavioural level, and show how noise contributes to trial-to-trial variability. We highlight how noise affects neuronal networks and the principles the nervous system applies to counter detrimental effects of noise, and briefly discuss noise's potential benefits.

2,350 citations


Journal ArticleDOI
TL;DR: A set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes are presented.
Abstract: Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.

2,310 citations


Journal ArticleDOI
A. A. Alves, L. M. Andrade Filho1, A. F. Barbosa, Ignacio Bediaga  +886 moreInstitutions (64)
TL;DR: The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva).
Abstract: The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). The initial configuration and expected performance of the detector and associated systems, as established by test beam measurements and simulation studies, are described.

Journal ArticleDOI
TL;DR: The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput, and the gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
Abstract: This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that using COPE at the forwarding layer, without modifying routing and higher layers, increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.
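The coding step COPE performs at a router is a bitwise XOR of native packets; each neighbour then cancels the packets it already has to recover the one it wants. A minimal sketch of that primitive only (COPE itself also handles opportunistic listening, reception reports, scheduling, and padding of unequal-length packets):

```python
def xor_bytes(a, b):
    """XOR two equal-length packets -- the coding primitive in COPE."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay holding p1 (wanted by Bob) and p2 (wanted by Alice) broadcasts
# one coded packet instead of two native transmissions.
p1, p2 = b"hello", b"world"
coded = xor_bytes(p1, p2)

# Alice overheard p1 earlier, so she cancels it to recover p2.
recovered = xor_bytes(coded, p1)
```

One broadcast thus replaces two unicast transmissions, which is where the throughput gain comes from.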

Journal ArticleDOI
Eleftheria Zeggini1, Laura J. Scott2, Richa Saxena, Benjamin F. Voight, Jonathan Marchini3, T Hu2, de Bakker Piw.4, de Bakker Piw.5, de Bakker Piw.6, Gonçalo R. Abecasis2, Peter Almgren7, Gregers S. Andersen8, Kristin Ardlie4, Kristina Bengtsson Boström, Richard N. Bergman9, Lori L. Bonnycastle10, Knut Borch-Johnsen11, Knut Borch-Johnsen8, Noël P. Burtt4, H Chen12, Peter S. Chines10, Mark J. Daly, P Deodhar10, Ding C-J.2, Doney Asf.13, William L. Duren2, Katherine S. Elliott1, Mike Erdos10, Timothy M. Frayling14, Rachel M. Freathy14, Lauren Gianniny4, Harald Grallert, Niels Grarup8, Christopher J. Groves3, Candace Guiducci4, Torben Hansen8, Christian Herder15, Graham A. Hitman16, Thomas Edward Hughes12, Bo Isomaa, Anne U. Jackson2, Torben Jørgensen17, Augustine Kong18, Kari Kubalanza10, Finny G Kuruvilla4, Finny G Kuruvilla5, Johanna Kuusisto19, Claudia Langenberg20, Hana Lango14, Torsten Lauritzen21, Yun Li2, Cecilia M. Lindgren1, Cecilia M. Lindgren3, Valeriya Lyssenko7, Amanda F. Marvelle22, Christine Meisinger, Kristian Midthjell23, Karen L. Mohlke22, Mario A. Morken10, Andrew D. Morris13, Narisu Narisu10, Peter M. Nilsson7, Katharine R. Owen3, Palmer Cna.13, Felicity Payne24, Perry Jrb.14, E Pettersen23, Carl Platou23, Inga Prokopenko1, Inga Prokopenko3, Lu Qi5, Lu Qi6, L Qin22, Nigel W. Rayner1, Nigel W. Rayner3, Matthew G. Rees10, J J Roix12, A Sandbaek11, Beverley M. Shields, Marketa Sjögren7, Valgerdur Steinthorsdottir18, Heather M. Stringham2, Amy J. Swift10, Gudmar Thorleifsson18, Unnur Thorsteinsdottir18, Nicholas J. Timpson25, Nicholas J. Timpson1, Tiinamaija Tuomi26, Jaakko Tuomilehto26, Mark Walker27, Richard M. Watanabe9, Michael N. Weedon14, Cristen J. Willer2, Thomas Illig, Kristian Hveem23, Frank B. Hu6, Frank B. Hu5, Markku Laakso19, Kari Stefansson18, Oluf Pedersen8, Oluf Pedersen11, Nicholas J. Wareham20, Inês Barroso24, Andrew T. Hattersley14, Francis S. Collins10, Leif Groop7, Leif Groop26, Mark I. McCarthy1, Mark I. 
McCarthy3, Michael Boehnke2, David Altshuler 
TL;DR: The results illustrate the value of large discovery and follow-up samples for gaining further insights into the inherited basis of T2D, and detect at least six previously unknown loci with robust evidence for association.
Abstract: Genome-wide association (GWA) studies have identified multiple loci at which common variants modestly but reproducibly influence risk of type 2 diabetes (T2D). Established associations to common and rare variants explain only a small proportion of the heritability of T2D. As previously published analyses had limited power to identify variants with modest effects, we carried out meta-analysis of three T2D GWA scans comprising 10,128 individuals of European descent and approximately 2.2 million SNPs (directly genotyped and imputed), followed by replication testing in an independent sample with an effective sample size of up to 53,975. We detected at least six previously unknown loci with robust evidence for association, including the JAZF1 (P = 5.0 x 10(-14)), CDC123-CAMK1D (P = 1.2 x 10(-10)), TSPAN8-LGR5 (P = 1.1 x 10(-9)), THADA (P = 1.1 x 10(-9)), ADAMTS9 (P = 1.2 x 10(-8)) and NOTCH2 (P = 4.1 x 10(-8)) gene regions. Our results illustrate the value of large discovery and follow-up samples for gaining further insights into the inherited basis of T2D.
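Meta-analyses of GWA scans like this one typically combine per-study effect estimates by fixed-effect inverse-variance weighting. A generic sketch with made-up numbers (real pipelines add imputation-quality weights and genomic-control corrections):

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect meta-analysis: weight each study's effect by 1/SE^2."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Made-up per-study log-odds ratios and standard errors for one SNP
beta, se = inverse_variance_meta([0.11, 0.14, 0.09], [0.03, 0.04, 0.05])
z = beta / se  # combined z-score, compared against a genome-wide threshold
```

Pooling shrinks the standard error below that of any single scan, which is how modest effects such as those at JAZF1 or NOTCH2 reach genome-wide significance.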

Journal ArticleDOI
TL;DR: Herwig++ as mentioned in this paper is a general-purpose Monte Carlo event generator for the simulation of hard lepton-lepton, leptonhadron and hadron-hadron collisions, with special emphasis on the correct description of radiation from heavy particles.
Abstract: In this paper we describe Herwig++ version 2.3, a general-purpose Monte Carlo event generator for the simulation of hard lepton-lepton, lepton-hadron and hadron-hadron collisions. A number of important hard scattering processes are available, together with an interface via the Les Houches Accord to specialized matrix element generators for additional processes. The simulation of Beyond the Standard Model (BSM) physics includes a range of models and allows new models to be added by encoding the Feynman rules of the model. The parton-shower approach is used to simulate initial- and final-state QCD radiation, including colour coherence effects, with special emphasis on the correct description of radiation from heavy particles. The underlying event is simulated using an eikonal multiple parton-parton scattering model. The formation of hadrons from the quarks and gluons produced in the parton shower is described using the cluster hadronization model. Hadron decays are simulated using matrix elements, where possible including spin correlations and off-shell effects. The program and additional information are available from http://projects.hepforge.org/herwig.

Journal ArticleDOI
TL;DR: In this article, a standardized version of Swamy's test of slope homogeneity is proposed for panel data models where the cross-section dimension (N) could be large relative to the time series dimension (T).
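For context, standardized slope-homogeneity statistics of this kind take the generic form below (a sketch of the Pesaran–Yamagata-style construction; the paper's exact centering and scaling terms may differ):

```latex
\hat{\Delta} = \sqrt{N}\left(\frac{N^{-1}\hat{S} - k}{\sqrt{2k}}\right)
\;\xrightarrow{\;d\;}\; \mathcal{N}(0,1)
```

where Ŝ is Swamy's dispersion statistic and k the number of slope coefficients, with the limit taken as (N, T) grow jointly.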

Journal ArticleDOI
TL;DR: In this article, the authors deal with the fundamental understanding of the process and its metallurgical consequences, focusing on heat generation, heat transfer and plastic flow during welding, elements of tool design, understanding defect formation and the structure and properties of the welded materials.

Journal ArticleDOI
TL;DR: It is suggested that both general adiposity and abdominal adiposity are associated with the risk of death; the findings support the use of waist circumference or waist-to-hip ratio in addition to BMI in assessing the risk of death.
Abstract: Background Previous studies have relied predominantly on the body-mass index (BMI, the weight in kilograms divided by the square of the height in meters) to assess the association of adiposity with the risk of death, but few have examined whether the distribution of body fat contributes to the prediction of death. Methods We examined the association of BMI, waist circumference, and waist-to-hip ratio with the risk of death among 359,387 participants from nine countries in the European Prospective Investigation into Cancer and Nutrition (EPIC). We used a Cox regression analysis, with age as the time variable, and stratified the models according to study center and age at recruitment, with further adjustment for educational level, smoking status, alcohol consumption, physical activity, and height. Results During a mean follow-up of 9.7 years, 14,723 participants died. The lowest risks of death related to BMI were observed at a BMI of 25.3 for men and 24.3 for women. After adjustment for BMI, waist circumfer...
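The Cox regression used above models the hazard of death as a baseline hazard scaled by the covariates; with stratification by study center and age at recruitment, each stratum s keeps its own baseline (the standard form of the model, not a formula specific to this paper):

```latex
h(t \mid x_i) = h_{0s}(t)\,\exp\!\left(\beta^\top x_i\right)
```

Here x_i collects BMI, waist circumference, and the adjustment variables (education, smoking, alcohol, physical activity, height), and the exponentiated coefficients are the reported hazard ratios.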

Proceedings ArticleDOI
26 May 2008
TL;DR: BUBBLE is designed and evaluated, a novel social-based forwarding algorithm that utilizes the aforementioned metrics to enhance delivery performance and empirically shows that BUBBLE can substantially improve forwarding performance compared to a number of previously proposed algorithms including the benchmarking history-based PROPHET algorithm, and social- based forwarding SimBet algorithm.
Abstract: In this paper we seek to improve our understanding of human mobility in terms of social structures, and to use these structures in the design of forwarding algorithms for Pocket Switched Networks (PSNs). Taking human mobility traces from the real world, we discover that human interaction is heterogeneous both in terms of hubs (popular individuals) and groups or communities. We propose a social-based forwarding algorithm, BUBBLE, which is shown empirically to improve the forwarding efficiency significantly compared to oblivious forwarding schemes and to the PROPHET algorithm. We also show how this algorithm can be implemented in a distributed way, which demonstrates that it is applicable in the decentralised environment of PSNs.
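The BUBBLE decision rule can be caricatured as: bubble a message up the global popularity ranking until it reaches the destination's community, then bubble it up the community-local ranking. A hypothetical sketch of that decision only (the real algorithm derives communities via k-clique detection and estimates centrality from contact traces):

```python
def bubble_forward(carrier, candidate, dest_community, global_rank, local_rank):
    """Hand the message over if the encountered node is better placed:
    inside the destination's community when we are not, or higher in the
    relevant (local or global) popularity ranking."""
    carrier_in = carrier in dest_community
    candidate_in = candidate in dest_community
    if candidate_in and not carrier_in:
        return True  # message enters the destination's community
    if carrier_in:
        # inside the community, only local popularity matters
        return candidate_in and local_rank[candidate] > local_rank[carrier]
    # outside the community, bubble up by global popularity
    return global_rank[candidate] > global_rank[carrier]
```

Because each node only needs its own rankings and community membership, the rule runs without any global routing state, which is what makes it viable in a decentralised PSN.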

Journal ArticleDOI
TL;DR: It is shown that double FYVE domain–containing protein 1, a PI(3)P-binding protein with unusual localization on ER and Golgi membranes, translocates in response to amino acid starvation to a punctate compartment partially colocalized with autophagosomal proteins, which may be involved in Autophagosome biogenesis.
Abstract: Autophagy is the engulfment of cytosol and organelles by double-membrane vesicles termed autophagosomes. Autophagosome formation is known to require phosphatidylinositol 3-phosphate (PI(3)P) and occurs near the endoplasmic reticulum (ER), but the exact mechanisms are unknown. We show that double FYVE domain–containing protein 1, a PI(3)P-binding protein with unusual localization on ER and Golgi membranes, translocates in response to amino acid starvation to a punctate compartment partially colocalized with autophagosomal proteins. Translocation is dependent on Vps34 and beclin function. Other PI(3)P-binding probes targeted to the ER show the same starvation-induced translocation that is dependent on PI(3)P formation and recognition. Live imaging experiments show that this punctate compartment forms near Vps34-containing vesicles, is in dynamic equilibrium with the ER, and provides a membrane platform for accumulation of autophagosomal proteins, expansion of autophagosomal membranes, and emergence of fully formed autophagosomes. This PI(3)P-enriched compartment may be involved in autophagosome biogenesis. Its dynamic relationship with the ER is consistent with the idea that the ER may provide important components for autophagosome formation.

Journal ArticleDOI
Jennifer K. Adelman-McCarthy1, Marcel A. Agüeros2, S. Allam1, S. Allam3  +170 moreInstitutions (65)
TL;DR: The Sixth Data Release of the Sloan Digital Sky Survey (SDSS) as discussed by the authors contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes.
Abstract: This paper describes the Sixth Data Release of the Sloan Digital Sky Survey. With this data release, the imaging of the northern Galactic cap is now complete. The survey contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes. The survey also includes 1.27 million spectra of stars, galaxies, quasars, and blank sky (for sky subtraction) selected over 7425 deg². This release includes much more stellar spectroscopy than was available in previous data releases and also includes detailed estimates of stellar temperatures, gravities, and metallicities. The results of improved photometric calibration are now available, with uncertainties of roughly 1% in g, r, i, and z, and 2% in u, substantially better than the uncertainties in previous data releases. The spectra in this data release have improved wavelength and flux calibration, especially in the extreme blue and extreme red, leading to the qualitatively better determination of stellar types and radial velocities. The spectrophotometric fluxes are now tied to point-spread function magnitudes of stars rather than fiber magnitudes. This gives more robust results in the presence of seeing variations, but also implies a change in the spectrophotometric scale, which is now brighter by roughly 0.35 mag. Systematic errors in the velocity dispersions of galaxies have been fixed, and the results of two independent codes for determining spectral classifications and redshifts are made available. Additional spectral outputs are made available, including calibrated spectra from individual 15 minute exposures and the sky spectrum subtracted from each exposure. We also quantify a recently recognized underestimation of the brightnesses of galaxies of large angular extent due to poor sky subtraction; the bias can exceed 0.2 mag for galaxies brighter than r = 14 mag.

Journal ArticleDOI
TL;DR: In this paper, a list of techniques and definitions was generated from techniques published in two systematic reviews, supplemented by "brainstorming" and a systematic search of nine textbooks used in training applied psychologists.
Abstract: Theory provides a helpful basis for designing interventions to change behaviour but offers little guidance on how to do this. This paper aims to illustrate methods for developing an extensive list of behaviour change techniques (with definitions) and for linking techniques to theoretical constructs. A list of techniques and definitions was generated from techniques published in two systematic reviews, supplemented by “brainstorming” and a systematic search of nine textbooks used in training applied psychologists. Inter-rater reliability of extracting the techniques and definitions from the textbooks was assessed. Four experts judged which techniques would be effective in changing 11 theoretical constructs associated with behaviour change. Thirty-five techniques identified in the reviews were extended to 53 by brainstorming and to 137 by consulting textbooks. Agreement for the 53 definitions was 74.7 per cent (15.4% cells completed and 59.3% cells empty for both raters). Agreement about the link between the 35 techniques and theoretical constructs was 71.7 per cent of 385 judgments (12.2% agreement that effective and 59.5% agreement that not effective). This preliminary work demonstrates the possibility of developing a comprehensive, reliable taxonomy of techniques linked to theory. Further refinement is needed to eliminate redundancies, resolve uncertainties, and complete technique definitions.

Journal ArticleDOI
TL;DR: A new compilation of Type Ia supernovae (SNe Ia), a new data set of low-redshift nearby-Hubble-flow SNe, and new analysis procedures to work with these heterogeneous compilations is presented in this article.
Abstract: We present a new compilation of Type Ia supernovae (SNe Ia), a new data set of low-redshift nearby-Hubble-flow SNe, and new analysis procedures to work with these heterogeneous compilations. This "Union" compilation of 414 SNe Ia, which reduces to 307 SNe after selection cuts, includes the recent large samples of SNe Ia from the Supernova Legacy Survey and ESSENCE Survey, the older data sets, as well as the recently extended data set of distant supernovae observed with the Hubble Space Telescope (HST). A single, consistent, and blind analysis procedure is used for all the various SN Ia subsamples, and a new procedure is implemented that consistently weights the heterogeneous data sets and rejects outliers. We present the latest results from this Union compilation and discuss the cosmological constraints from this new compilation and its combination with other cosmological measurements (CMB and BAO). The constraint we obtain from supernovae on the dark energy density is $\Omega_\Lambda = 0.713^{+0.027}_{-0.029}\,\mathrm{(stat)}^{+0.036}_{-0.039}\,\mathrm{(sys)}$, for a flat, ΛCDM universe. Assuming a constant equation of state parameter, w, the combined constraints from SNe, BAO, and CMB give $w = -0.969^{+0.059}_{-0.063}\,\mathrm{(stat)}^{+0.063}_{-0.066}\,\mathrm{(sys)}$. While our results are consistent with a cosmological constant, we obtain only relatively weak constraints on a w that varies with redshift. In particular, the current SN data do not yet significantly constrain w at z > 1. With the addition of our new nearby Hubble-flow SNe Ia, these resulting cosmological constraints are currently the tightest available.
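Schematically, constraints of this kind come from fitting observed distance moduli against a cosmological model (the generic form; the paper's actual fit also includes light-curve parameters and systematic-error terms):

```latex
\mu(z) = 5\log_{10}\frac{d_L(z)}{10\,\mathrm{pc}}, \qquad
\chi^2 = \sum_i \frac{\left[\mu_i - \mu\!\left(z_i;\, \Omega_m, \Omega_\Lambda, w\right)\right]^2}{\sigma_i^2}
```

Minimizing χ² over the cosmological parameters, jointly with the CMB and BAO likelihoods, yields the quoted confidence regions for Ω_Λ and w.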


Journal ArticleDOI
Jan Schipper1, Jan Schipper2, Janice Chanson1, Janice Chanson2, Federica Chiozza3, Neil A. Cox1, Neil A. Cox2, Michael R. Hoffmann1, Michael R. Hoffmann2, Vineet Katariya1, John F. Lamoreux4, John F. Lamoreux1, Ana S. L. Rodrigues5, Ana S. L. Rodrigues6, Simon N. Stuart2, Simon N. Stuart1, Helen J. Temple1, Jonathan E. M. Baillie7, Luigi Boitani3, Thomas E. Lacher4, Thomas E. Lacher2, Russell A. Mittermeier, Andrew T. Smith8, Daniel Absolon, John M. Aguiar4, John M. Aguiar2, Giovanni Amori, Noura Bakkour2, Noura Bakkour9, Ricardo Baldi10, Ricardo Baldi11, Richard J. Berridge, Jon Bielby12, Jon Bielby7, Patricia Ann Black13, Julian Blanc, Thomas M. Brooks14, Thomas M. Brooks15, Thomas M. Brooks2, James Burton16, James Burton17, Thomas M. Butynski18, Gianluca Catullo, Roselle Chapman, Zoe Cokeliss7, Ben Collen7, Jim Conroy, Justin Cooke, Gustavo A. B. da Fonseca19, Gustavo A. B. da Fonseca20, Andrew E. Derocher21, Holly T. Dublin, J. W. Duckworth11, Louise H. Emmons22, Richard H. Emslie1, Marco Festa-Bianchet23, Matthew N. Foster, Sabrina Foster24, David L. Garshelis25, C. Cormack Gates26, Mariano Gimenez-Dixon, Susana González, José F. González-Maya, Tatjana C. Good27, Geoffrey Hammerson28, Philip S. Hammond29, D. C. D. Happold30, Meredith Happold30, John Hare, Richard B. Harris31, Clare E. Hawkins32, Clare E. Hawkins15, Mandy Haywood33, Lawrence R. Heaney34, Simon Hedges11, Kristofer M. Helgen22, Craig Hilton-Taylor1, Syed Ainul Hussain35, Nobuo Ishii36, Thomas Jefferson37, Richard K. B. Jenkins38, Charlotte H. Johnston8, Mark Keith39, Jonathan Kingdon40, David Knox2, Kit M. Kovacs41, Kit M. Kovacs42, Penny F. Langhammer8, Kristin Leus43, Rebecca L. Lewison44, Gabriela Lichtenstein, Lloyd F. Lowry45, Zoe Macavoy12, Georgina M. Mace12, David Mallon46, Monica Masi, Meghan W. McKnight, Rodrigo A. Medellín47, Patricia Medici48, G. Mills, Patricia D. Moehlman, Sanjay Molur, Arturo Mora1, Kristin Nowell, John F. Oates49, Wanda Olech, William R.L. 
Oliver, Monik Oprea22, Bruce D. Patterson34, William F. Perrin37, Beth Polidoro1, Caroline M. Pollock1, Abigail Powel50, Yelizaveta Protas9, Paul A. Racey38, Jim Ragle1, Pavithra Ramani24, Galen B. Rathbun51, Randall R. Reeves, Stephen B. Reilly37, John E. Reynolds52, Carlo Rondinini3, Ruth Grace Rosell-Ambal2, Monica Rulli, Anthony B. Rylands, Simona Savini, Cody J. Schank24, Wes Sechrest24, Caryn Self-Sullivan, Alan Shoemaker1, Claudio Sillero-Zubiri40, Naamal De Silva, David E. Smith24, Chelmala Srinivasulu53, P. J. Stephenson, Nico van Strien54, Bibhab Kumar Talukdar55, Barbara L. Taylor37, Rob Timmins, Diego G. Tirira, Marcelo F. Tognelli56, Marcelo F. Tognelli10, Katerina Tsytsulina, Liza M. Veiga57, Jean-Christophe Vié1, Elizabeth A. Williamson58, Sarah A. Wyatt, Yan Xie, Bruce E. Young28 
International Union for Conservation of Nature and Natural Resources1, Conservation International2, Sapienza University of Rome3, Texas A&M University4, Instituto Superior Técnico5, University of Cambridge6, Zoological Society of London7, Arizona State University8, Columbia University9, National Scientific and Technical Research Council10, Wildlife Conservation Society11, Imperial College London12, National University of Tucumán13, University of the Philippines Los Baños14, University of Tasmania15, University of Edinburgh16, Earthwatch Institute17, Drexel University18, Global Environment Facility19, Universidade Federal de Minas Gerais20, University of Alberta21, Smithsonian Institution22, Université de Sherbrooke23, University of Virginia24, Minnesota Department of Natural Resources25, University of Calgary26, James Cook University27, NatureServe28, University of St Andrews29, Australian National University30, University of Montana31, General Post Office32, University of Otago33, Field Museum of Natural History34, Wildlife Institute of India35, Tokyo Woman's Christian University36, National Oceanic and Atmospheric Administration37, University of Aberdeen38, University of the Witwatersrand39, University of Oxford40, Norwegian Polar Institute41, University Centre in Svalbard42, Copenhagen Zoo43, San Diego State University44, University of Alaska Fairbanks45, Manchester Metropolitan University46, National Autonomous University of Mexico47, University of Kent48, City University of New York49, Victoria University of Wellington50, California Academy of Sciences51, Mote Marine Laboratory52, Osmania University53, White Oak Conservation54, Aaranyak55, University of California, Davis56, Museu Paraense Emílio Goeldi57, University of Stirling58
10 Oct 2008-Science
TL;DR: In this paper, the authors present a comprehensive assessment of the conservation status and distribution of the world's mammals, including marine mammals, using data collected by 1700+ experts, covering all 5487 species.
Abstract: Knowledge of mammalian diversity is still surprisingly disparate, both regionally and taxonomically. Here, we present a comprehensive assessment of the conservation status and distribution of the world's mammals. Data, compiled by 1700+ experts, cover all 5487 species, including marine mammals. Global macroecological patterns are very different for land and marine species but suggest common mechanisms driving diversity and endemism across systems. Compared with land species, threat levels are higher among marine mammals, driven by different processes (accidental mortality and pollution, rather than habitat loss), and are spatially distinct (peaking in northern oceans, rather than in Southeast Asia). Marine mammals are also disproportionately poorly known. These data are made freely available to support further scientific developments and conservation action.

Journal ArticleDOI
TL;DR: The HI Nearby Galaxy Survey (THINGS) as discussed by the authors is a high spectral (≤5.2 km s⁻¹) and spatial (∼6″) resolution survey of HI emission in 34 nearby galaxies obtained using the NRAO Very Large Array (VLA).
Abstract: We present “The HI Nearby Galaxy Survey (THINGS)”, a high spectral (≤5.2 km s⁻¹) and spatial (∼6″) resolution survey of HI emission in 34 nearby galaxies obtained using the NRAO Very Large Array (VLA). The overarching scientific goal of THINGS is to investigate fundamental characteristics of the interstellar medium (ISM) related to galaxy morphology, star formation and mass distribution across the Hubble sequence. Unique characteristics of the THINGS database are the homogeneous sensitivity as well as spatial and velocity resolution of the HI data, which is at the limit of what can be achieved with the VLA for a significant number of galaxies. A sample of 34 objects at distances 2 < D < 15 Mpc (resulting in linear resolutions of ∼100 to 500 pc) are targeted in THINGS, covering a wide range of star formation rates (∼10⁻³ to 6 M⊙ yr⁻¹), total HI masses MHI (0.01 to 14 × 10⁹ M⊙), absolute luminosities MB (−11.5 to −21.7 mag) and metallicities (7.5 to 9.2 in units of 12+log[O/H]). We describe the setup of the VLA observations, the data reduction procedures and the creation of the final THINGS data products. We present an atlas of the integrated HI maps, the velocity fields, the second moment (velocity dispersion) maps and individual channel maps of each THINGS galaxy. The THINGS data products are made publicly available through a dedicated webpage. Accompanying THINGS papers address issues such as the small-scale structure of the ISM, the (dark) matter distribution in THINGS galaxies, and the processes leading to star formation. Subject headings: surveys — galaxies: structure — galaxies: ISM — ISM: general — ISM: atoms — radio lines: galaxies
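The integrated HI maps, velocity fields and velocity dispersion maps described above are the standard zeroth, first and second moments of a spectral-line cube. A minimal sketch of that computation, assuming a simple (nchan, ny, nx) cube layout and uniform channel spacing; function and variable names are illustrative and not from the THINGS pipeline:

```python
import numpy as np

def moment_maps(cube, velocities):
    """Compute moment-0/1/2 maps from a (nchan, ny, nx) spectral cube.

    cube       : HI intensity per channel (arbitrary units)
    velocities : channel velocities in km/s, shape (nchan,), uniform spacing
    Returns the integrated-intensity map, the intensity-weighted mean
    velocity field, and the velocity dispersion map.
    """
    dv = abs(velocities[1] - velocities[0])           # channel width, km/s
    weight = cube.sum(axis=0)                         # total intensity per pixel
    mom0 = weight * dv                                # integrated HI map
    v = velocities[:, None, None]                     # broadcast over (ny, nx)
    mom1 = (cube * v).sum(axis=0) / weight            # velocity field
    mom2 = np.sqrt((cube * (v - mom1) ** 2).sum(axis=0) / weight)  # dispersion
    return mom0, mom1, mom2
```

For a pixel whose spectrum is a Gaussian line, mom1 recovers the line centre and mom2 the line width, which is why the second-moment map serves as a velocity dispersion map.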

Journal ArticleDOI
TL;DR: A large proportion of hospitalised patients are at risk for VTE, but there is a low rate of appropriate prophylaxis, which reinforces the rationale for the use of hospital-wide strategies to assess patients' VTE risk and to implement measures that ensure that at-risk patients receive appropriate prophylaxis.

Journal ArticleDOI
TL;DR: The mechanisms of remyelination provide critical clues for regeneration biologists that help them to determine why remyelination fails in MS and in other demyelinating diseases and how it might be enhanced therapeutically.
Abstract: Remyelination involves reinvesting demyelinated axons with new myelin sheaths. In stark contrast to the situation that follows loss of neurons or axonal damage, remyelination in the CNS can be a highly effective regenerative process. It is mediated by a population of precursor cells called oligodendrocyte precursor cells (OPCs), which are widely distributed throughout the adult CNS. However, despite its efficiency in experimental models and in some clinical diseases, remyelination is often inadequate in demyelinating diseases such as multiple sclerosis (MS), the most common demyelinating disease and a cause of neurological disability in young adults. The failure of remyelination has profound consequences for the health of axons, the progressive and irreversible loss of which accounts for the progressive nature of these diseases. The mechanisms of remyelination therefore provide critical clues for regeneration biologists that help them to determine why remyelination fails in MS and in other demyelinating diseases and how it might be enhanced therapeutically.

Journal ArticleDOI
TL;DR: The evidence that impulsivity is associated with addiction vulnerability is reviewed by considering three lines of evidence: studies of groups at high risk for development of SUDs; studies of pathological gamblers, in whom the harmful consequences of the addiction on brain structure are minimised; and genetic association studies linking impulsivity to genetic risk factors for addiction.

Journal ArticleDOI
TL;DR: In this paper, the authors propose bias-adjusted normal approximation versions of the Lagrange multiplier (NLM) test of error cross-section independence of Breusch and Pagan (1980) for panel models with strictly exogenous regressors and normal errors.
Abstract: This paper proposes bias-adjusted normal approximation versions of the Lagrange multiplier (NLM) test of error cross-section independence of Breusch and Pagan (1980) for panel models with strictly exogenous regressors and normal errors. The exact mean and variance of the Lagrange multiplier (LM) test statistic are provided for the purpose of the bias adjustments, and it is shown that the proposed tests have a standard normal distribution for fixed time series dimension (T) as the cross-section dimension (N) tends to infinity. Importantly, the proposed bias-adjusted NLM tests are consistent even when Pesaran’s (2004) CD test is inconsistent. Alternative bias-adjusted NLM tests, which are consistent under local error cross-section independence of any fixed order p, are also proposed. The finite sample behavior of the proposed tests is investigated and compared to the LM, NLM, and CD tests. It is shown that the bias-adjusted NLM tests successfully control the size, maintaining satisfactory power in panels with exogenous regressors and normal errors, even when the cross-section mean of the factor loadings is close to zero, where the CD test has little power. However, it is also shown that the bias-adjusted NLM tests are not as robust as the CD test to non-normal errors and/or in the presence of weakly exogenous regressors.
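For context, the classic Breusch and Pagan (1980) LM statistic that the paper's tests bias-adjust can be sketched in Python as follows. This is only the unadjusted chi-squared version built from pairwise residual correlations; the paper's fixed-T, normally distributed adjustment is not reproduced here, and the function name is illustrative:

```python
import numpy as np
from scipy import stats

def breusch_pagan_lm(residuals):
    """Classic Breusch-Pagan (1980) LM test of cross-section independence.

    residuals : (T, N) array of OLS residuals, one column per cross-section
                unit. Under H0 (no cross-section dependence), LM is
                asymptotically chi-squared with N(N-1)/2 degrees of freedom
                for fixed N as T -> infinity.
    Returns the LM statistic and its asymptotic p-value.
    """
    t_dim, n = residuals.shape
    corr = np.corrcoef(residuals, rowvar=False)   # N x N pairwise correlations
    iu = np.triu_indices(n, k=1)                  # all pairs i < j
    lm = t_dim * np.sum(corr[iu] ** 2)            # LM = T * sum of rho_ij^2
    df = n * (n - 1) // 2
    p_value = stats.chi2.sf(lm, df)
    return lm, p_value
```

The chi-squared approximation deteriorates when T is small relative to N, which is the regime the paper's bias-adjusted, standard-normal versions are designed to handle.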

Journal ArticleDOI
TL;DR: In this article, the authors present empirical evidence on the range and extent of servitization in manufacturing, which suggests that manufacturing firms in developed economies are adopting a range of service-oriented strategies.
Abstract: Commentators suggest that to survive in developed economies manufacturing firms have to move up the value chain, innovating and creating ever more sophisticated products and services, so they do not have to compete on the basis of cost. While this strategy is proving increasingly popular with policy makers and academics, there is limited empirical evidence to explore the extent to which it is being adopted in practice, and, if so, what the impact of this servitization of manufacturing might be. This paper seeks to fill a gap in the literature by presenting empirical evidence on the range and extent of servitization. Data are drawn from the OSIRIS database on 10,028 firms incorporated in 25 different countries. The paper presents an analysis of these data which suggests that: [i] manufacturing firms in developed economies are adopting a range of servitization strategies, with 12 separate approaches to servitization identified; [ii] these 12 categories can be used to extend the traditional three options for servitization (product oriented Product–Service Systems, use oriented Product–Service Systems and result oriented Product–Service Systems) by adding two new categories, “integration oriented Product–Service Systems” and “service oriented Product–Service Systems”; [iii] while the manufacturing firms that have servitized are larger than traditional manufacturing firms in terms of sales revenues, at the aggregate level they also generate lower profits as a percentage of sales; [iv] these findings are moderated by firm size (measured in terms of numbers of employees): in smaller firms servitization appears to pay off, while in larger firms it proves more problematic; and [v] there are some hidden risks associated with servitization: the sample contains a greater proportion of bankrupt servitized firms than would be expected.