Institution

Pacific Northwest National Laboratory

Facility · Richland, Washington, United States
About: Pacific Northwest National Laboratory is a facility organization based in Richland, Washington, United States. It is known for research contributions in the topics: Catalysis & Aerosol. The organization has 11,581 authors who have published 27,934 publications receiving 1,120,489 citations. The organization is also known as: PNL & PNNL.
Topics: Catalysis, Aerosol, Mass spectrometry, Ion, Adsorption


Papers
Journal ArticleDOI
TL;DR: DAnTE features selected normalization methods, missing value imputation algorithms, peptide-to-protein rollup methods, an extensive array of plotting functions and a comprehensive hypothesis-testing scheme that can handle unbalanced data and random effects.
Abstract: Summary: Data Analysis Tool Extension (DAnTE) is a statistical tool designed to address challenges associated with quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide-to-protein rollup methods, an extensive array of plotting functions and a comprehensive hypothesis-testing scheme that can handle unbalanced data and random effects. The graphical user interface (GUI) is designed to be very intuitive and user friendly. Availability: DAnTE may be downloaded free of charge at http://omics.pnl.gov/software/
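The imputation and peptide-to-protein rollup steps the abstract lists are standard bottom-up proteomics operations. The sketch below is illustrative only (it is not DAnTE's implementation and uses made-up peptide abundances); it shows, under those assumptions, what a naive mean-based imputation followed by a median rollup looks like in Python.

```python
# Illustrative sketch only; not DAnTE's code. Demonstrates two operations the
# abstract describes: missing-value imputation and peptide-to-protein rollup.
import numpy as np
import pandas as pd

# Hypothetical log2 peptide abundances (rows: peptides, columns: samples).
peptides = pd.DataFrame(
    {"sample_A": [20.1, np.nan, 18.7, 22.0],
     "sample_B": [19.8, 21.2, np.nan, 21.5]},
    index=["PEP1", "PEP2", "PEP3", "PEP4"],
)
# Hypothetical mapping of each peptide to its parent protein.
protein_map = pd.Series(["ProtX", "ProtX", "ProtY", "ProtY"], index=peptides.index)

# Naive imputation: fill a missing value with that peptide's mean across samples.
imputed = peptides.apply(lambda row: row.fillna(row.mean()), axis=1)

# Rollup: summarize each protein as the median of its peptides, per sample.
proteins = imputed.groupby(protein_map).median()
print(proteins)
```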

396 citations

Journal ArticleDOI
TL;DR: A comprehensive review of the overlapping domains of the Sensor Web, citizen sensing and human-in-the-loop sensing in the era of Mobile and Social Web, and the roles these domains can play in environmental and public health surveillance and crisis/disaster informatics can be found in this article.
Abstract: 'Wikification of GIS by the masses' is a phrase-term first coined by Kamel Boulos in 2005, two years earlier than Goodchild's term 'Volunteered Geographic Information'. Six years later (2005-2011), OpenStreetMap and Google Earth (GE) are now full-fledged, crowdsourced 'Wikipedias of the Earth' par excellence, with millions of users contributing their own layers to GE, attaching photos, videos, notes and even 3-D (three dimensional) models to locations in GE. From using Twitter in participatory sensing and bicycle-mounted sensors in pervasive environmental sensing, to creating a 100,000-sensor geo-mashup using Semantic Web technology, to the 3-D visualisation of indoor and outdoor surveillance data in real-time and the development of next-generation, collaborative natural user interfaces that will power the spatially-enabled public health and emergency situation rooms of the future, where sensor data and citizen reports can be triaged and acted upon in real-time by distributed teams of professionals, this paper offers a comprehensive state-of-the-art review of the overlapping domains of the Sensor Web, citizen sensing and 'human-in-the-loop sensing' in the era of the Mobile and Social Web, and the roles these domains can play in environmental and public health surveillance and crisis/disaster informatics. We provide an in-depth review of the key issues and trends in these areas, the challenges faced when reasoning and making decisions with real-time crowdsourced data (such as issues of information overload, "noise", misinformation, bias and trust), the core technologies and Open Geospatial Consortium (OGC) standards involved (Sensor Web Enablement and Open GeoSMS), as well as a few outstanding project implementation examples from around the world.
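The review's Sensor Web layer builds on OGC Sensor Web Enablement services. As a minimal sketch of what consuming such a service can look like, the snippet below issues a standard SOS 2.0 GetObservation request over HTTP; the endpoint URL, offering, and observed-property identifiers are placeholders, not real services from the paper.

```python
# Hypothetical example of querying an OGC Sensor Observation Service (SOS),
# part of the Sensor Web Enablement stack discussed in the review.
import requests

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "urn:example:offering:air_quality",      # placeholder identifier
    "observedProperty": "urn:example:property:pm25",     # placeholder identifier
}
response = requests.get("https://sensors.example.org/sos", params=params, timeout=30)
print(response.status_code)
print(response.text[:500])  # raw O&M/XML payload, truncated for display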

395 citations

Journal ArticleDOI
TL;DR: This article found that ocean surface temperatures together with changes in greenhouse gases did not induce a substantial reduction in summertime precipitation over the central Great Plains during 2012, but such an extreme drought event was still a rare occurrence within the spread of 2012 climate model simulations.
Abstract: Central Great Plains precipitation deficits during May-August 2012 were the most severe since at least 1895, eclipsing the Dust Bowl summers of 1934 and 1936. Drought developed suddenly in May, following near-normal precipitation during winter and early spring. Its proximate cause was a reduction in atmospheric moisture transport into the Great Plains from the Gulf of Mexico. Processes that generally provide air mass lift and condensation were mostly absent, including a lack of frontal cyclones in late spring followed by suppressed deep convection in summer owing to large-scale subsidence and atmospheric stabilization. Seasonal forecasts did not predict the summer 2012 central Great Plains drought development, which therefore arrived without early warning. Climate simulations and empirical analysis suggest that ocean surface temperatures together with changes in greenhouse gases did not induce a substantial reduction in summertime precipitation over the central Great Plains during 2012. Yet, diagnosis of the retrospective climate simulations also reveals a regime shift toward warmer and drier summertime Great Plains conditions during the recent decade, most probably due to natural decadal variability. As a consequence, the probability for severe summer Great Plains drought may have increased in the last decade compared to the 1980s and 1990s, and the so-called tail-risk for severe drought may have been heightened in summer 2012. Such an extreme drought event was nonetheless still found to be a rare occurrence within the spread of 2012 climate model simulations. Implications of this study's findings for U.S. seasonal drought forecasting are discussed.
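The "rare occurrence within the spread of 2012 climate model simulations" statement amounts to a simple tail-probability calculation over an ensemble. The sketch below illustrates that calculation on invented numbers (the ensemble size, spread, and observed deficit are assumptions, not values from the study).

```python
# Hypothetical illustration of estimating how rare an observed drought is
# within an ensemble of simulated seasonal precipitation anomalies.
import numpy as np

rng = np.random.default_rng(0)
# Invented May-August precipitation anomalies (mm) from 500 model members.
ensemble_anomaly_mm = rng.normal(loc=0.0, scale=40.0, size=500)
observed_anomaly_mm = -110.0  # hypothetical observed 2012 deficit

# Tail probability: fraction of members at least as dry as the observation.
tail_probability = np.mean(ensemble_anomaly_mm <= observed_anomaly_mm)
print(f"Fraction of members at least as dry as observed: {tail_probability:.3f}")
```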

393 citations

Journal ArticleDOI
TL;DR: The Qiita web platform provides access to large amounts of public microbial multi-omic data and enables easy analysis and meta-analysis of standardized private and public data.
Abstract: Multi-omic insights into microbiome function and composition typically advance one study at a time. However, in order for relationships across studies to be fully understood, data must be aggregated into meta-analyses. This makes it possible to generate new hypotheses by finding features that are reproducible across biospecimens and data layers. Qiita dramatically accelerates such integration tasks in a web-based microbiome-comparison platform, which we demonstrate with Human Microbiome Project and Integrative Human Microbiome Project (iHMP) data.
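The core integration task Qiita automates is aggregating per-study feature tables so samples can be compared across studies. The snippet below is not Qiita's API; it is a minimal pandas sketch, on made-up counts, of that alignment step.

```python
# Illustrative only: aligning feature tables from two hypothetical studies on
# shared feature IDs so they can enter a single meta-analysis.
import pandas as pd

study1 = pd.DataFrame({"sampleA": [10, 0, 5]}, index=["featX", "featY", "featZ"])
study2 = pd.DataFrame({"sampleB": [3, 7]}, index=["featX", "featY"])

# Outer-join on feature IDs; features absent from a study become zero counts.
merged = study1.join(study2, how="outer").fillna(0).astype(int)
print(merged)
```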

393 citations

Journal ArticleDOI
TL;DR: In this paper, a carbon support combining the advantages of both single-layer graphene and highly graphitic carbon was reported as a durable alternative support material for Pt nanoparticles for oxygen reduction in fuel cells.

393 citations


Authors


Name | H-index | Papers | Citations
Yi Cui | 220 | 1015 | 199725
Derek R. Lovley | 168 | 582 | 95315
Xiaoyuan Chen | 149 | 994 | 89870
Richard D. Smith | 140 | 1180 | 79758
Taeghwan Hyeon | 139 | 563 | 75814
Jun Liu | 138 | 616 | 77099
Federico Capasso | 134 | 1189 | 76957
Jillian F. Banfield | 127 | 562 | 60687
Mary M. Horowitz | 127 | 557 | 56539
Frederick R. Appelbaum | 127 | 677 | 66632
Matthew Jones | 125 | 1161 | 96909
Rainer Storb | 123 | 905 | 58780
Zhifeng Ren | 122 | 695 | 71212
Wei Chen | 122 | 1946 | 89460
Thomas E. Mallouk | 122 | 549 | 52593
Network Information
Related Institutions (5)
ETH Zurich
122.4K papers, 5.1M citations

91% related

Centre national de la recherche scientifique
382.4K papers, 13.6M citations

91% related

Georgia Institute of Technology
119K papers, 4.6M citations

90% related

Tsinghua University
200.5K papers, 4.5M citations

90% related

Pennsylvania State University
196.8K papers, 8.3M citations

90% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 130
2022 | 459
2021 | 1,794
2020 | 1,795
2019 | 1,598
2018 | 1,619