Institution
Pittsburgh Supercomputing Center
Facility • Pittsburgh, Pennsylvania, United States
About: Pittsburgh Supercomputing Center is a facility based in Pittsburgh, Pennsylvania, United States. It is known for research contributions in the topics of Population and Ab initio. The organization has 170 authors who have published 395 publications receiving 20,948 citations.
Topics: Population, Ab initio, Vaccination, Influenza vaccine, Health care
Papers published on a yearly basis
Papers
Affiliations: Broad Institute, Commonwealth Scientific and Industrial Research Organisation, Massachusetts Institute of Technology, Hebrew University of Jerusalem, Science for Life Laboratory, Pittsburgh Supercomputing Center, Oklahoma State University–Stillwater, Griffith University, University of Wisconsin-Madison, Dresden University of Technology, California Institute for Quantitative Biosciences, Flanders Institute for Biotechnology, Parco Tecnologico Padano, United States Department of Agriculture, Purdue University, Indiana University
TL;DR: This protocol provides a workflow for genome-independent transcriptome analysis leveraging the Trinity platform and presents Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes.
Abstract: De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.
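The abstract mentions RSEM for transcript abundance estimation. RSEM's actual estimator resolves ambiguously mapped reads with an EM algorithm; the sketch below only illustrates the final TPM (transcripts per million) normalization that such tools report, with hypothetical counts and transcript lengths.

```python
# Illustrative sketch of the TPM normalization, NOT RSEM's EM estimator.
# Inputs (counts, lengths) below are made-up example values.

def tpm(counts, lengths_bp):
    """counts: reads assigned per transcript; lengths_bp: transcript lengths in bases."""
    # Length-normalize first: reads per base of each transcript.
    rates = [c / l for c, l in zip(counts, lengths_bp)]
    total = sum(rates)
    # Scale so the values sum to one million across all transcripts.
    return [r / total * 1_000_000 for r in rates]

# Three hypothetical transcripts of different lengths:
values = tpm(counts=[100, 200, 300], lengths_bp=[1000, 2000, 1500])
```

Because TPM normalizes by length before scaling, a long transcript with the same read count as a short one receives a proportionally lower abundance, which makes values comparable across transcripts within a sample.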
6,369 citations
01 Sep 2014
TL;DR: XSEDE's integrated, comprehensive suite of advanced digital services federates with other high-end facilities and with campus-based resources, serving as the foundation for a national e-science infrastructure ecosystem.
Abstract: Computing in science and engineering is now ubiquitous: digital technologies underpin, accelerate, and enable new, even transformational, research in all domains. Access to an array of integrated and well-supported high-end digital services is critical for the advancement of knowledge. Driven by community needs, the Extreme Science and Engineering Discovery Environment (XSEDE) project substantially enhances the productivity of a growing community of scholars, researchers, and engineers (collectively referred to as "scientists"' throughout this article) through access to advanced digital services that support open research. XSEDE's integrated, comprehensive suite of advanced digital services federates with other high-end facilities and with campus-based resources, serving as the foundation for a national e-science infrastructure ecosystem. XSEDE's e-science infrastructure has tremendous potential for enabling new advancements in research and education. XSEDE's vision is a world of digitally enabled scholars, researchers, and engineers participating in multidisciplinary collaborations to tackle society's grand challenges.
2,856 citations
01 Jul 1997
TL;DR: A performance model for the TCP Congestion Avoidance algorithm is analyzed; it predicts the bandwidth of a sustained TCP connection subjected to light to moderate packet losses, such as loss caused by network congestion.
Abstract: In this paper, we analyze a performance model for the TCP Congestion Avoidance algorithm. The model predicts the bandwidth of a sustained TCP connection subjected to light to moderate packet losses, such as loss caused by network congestion. It assumes that TCP avoids retransmission timeouts and always has sufficient receiver window and sender data. The model predicts the Congestion Avoidance performance of nearly all TCP implementations under restricted conditions and of TCP with Selective Acknowledgements over a much wider range of Internet conditions. We verify the model through both simulation and live Internet measurements. The simulations test several TCP implementations under a range of loss conditions and in environments with both drop-tail and RED queuing. The model is also compared to live Internet measurements using the TReno diagnostic and real TCP implementations. We also present several applications of the model to problems of bandwidth allocation in the Internet. We use the model to analyze networks with multiple congested gateways; this analysis shows strong agreement with prior work in this area. Finally, we present several important implications about the behavior of the Internet in the presence of high load from diverse user communities.
1,580 citations
TL;DR: This work used two-photon calcium imaging to characterize a functional property—the preferred stimulus orientation—of a group of neurons in the mouse primary visual cortex and large-scale electron microscopy of serial thin sections was used to trace a portion of these neurons’ local network.
Abstract: In the cerebral cortex, local circuits consist of tens of thousands of neurons, each of which makes thousands of synaptic connections. Perhaps the biggest impediment to understanding these networks is that we have no wiring diagrams of their interconnections. Even if we had a partial or complete wiring diagram, however, understanding the network would also require information about each neuron's function. Here we show that the relationship between structure and function can be studied in the cortex with a combination of in vivo physiology and network anatomy. We used two-photon calcium imaging to characterize a functional property--the preferred stimulus orientation--of a group of neurons in the mouse primary visual cortex. Large-scale electron microscopy of serial thin sections was then used to trace a portion of these neurons' local network. Consistent with a prediction from recent physiological experiments, inhibitory interneurons received convergent anatomical input from nearby excitatory neurons with a broad range of preferred orientations, although weak biases could not be rejected.
908 citations
Affiliations: Bielefeld University, BRICS, University of Düsseldorf, Oregon State University, University of California, San Diego, Aarhus University, University of Copenhagen, Roskilde University, Joint Genome Institute, Pittsburgh Supercomputing Center, Saint Petersburg State University, Max Planck Society, University of Vienna, University of Technology, Sydney, Centre national de la recherche scientifique, Genome Institute of Singapore, University of Warwick, University of Tübingen, Intel, French Institute for Research in Computer Science and Automation, Taipei Medical University, Joint BioEnergy Institute, Lawrence Berkeley National Laboratory, Georgia Institute of Technology, University of Calgary, University of Göttingen, National Health Research Institutes, San Diego State University, Boyce Thompson Institute for Plant Research, Robert Koch Institute, Coordenadoria de Aperfeiçoamento de Pessoal de Nível Superior, University of Maryland, College Park, Newcastle University, Leibniz Association, ETH Zurich
TL;DR: The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups.
Abstract: Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.
593 citations
Authors
Showing all 171 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yang Wang | 67 | 437 | 16588 |
David N. Beratan | 64 | 280 | 16118 |
Thomas E. Cheatham | 58 | 156 | 32976 |
Bruce Y. Lee | 52 | 299 | 9845 |
Michael F. Crowley | 45 | 137 | 5871 |
Bruce A. Armitage | 44 | 125 | 6140 |
Jeffry D. Madura | 39 | 143 | 38515 |
Shawn T. Brown | 37 | 136 | 10165 |
Jeffrey P. Gardner | 28 | 57 | 3472 |
Glenna So Ming Tong | 26 | 52 | 2494 |
Jeffrey D. Evanseck | 25 | 62 | 14848 |
Roberto Gomez | 24 | 46 | 1395 |
Nigel Goddard | 24 | 71 | 2220 |
Matt Mathis | 22 | 38 | 6008 |
Joel Welling | 21 | 42 | 1349 |