
CUDA

About: CUDA is a research topic. Over its lifetime, 10,942 publications have been published within this topic, receiving 218,748 citations. The topic is also known as Compute Unified Device Architecture and Nvidia CUDA.


Papers

Open access · Posted Content
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, +4 more · Institutions (2)
Abstract: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (≈ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
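The separation of model representation from implementation that the abstract mentions is concrete in Caffe: networks are declared in plain-text protobuf files rather than in code, and the same description runs on CPU or CUDA GPU. A minimal sketch of such a net definition (the net and layer names and the shapes here are invented for illustration; the layer types shown are standard Caffe types):

```protobuf
# Hypothetical minimal Caffe net definition: one fully connected
# layer followed by a softmax, on 28x28 single-channel inputs.
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  inner_product_param { num_output: 10 }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip1"
  top: "prob"
}
```

Because the model lives in this declarative form, switching between prototyping on a laptop CPU and deploying on a cloud GPU is a runtime flag, not a code change.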


Topics: Deep learning (52%), CUDA (51%)

12,530 Citations


Proceedings Article · DOI: 10.1145/2647868.2654889
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, +4 more · Institutions (2)
03 Nov 2014
Abstract: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx. 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.


Topics: Deep learning (53%), Theano (52%), CUDA (51%)

9,244 Citations


Open access
01 Jan 2019
Abstract: The Portable, Extensible Toolkit for Scientific Computation (PETSc), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It supports MPI, and GPUs through CUDA or OpenCL, as well as hybrid MPI-GPU parallelism. PETSc (sometimes called PETSc/Tao) also contains the Tao optimization software library.


Topics: CUDA (54%)

2,403 Citations


Proceedings Article · DOI: 10.1145/1401132.1401152
Scalable Parallel Programming with CUDA
11 Aug 2008
Abstract: The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop mainstream application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.


Topics: Data parallelism (64%), Task parallelism (63%), Multi-core processor (54%)

2,143 Citations


Open access · Book
31 Dec 2012
Abstract: Programming Massively Parallel Processors: A Hands-on Approach shows both student and professional alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. This best-selling guide to CUDA and GPU parallel programming has been revised with more parallel programming examples, commonly used libraries such as Thrust, and explanations of the latest tools. With these improvements, the book retains its concise, intuitive, practical approach based on years of road-testing in the authors' own parallel computing courses.

Updates in this new edition include:
- New coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more
- Increased coverage of related technology, OpenCL, and new material on algorithm patterns, GPU clusters, host programming, and data parallelism
- Two new case studies (on MRI reconstruction and molecular visualization) that explore the latest applications of CUDA and GPUs for scientific research and high-performance computing

Table of Contents:
1. Introduction
2. History of GPU Computing
3. Introduction to Data Parallelism and CUDA C
4. Data-Parallel Execution Model
5. CUDA Memories
6. Performance Considerations
7. Floating-Point Considerations
8. Parallel Patterns: Convolutions
9. Parallel Patterns: Prefix Sum
10. Parallel Patterns: Sparse Matrix-Vector Multiplication
11. Application Case Study: Advanced MRI Reconstruction
12. Application Case Study: Molecular Visualization and Analysis
13. Parallel Programming and Computational Thinking
14. An Introduction to OpenCL
15. Parallel Programming with OpenACC
16. Thrust: A Productivity-Oriented Library for CUDA
17. CUDA FORTRAN
18. An Introduction to C++ AMP
19. Programming a Heterogeneous Computing Cluster
20. CUDA Dynamic Parallelism
21. Conclusions and Future Outlook
Appendix A: Matrix Multiplication Host-Only Version Source Code
Appendix B: GPU Compute Capabilities


Topics: CUDA (66%), Parallel programming model (64%), Data parallelism (64%)

1,591 Citations


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    7
2021    358
2020    501
2019    508
2018    543
2017    687

Top Attributes


Topic's top 5 most impactful authors:

Wen-mei W. Hwu: 53 papers, 4.8K citations
Jack Dongarra: 22 papers, 790 citations
Koji Nakano: 19 papers, 395 citations
Sergei Gorlatch: 17 papers, 301 citations
Bertil Schmidt: 17 papers, 852 citations

Network Information

Related Topics (5):

Speedup: 23.6K papers, 390K citations (94% related)
Parallel algorithm: 23.6K papers, 452.6K citations (92% related)
Shared memory: 18.7K papers, 355.1K citations (90% related)
Scalability: 50.9K papers, 931.6K citations (90% related)
Grid computing: 20.3K papers, 352.7K citations (89% related)