
Michael Garland

Researcher at Nvidia

Publications: 131
Citations: 18,564

Michael Garland is an academic researcher at Nvidia. He has contributed to research topics including CUDA and polygon meshes. The author has an h-index of 50 and has co-authored 120 publications receiving 17,536 citations. Previous affiliations of Michael Garland include Carnegie Mellon University and the University of Virginia.

Papers
Proceedings Article

Mesh modelling with curve analogies

TL;DR: To avoid the difficulty of specifying fully 3D example meshes, this paper applies curve analogies to families of curves on an object's surface and uses the filtered curves to drive a transformation of the mesh.
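
A minimal sketch of the general idea, assuming a hypothetical setup in which a curve on the surface is "filtered" (here, simply smoothed) and the resulting per-point displacement is spread to nearby mesh vertices with a Gaussian falloff; the function names and the transfer scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def smooth_curve(curve, iterations=10):
    """Illustrative 'filter': Laplacian smoothing of a polyline (N x 3 array)."""
    c = curve.copy()
    for _ in range(iterations):
        c[1:-1] = 0.5 * c[1:-1] + 0.25 * (c[:-2] + c[2:])
    return c

def deform_mesh(vertices, curve, filtered_curve, radius=0.5):
    """Propagate per-point curve displacements to nearby mesh vertices
    with a Gaussian falloff (hypothetical transfer scheme)."""
    displacement = filtered_curve - curve            # how the filtered curve moved
    deformed = vertices.copy()
    for p, d in zip(curve, displacement):
        dist = np.linalg.norm(vertices - p, axis=1)  # distance of each vertex to this curve point
        weight = np.exp(-(dist / radius) ** 2)       # smooth falloff with distance
        deformed += weight[:, None] * d
    return deformed
```
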
Patent

System, method, and computer program product for assigning elements of a matrix to processing threads with increased contiguousness

TL;DR: A system, method, and computer program product are provided for assigning elements of a matrix to processing threads, using an algorithm that increases the contiguousness of the elements processed by each thread.
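
A small sketch of the idea in Python (the patent itself targets GPU threads; the chunking scheme and function name below are assumptions for illustration): instead of striding elements across threads round-robin, each thread is handed one contiguous span of roughly equal size.

```python
def assign_contiguous(num_elements, num_threads):
    """Give each thread one contiguous span of matrix elements
    (illustrative; a real kernel would derive its span from thread/block IDs)."""
    base, rem = divmod(num_elements, num_threads)
    spans, start = [], 0
    for t in range(num_threads):
        length = base + (1 if t < rem else 0)   # spread the remainder over the first threads
        spans.append(range(start, start + length))
        start += length
    return spans

# Example: 10 elements over 3 threads -> [range(0, 4), range(4, 7), range(7, 10)]
print(assign_contiguous(10, 3))
```
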
Posted Content

A Programmable Approach to Model Compression

TL;DR: A programmable system for model compression called Condensa, which uses a novel sample-efficient constrained Bayesian optimization algorithm to automatically infer desirable sparsity ratios given a strategy and a user-provided objective.
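
The Condensa paper describes inferring sparsity ratios automatically via sample-efficient constrained Bayesian optimization. The sketch below substitutes a plain constraint-filtered random search for that optimizer just to show the shape of the loop; the function names and the accuracy-constraint form are assumptions, not Condensa's actual API.

```python
import random

def infer_sparsity(evaluate, accuracy_floor, trials=20, seed=0):
    """Pick the sparsity ratio that maximizes a user objective (e.g. throughput)
    subject to an accuracy constraint. Stand-in for constrained Bayesian
    optimization: here candidate ratios are simply sampled uniformly at random."""
    rng = random.Random(seed)
    best_ratio, best_score = None, float("-inf")
    for _ in range(trials):
        ratio = rng.uniform(0.0, 0.99)           # candidate fraction of weights to prune
        accuracy, objective = evaluate(ratio)    # user-provided: compress, fine-tune, measure
        if accuracy >= accuracy_floor and objective > best_score:
            best_ratio, best_score = ratio, objective
    return best_ratio

# Hypothetical usage: evaluate(ratio) would compress the model at that sparsity,
# then return (validation accuracy, objective value such as inference speedup).
```
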
Patent

Universal data pipeline

TL;DR: A history-preserving data pipeline provides immutable, versioned datasets, making it possible to determine what data a dataset contained at a point in the past, even if that data is no longer in the current version of the dataset.
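
A toy sketch of the immutable, versioned-dataset idea (class and method names are illustrative, not the patent's actual design): every write appends a new version rather than mutating the current one, so the dataset's contents can be reconstructed as of any earlier version.

```python
import copy

class VersionedDataset:
    """Append-only dataset: each commit stores a full immutable snapshot."""
    def __init__(self):
        self._versions = []                     # list of snapshots; index = version number

    def commit(self, records):
        self._versions.append(copy.deepcopy(list(records)))
        return len(self._versions) - 1          # version id of the new snapshot

    def at_version(self, version):
        """Read the dataset as it was at an earlier point in time."""
        return list(self._versions[version])

ds = VersionedDataset()
v0 = ds.commit([{"id": 1, "value": "a"}])
v1 = ds.commit([{"id": 1, "value": "a"}, {"id": 2, "value": "b"}])
assert ds.at_version(v0) == [{"id": 1, "value": "a"}]   # old contents still recoverable
```

A production system would more likely store deltas or content-addressed chunks rather than full copies, but the lookup-by-version behavior is the same.
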
Posted Content

Accelerating Reinforcement Learning through GPU Atari Emulation

TL;DR: The CUDA Learning Environment (CULE) is a CUDA port of the Atari Learning Environment used for developing deep reinforcement learning algorithms; it overcomes many limitations of existing CPU-based emulators and scales naturally to multiple GPUs.
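
CULE's key idea is to run many Atari emulator instances in parallel on the GPU so that an entire batch of environments is advanced with a single call. The sketch below shows only that batched-stepping pattern with a toy NumPy environment; the class and method names are assumptions and do not reflect CULE's real interface.

```python
import numpy as np

class ToyBatchedEnv:
    """Stands in for a GPU emulator: steps N environments with one vectorized call."""
    def __init__(self, num_envs, num_actions=4, seed=0):
        self.num_envs, self.num_actions = num_envs, num_actions
        self.rng = np.random.default_rng(seed)
        self.states = np.zeros(num_envs)

    def step(self, actions):
        self.states += actions                       # toy dynamics applied to the whole batch
        rewards = self.rng.random(self.num_envs)
        dones = self.rng.random(self.num_envs) < 0.01
        self.states[dones] = 0.0                     # reset finished environments in place
        return self.states.copy(), rewards, dones

envs = ToyBatchedEnv(num_envs=1024)
for _ in range(100):
    actions = np.random.randint(0, envs.num_actions, size=envs.num_envs)
    obs, rewards, dones = envs.step(actions)         # one call advances all 1024 environments
```
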