
Aaron Walsman

Researcher at University of Washington

Publications -  20
Citations -  1764

Aaron Walsman is an academic researcher at the University of Washington. He has contributed to research in topics including computer science and pose estimation. He has an h-index of 8 and has co-authored 17 publications receiving 1,055 citations. His previous affiliations include Carnegie Mellon University.

Papers
Proceedings ArticleDOI

The YCB object and model set: Towards common benchmarks for manipulation research

TL;DR: The Yale-CMU-Berkeley (YCB) Object and Model set is intended to be used for benchmarking in robotic grasping and manipulation research, and provides high-resolution RGBD scans, physical properties and geometric models of the objects for easy incorporation into manipulation and planning software platforms.
Journal ArticleDOI

Benchmarking in Manipulation Research: Using the Yale-CMU-Berkeley Object and Model Set

TL;DR: The Yale-Carnegie Mellon University-Berkeley object and model set is presented, intended to be used to facilitate benchmarking in robotic manipulation research and to enable the community of manipulation researchers to more easily compare approaches and continually evolve standardized benchmarking tests and metrics as the field matures.
Journal ArticleDOI

Yale-CMU-Berkeley dataset for robotic manipulation research

TL;DR: An image and model dataset of the real-life objects from the Yale-CMU-Berkeley Object Set, which is specifically designed for benchmarking in manipulation research, is presented.
Journal ArticleDOI

Benchmarking in Manipulation Research: The YCB Object and Model Set and Benchmarking Protocols.

TL;DR: The Yale-CMU-Berkeley (YCB) Object and Model Set is a collection of objects and models designed to cover a wide range of aspects of the manipulation problem, including objects of daily life with different shapes, sizes, textures, weights, and rigidity.
Posted Content

CHALET: Cornell House Agent Learning Environment.

TL;DR: The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.