Author

Ethan Rublee

Other affiliations: Willow Garage
Bio: Ethan Rublee is an academic researcher from Google. The author has contributed to research in topics: Object (computer science) & Suction. The author has an h-index of 18 and has co-authored 29 publications receiving 7,567 citations. Previous affiliations of Ethan Rublee include Willow Garage.

Papers
Patent
John Zevenbergen, Ethan Rublee
16 Jul 2014
TL;DR: In this article, the authors describe a control system for a suction gripper with multiple suction cups, where a sensor is positioned between the vacuum pump and each suction cup and measures the vacuum pressure of that cup.
Abstract: Example embodiments may provide for control of a suction gripper with multiple suction cups. One example system includes a suction gripper and a control system. The suction gripper may include a vacuum pump, a plurality of suction cups coupled to the vacuum pump, and a plurality of sensors corresponding to the suction cups, where a sensor is positioned between the vacuum pump and a suction cup and measures a vacuum pressure of the suction cup. The control system may be configured to activate the vacuum pump to cause the suction gripper to apply suction to an object through one or more active suction cups, receive sensor data indicative of the vacuum pressure of the one or more active suction cups from the corresponding sensors, identify at least one suction cup to deactivate from the one or more active suction cups, and deactivate the at least one identified suction cup.

35 citations
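The deactivation logic the abstract describes — read per-cup vacuum sensors, then switch off cups that have lost their seal — can be sketched in a few lines. All names and the threshold below are illustrative assumptions, not taken from the patent:

```python
# Sketch of the multi-cup suction control loop (hypothetical names/threshold).
MIN_VACUUM = 0.6  # fraction of full vacuum below which a cup has lost its seal

def cups_to_deactivate(pressures, active):
    """Return the active cups whose measured vacuum is below the threshold."""
    return [cup for cup in active if pressures[cup] < MIN_VACUUM]

def control_step(pressures, active):
    """One iteration: read per-cup sensors, deactivate cups that lost suction."""
    for cup in cups_to_deactivate(pressures, active):
        active.remove(cup)  # valve off the leaking cup; pump keeps the rest
    return active
```

Keeping the decision per-cup means a single leaking cup (e.g. one that overhangs the object's edge) does not bleed vacuum from the cups that still hold.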

Patent
07 Oct 2015
TL;DR: In this article, the first data channel between the mobile robotic device and the first battery pack is established, and the data acquired by the mobile robot during operation is transferred to a local data storage component of the battery pack.
Abstract: An example method includes receiving, by a mobile robotic device, power from a battery of a first battery pack in order to operate within an environment. The method further includes establishing a first data channel between the mobile robotic device and the first battery pack. The method also includes using the first data channel to transfer sensor data acquired by the mobile robotic device during operation to a local data storage component of the first battery pack. The method additionally includes navigating, by the mobile robotic device, to a battery exchange station to transfer the first battery pack containing the battery and the local data storage component with the sensor data to the battery exchange station. The method further includes after transferring the first battery pack to the battery exchange station, receiving a second battery pack from the battery exchange station to continue operation within the environment.

34 citations
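The method above is essentially a data-carrying battery swap: sensor data is offloaded onto the pack's local storage, and the depleted pack leaves the robot with the data aboard. A minimal sketch, with class and method names that are assumptions rather than anything from the patent:

```python
# Illustrative sketch of the battery-swap data flow (hypothetical API).
class BatteryPack:
    def __init__(self, pack_id):
        self.pack_id = pack_id
        self.storage = []  # the pack's local data storage component

class Robot:
    def __init__(self, pack):
        self.pack = pack
        self.sensor_log = []

    def record(self, sample):
        self.sensor_log.append(sample)

    def offload_to_pack(self):
        # Use the data channel to move sensor data onto the pack's storage.
        self.pack.storage.extend(self.sensor_log)
        self.sensor_log.clear()

    def swap_at_station(self, fresh_pack):
        # Hand over the depleted pack (carrying the data); take a fresh one.
        depleted = self.pack
        self.pack = fresh_pack
        return depleted
```

The design point is that the exchange station receives data and a drained battery in one physical transfer, so no wireless uplink is needed during operation.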

Patent
14 Mar 2014
TL;DR: In this article, a robotic manipulator moves at least one physical object through a designated area in space while one or more optical sensors determine the location of a machine-readable code on the object and, based on the determined location, scan the code to determine the information associated with the object that is encoded in it.
Abstract: Methods and systems for recognizing machine-readable information on three-dimensional (3D) objects are described. A robotic manipulator may move at least one physical object through a designated area in space. As the at least one physical object is being moved through the designated area, one or more optical sensors may determine a location of a machine-readable code on the at least one physical object and, based on the determined location, scan the machine-readable code so as to determine information associated with the at least one physical object encoded in the machine-readable code. Based on the information associated with the at least one physical object, a computing device may then determine a respective location in a physical environment of the robotic manipulator at which to place the at least one physical object. The robotic manipulator may then be directed to place the at least one physical object at the respective location.

33 citations
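The last step of the abstract — mapping the decoded information to a placement location — reduces to a routing lookup. The table and names below are hypothetical, purely to illustrate that decision:

```python
# Hypothetical routing table: decoded code -> drop location.
ROUTING = {
    "SKU-123": "bin-A",
    "SKU-456": "bin-B",
}

def placement_for(decoded_code, default="holding-area"):
    """Map information decoded from the object's code to a place location."""
    return ROUTING.get(decoded_code, default)
```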

Patent
13 Jul 2017
TL;DR: In this paper, a control system for a heterogeneous fleet of robotic devices for collaborative object processing in an environment, such as a warehouse, is presented. The control system may be configured to cause one or more of the mobile robotic devices to deliver at least one object to at least one location within the area of reach of the fixed robotic manipulator.
Abstract: Example systems and methods may provide for a heterogeneous fleet of robotic devices for collaborative object processing in an environment, such as a warehouse. An example system includes a plurality of mobile robotic devices configured to transport one or more objects within an environment, a fixed robotic manipulator positioned within the environment that is configured to manipulate one or more objects within an area of reach of the fixed robotic manipulator, and a control system. The control system may be configured to cause one or more of the plurality of mobile robotic devices to deliver at least one object to at least one location within the area of reach of the fixed robotic manipulator, and to cause the fixed robotic manipulator to distribute the at least one object to a different one or more of the plurality of mobile robotic devices for delivery to one or more other locations within the environment.

32 citations

Patent
10 Dec 2015
TL;DR: In this paper, the authors present a system that allows dynamic updating of a plan to move objects with a robotic device, based on updated sensor data received from one or more sensors after the robotic manipulator performs each action.
Abstract: Example systems and methods allow for dynamic updating of a plan to move objects using a robotic device. One example method includes determining a virtual environment by one or more processors based on sensor data received from one or more sensors, the virtual environment representing a physical environment containing a plurality of physical objects, developing a plan, based on the virtual environment, to cause a robotic manipulator to move one or more of the physical objects in the physical environment, causing the robotic manipulator to perform a first action according to the plan, receiving updated sensor data from the one or more sensors after the robotic manipulator performs the first action, modifying the virtual environment based on the updated sensor data, determining one or more modifications to the plan based on the modified virtual environment, and causing the robotic manipulator to perform a second action according to the modified plan.

29 citations
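The method above is a sense-plan-act-replan cycle: act, re-sense, modify the virtual environment, re-derive the plan. A minimal sketch of that loop, with placeholder function names (the patent does not specify an API):

```python
# Sketch of the sense-plan-act-replan cycle (hypothetical names).
def run(world, make_plan, sense, execute, max_steps=100):
    """Alternate acting and re-sensing, re-planning as the scene changes."""
    plan = make_plan(world)
    for _ in range(max_steps):
        if not plan:
            break
        action = plan.pop(0)
        execute(action, world)     # perform the next action of the plan
        world = sense(world)       # updated sensor data -> modified virtual env
        plan = make_plan(world)    # re-derive the plan from the new state
    return world
```

Re-planning after every action is what lets the system cope with objects that shift, fall, or appear mid-task, at the cost of extra planning computation.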


Cited by
Journal ArticleDOI
TL;DR: ORB-SLAM as discussed by the authors is a feature-based monocular SLAM system that operates in real time, in small and large indoor and outdoor environments, with a survival of the fittest strategy that selects the points and keyframes of the reconstruction.
Abstract: This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.

4,522 citations
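The "survival of the fittest" strategy the abstract mentions includes culling redundant keyframes: ORB-SLAM discards a keyframe when 90% of its map points are observed in at least three other keyframes. A simplified toy version of that redundancy test (the real rule also conditions on observation scale, which this sketch omits):

```python
# Simplified version of ORB-SLAM's keyframe-culling redundancy test.
def redundant(keyframe_points, other_observations, min_obs=3, threshold=0.9):
    """True if >= threshold of this keyframe's map points are each seen
    by at least min_obs other keyframes."""
    if not keyframe_points:
        return False
    covered = sum(1 for p in keyframe_points
                  if other_observations.get(p, 0) >= min_obs)
    return covered / len(keyframe_points) >= threshold
```

Culling by redundancy rather than by age is why the map "only grows if the scene content changes": revisiting a mapped area adds keyframes that are soon judged redundant and removed.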

Journal ArticleDOI
19 Jun 2014-PeerJ
TL;DR: The advantages of open source for achieving the goals of the scikit-image library are highlighted, and several real-world image processing applications that use scikit-image are showcased.
Abstract: scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

3,903 citations

Journal ArticleDOI
TL;DR: A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation.
Abstract: This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.

3,807 citations

Journal ArticleDOI
TL;DR: ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities, is presented, being in most cases the most accurate SLAM solution.
Abstract: We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.

3,499 citations

Proceedings ArticleDOI
17 Mar 2017
TL;DR: Deformable convolutional networks as discussed by the authors augment the spatial sampling locations in the modules with additional offsets and learn the offsets from the target tasks without additional supervision; the new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard backpropagation.
Abstract: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.

3,318 citations