Author

Shital Shah

Bio: Shital Shah is an academic researcher from Microsoft. The author has contributed to research topics including Medicine and Computer science, has an h-index of 8, and has co-authored 17 publications receiving 1439 citations.

Papers
Posted Content
TL;DR: A new simulator built on Unreal Engine offers physically and visually realistic simulations for autonomous vehicles in the real world and is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms, and software protocols.
Abstract: Developing and testing algorithms for autonomous vehicles in the real world is an expensive and time-consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning, we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms, and software protocols. In addition, the modular design enables various components to be easily used independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.

979 citations
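
The simulator described in this paper was released as AirSim, which ships client libraries over the vehicle APIs mentioned above. Below is a minimal sketch of commanding the quadrotor through the public airsim Python package; the connection defaults and flight targets are illustrative assumptions, not details taken from the paper.

```python
import airsim

# Connect to a simulator instance running on localhost.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)   # hand control from RC to the API
client.armDisarm(True)

client.takeoffAsync().join()                     # blocking takeoff
client.moveToPositionAsync(10, 0, -5, 3).join()  # NED coordinates, 3 m/s
client.landAsync().join()

client.armDisarm(False)
client.enableApiControl(False)
```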

Book ChapterDOI
15 May 2017
TL;DR: In this paper, the authors present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for autonomous vehicles in real-world environments, including a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g., MavLink).
Abstract: Developing and testing algorithms for autonomous vehicles in the real world is an expensive and time-consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning, we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms, and software protocols. In addition, the modular design enables various components to be easily used independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.

938 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this article, the authors present an automatic method to generate drone trajectories, such that the imagery acquired during the flight will later produce a high-fidelity 3D model.
Abstract: Drones equipped with cameras are emerging as a powerful tool for large-scale aerial 3D scanning, but existing automatic flight planners do not exploit all available information about the scene, and can therefore produce inaccurate and incomplete 3D models. We present an automatic method to generate drone trajectories, such that the imagery acquired during the flight will later produce a high-fidelity 3D model. Our method uses a coarse estimate of the scene geometry to plan camera trajectories that: (1) cover the scene as thoroughly as possible; (2) encourage observations of scene geometry from a diverse set of viewing angles; (3) avoid obstacles; and (4) respect a user-specified flight time budget. Our method relies on a mathematical model of scene coverage that exhibits an intuitive diminishing returns property known as submodularity. We leverage this property extensively to design a trajectory planning algorithm that reasons globally about the non-additive coverage reward obtained across a trajectory, jointly with the cost of traveling between views. We evaluate our method by using it to scan three large outdoor scenes, and we perform a quantitative evaluation using a photorealistic video game simulator.

104 citations
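
The submodularity property mentioned above is what makes greedy selection a natural baseline: each candidate view is scored by its marginal coverage gain per unit of travel cost. The sketch below is a hedged illustration of that greedy intuition only, with hypothetical coverage sets and a hypothetical cost function; the paper's actual planner reasons globally about the whole trajectory rather than greedily.

```python
def greedy_scan_plan(views, coverage, travel_cost, budget):
    """Pick views by marginal coverage gained per unit of travel cost.

    views: iterable of candidate view ids
    coverage: dict mapping view -> set of surface points it observes
    travel_cost: function (from_view, to_view) -> flight time
    budget: total flight-time budget
    """
    covered, plan, spent, here = set(), [], 0.0, None
    remaining = set(views)
    while remaining:
        def ratio(v):
            step = travel_cost(here, v) if here is not None else 0.0
            return len(coverage[v] - covered) / max(step, 1e-9)
        best = max(remaining, key=ratio)
        step = travel_cost(here, best) if here is not None else 0.0
        if spent + step > budget or not (coverage[best] - covered):
            break  # budget exhausted or no new coverage available
        plan.append(best)
        covered |= coverage[best]
        spent += step
        here = best
        remaining.remove(best)
    return plan, covered
```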

Patent
Shital Shah
24 Sep 2010
TL;DR: In this paper, a client-specific or client-independent polling interval is provided to each client dynamically based on server load; the interval can be determined using a feedback control system or a Bucket Reservation Method.
Abstract: A client-specific or client-independent polling interval is provided to each client dynamically based on server load. The polling interval can be provided in the server polling response sent in response to a client polling request. The polling interval can be determined using a feedback control system or a Bucket Reservation Method. The server uses a next polling interval and a flag that indicates whether the previous polling request was ignored. Using these two parameters, the server can continuously control the polling frequency from the client to achieve optimal performance.

69 citations
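
As a rough illustration of the feedback-control variant, the server can scale each client's next interval by the deviation of current load from a target and return it in the polling response. This is a hedged sketch under assumed gains, bounds, and load metric; it is not the patented method itself.

```python
def next_polling_interval(current_interval, server_load,
                          target_load=0.7, gain=0.5,
                          min_interval=1.0, max_interval=300.0):
    """Return the next polling interval (seconds) to embed in the
    server's polling response. server_load is utilization in [0, 1];
    load above the target pushes clients to poll less often."""
    error = server_load - target_load
    interval = current_interval * (1.0 + gain * error)
    return max(min_interval, min(max_interval, interval))
```

The ignored-request flag described in the abstract could feed the same controller, for example by backing an ignored client off toward max_interval.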

Proceedings ArticleDOI
TL;DR: Presents AirSim-W, a simulation environment designed specifically for the domain of wildlife conservation; it includes an African savanna environment created in Unreal Engine, the integration of a new thermal infrared model based on radiometry, and a demonstrated detection improvement using simulated data generated by AirSim-W.
Abstract: Increases in poaching levels have led to the use of unmanned aerial vehicles (UAVs or drones) to count animals, locate animals in parks, and even find poachers. Finding poachers is often done at night through the use of long wave thermal infrared cameras mounted on these UAVs. Unfortunately, monitoring the live video stream from the conservation UAVs all night is an arduous task. In order to assist in this monitoring task, new techniques in computer vision have been developed. This work is based on a dataset which took approximately six months to label. However, further improvement in detection and future testing of autonomous flight require not only more labeled training data, but also an environment where algorithms can be safely tested. In order to meet both goals efficiently, we present AirSim-W, a simulation environment that has been designed specifically for the domain of wildlife conservation. This includes (i) creation of an African savanna environment in Unreal Engine, (ii) integration of a new thermal infrared model based on radiometry, (iii) API code expansions to follow objects of interest or fly in zig-zag patterns to generate simulated training data, and (iv) demonstrated detection improvement using simulated data generated by AirSim-W. With these additional simulation features, AirSim-W will be directly useful for wildlife conservation research.

49 citations
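
AirSim, which AirSim-W extends, exposes an Infrared image type through its camera API; AirSim-W's contribution is the radiometric model that puts physically meaningful temperatures behind that lookup. A minimal sketch of requesting such a frame via the public airsim Python package, assuming a running simulator:

```python
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# Request one uncompressed infrared frame from camera "0".
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Infrared, False, False)
])
ir = responses[0]
print(ir.width, ir.height, len(ir.image_data_uint8))
```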


Cited by
Journal ArticleDOI
TL;DR: Provides an extensive review of deep learning-based self-supervised methods for general visual feature learning from images or videos; as a subset of unsupervised learning, these methods learn general image and video features from large-scale unlabeled data without using any human-annotated labels.
Abstract: Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, a subset of unsupervised learning methods, are proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminologies of this field are described. Then the common deep neural network architectures that are used for self-supervised learning are summarized. Next, the schema and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used datasets for images, videos, audios, and 3D data, as well as the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions for self-supervised visual feature learning.

876 citations
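
As a concrete example of the pretext-task family such reviews cover, rotation prediction (after Gidaris et al.) trains a network to classify which of four rotations was applied to an unlabeled image. The PyTorch sketch below is illustrative; the backbone and data pipeline are placeholders, not anything prescribed by the review.

```python
import torch
import torch.nn as nn

class RotationPretext(nn.Module):
    """Classify which of 4 rotations (0/90/180/270 deg) was applied."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone            # any CNN -> (N, feat_dim)
        self.head = nn.Linear(feat_dim, 4)  # 4 rotation classes

    def forward(self, x):
        return self.head(self.backbone(x))

def rotate_batch(images):
    """Build 4 rotated copies of an (N, C, H, W) batch plus labels."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3))
                         for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels
```

After pretraining on these pseudo-labels with an ordinary cross-entropy loss, the backbone's features can be transferred to downstream tasks, which is the kind of evaluation the review's metrics measure.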

Journal ArticleDOI
TL;DR: This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated driving tasks where (D)RL methods have been employed, highlights the key challenges, both algorithmic and in deploying real-world autonomous driving agents, discusses the role of simulators in training agents, and covers methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
Abstract: With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework, now capable of learning complex policies in high-dimensional environments. This review summarises deep reinforcement learning (DRL) algorithms and provides a taxonomy of automated driving tasks where (D)RL methods have been employed, while addressing key computational challenges in the real-world deployment of autonomous driving agents. It also delineates adjacent domains such as behavior cloning, imitation learning, and inverse reinforcement learning, which are related to but are not classical RL algorithms. The role of simulators in training agents and methods to validate, test, and robustify existing solutions in RL are also discussed.

740 citations
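
For orientation, the core loop shared by the DQN-style methods such surveys cover is epsilon-greedy interaction plus temporal-difference updates from replayed transitions. The sketch below is a generic illustration over a hypothetical environment interface (reset, step, sample_action); it is not a method from the review.

```python
import random
import torch
import torch.nn as nn

def train(env, q_net, optimizer, episodes=100, gamma=0.99, eps=0.1):
    """Generic DQN-style loop over a hypothetical env interface:
    reset() -> state, step(a) -> (state, reward, done), sample_action()."""
    buffer, loss_fn = [], nn.MSELoss()
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            s = torch.as_tensor(state, dtype=torch.float32)
            # Epsilon-greedy: explore with probability eps, else exploit.
            if random.random() < eps:
                action = env.sample_action()
            else:
                action = int(q_net(s).argmax())
            next_state, reward, done = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state
            # One-step TD update on a randomly replayed transition.
            s0, a, r, s1, d = random.choice(buffer)
            s0 = torch.as_tensor(s0, dtype=torch.float32)
            s1 = torch.as_tensor(s1, dtype=torch.float32)
            target = r + (0.0 if d else gamma * q_net(s1).max().item())
            loss = loss_fn(q_net(s0)[a],
                           torch.tensor(target, dtype=torch.float32))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```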

Proceedings ArticleDOI
18 Jun 2018
TL;DR: Gibson, as discussed by the authors, is a real-world environment for active agents to learn visual perception tasks in real time; it is based upon virtualizing real spaces rather than artificially designed ones and currently includes over 1400 floor spaces from 572 full buildings.
Abstract: Developing visual perception models for active agents and sensorimotor control in the physical world is cumbersome, as existing algorithms are too slow to learn efficiently in real time and robots are fragile and costly. This has given rise to learning-in-simulation, which consequently raises the question of whether the results transfer to the real world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real world and reflecting its semantic complexity; II. having an internal synthesis mechanism, "Goggles", enabling trained models to be deployed in the real world without needing domain adaptation; III. embodiment of agents, making them subject to constraints of physics and space.

638 citations

Posted Content
TL;DR: Self-supervised learning, as discussed by the authors, is a subset of unsupervised image and video feature learning that aims to learn general features from large-scale unlabeled data without using any human-annotated labels.
Abstract: Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, a subset of unsupervised learning methods, are proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminologies of this field are described. Then the common deep neural network architectures that are used for self-supervised learning are summarized. Next, the main components and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions for self-supervised visual feature learning.

595 citations