Author

Gautham P. Das

Bio: Gautham P. Das is an academic researcher from the University of Lincoln. The author has contributed to research in topics: Retinal ganglion & Robot. The author has an h-index of 9 and has co-authored 30 publications receiving 423 citations. Previous affiliations of Gautham P. Das include Ulster University & Amrita Vishwa Vidyapeetham.

Papers
Proceedings ArticleDOI
13 Jun 2016
TL;DR: Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
Abstract: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor “frames” that consist of a constant number of DAVIS ON and OFF events. The network is thus “data driven” at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
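
The constant-event-count DVS "frames" described in this abstract can be illustrated with a short sketch. This is an assumed, minimal reconstruction of the idea, not the authors' code; the function name, sensor resolution and events-per-frame value are placeholders.

```python
# A minimal sketch (not the authors' code) of how a constant-event-count
# "DVS frame" can be built: accumulate a fixed number of ON/OFF events into
# a 2D histogram, so the frame rate scales with scene activity.
import numpy as np

def events_to_frame(events, width=240, height=180, events_per_frame=2000):
    """events: iterable of (x, y, polarity) tuples with polarity in {+1, -1}.
    Yields a 2D frame each time exactly `events_per_frame` events have arrived,
    so busier scenes produce frames more often (the data-driven sample rate)."""
    frame = np.zeros((height, width), dtype=np.float32)
    count = 0
    for x, y, polarity in events:
        frame[y, x] += polarity                 # ON events add, OFF events subtract
        count += 1
        if count == events_per_frame:
            peak = np.abs(frame).max()
            yield frame / peak if peak > 0 else frame   # crude normalisation for a CNN input
            frame = np.zeros((height, width), dtype=np.float32)
            count = 0

# Hypothetical usage with synthetic events:
rng = np.random.default_rng(0)
fake_events = zip(rng.integers(0, 240, 6000),   # x
                  rng.integers(0, 180, 6000),   # y
                  rng.choice([-1, 1], 6000))    # polarity
for f in events_to_frame(fake_events):
    print(f.shape, float(f.min()), float(f.max()))
```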

92 citations

Proceedings ArticleDOI
22 May 2016
TL;DR: This paper reports an object tracking algorithm for a moving platform using the dynamic and active-pixel vision sensor (DAVIS) that takes advantage of both the active pixel sensor (APS) frame and dynamic vision sensor event outputs from the DAVIS.
Abstract: This paper reports an object tracking algorithm for a moving platform using the dynamic and active-pixel vision sensor (DAVIS). It takes advantage of both the active pixel sensor (APS) frame and dynamic vision sensor (DVS) event outputs from the DAVIS. The tracking is performed in a three-step manner: regions of interest (ROIs) are generated by cluster-based tracking using the DVS output, likely target locations are detected by using a convolutional neural network (CNN) on the APS output to classify the ROIs as foreground or background, and finally a particle filter infers the target location from the ROIs. Doing convolution only in the ROIs boosts the speed by a factor of 70 compared with full-frame convolutions for the 240×180 frame input from the DAVIS. The tracking accuracy on a predator and prey robot database reaches 90% at a cost of less than 20 ms/frame in Matlab on a normal PC without using a GPU.
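
As a companion to the three-step pipeline above, here is a minimal, assumed sketch of the final stage only: a particle filter that fuses classifier-approved ROI centres into a single target estimate. All names and parameters are illustrative; the cluster tracker and CNN stages are not reproduced here.

```python
# A minimal sketch (assumed, not the paper's code) of the third stage described
# above: a particle filter that infers a single target location from ROI
# detections that a classifier has already scored as foreground.
import numpy as np

class ParticleFilter2D:
    def __init__(self, n_particles=500, frame_size=(240, 180),
                 motion_std=5.0, meas_std=10.0, seed=0):
        self.rng = np.random.default_rng(seed)
        w, h = frame_size
        self.particles = self.rng.uniform([0, 0], [w, h], size=(n_particles, 2))
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.motion_std = motion_std
        self.meas_std = meas_std

    def step(self, roi_centers, roi_scores):
        """roi_centers: (k, 2) array of ROI centres; roi_scores: (k,) foreground scores."""
        # Predict: random-walk motion model.
        self.particles += self.rng.normal(0.0, self.motion_std, self.particles.shape)
        # Update: weight particles by proximity to classifier-approved ROIs.
        likelihood = np.zeros(len(self.particles))
        for (cx, cy), score in zip(roi_centers, roi_scores):
            d2 = np.sum((self.particles - [cx, cy]) ** 2, axis=1)
            likelihood += score * np.exp(-d2 / (2 * self.meas_std ** 2))
        self.weights = likelihood + 1e-12
        self.weights /= self.weights.sum()
        # Resample (multinomial resampling keeps the sketch short).
        idx = self.rng.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / len(self.particles))
        return self.particles.mean(axis=0)    # estimated target location

# Hypothetical usage: one ROI near (120, 90) scored 0.9 as foreground.
pf = ParticleFilter2D()
print(pf.step(np.array([[120.0, 90.0]]), np.array([0.9])))
```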

78 citations

Posted Content
TL;DR: In this paper, a convolutional neural network (CNN) was trained to output one of four steering classes (right, left, center, and non-visible) in a predator/prey scenario, using data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a robot.
Abstract: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.

72 citations

Journal ArticleDOI
TL;DR: Consensus Based Parallel Auction and Execution (CBPAE), a distributed algorithm based on auction and consensus principles for task allocation among multiple heterogeneous autonomous robots deployed in a healthcare facility, is proposed and shown to be suitable for highly dynamic real-world environments.
Abstract: Various ambient assisted living (AAL) technologies have been proposed for improving the living conditions of elderly people. One of them is to introduce robots to reduce dependency on support staff. The tasks commonly encountered in a healthcare facility such as a care home for elderly people are heterogeneous and of different priorities. A care home environment is also dynamic, and new emergency-priority tasks, which may lead to fatal situations if not attended to promptly, can appear at random. It is therefore better to use a multi-robot system (MRS) consisting of heterogeneous robots than to design a single robot capable of doing all tasks. An efficient task allocation algorithm capable of handling the dynamic nature of the environment, the heterogeneity of robots and tasks, and the prioritisation of tasks is required to reap the benefits of introducing an MRS. This paper proposes Consensus Based Parallel Auction and Execution (CBPAE), a distributed algorithm based on auction and consensus principles for task allocation in a system of multiple heterogeneous autonomous robots deployed in a healthcare facility. Unlike many existing market-based task allocation algorithms, which use a time-extended allocation of tasks before execution begins, the proposed algorithm uses a parallel auction and execution framework and is thus suitable for highly dynamic real-world environments. Before a task is assigned to a robot, the robots continuously resolve any conflicts in their bids using inter-robot communication and a consensus process in each robot. We demonstrate the effectiveness of CBPAE by comparing its simulation results with those of an existing market-based distributed multi-robot task allocation algorithm and through experiments on real robots.
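
The auction-and-consensus principle described above can be illustrated with a deliberately simplified sketch: robots bid on pending tasks, the bids are shared, and a task is assigned once everyone agrees on the highest bidder. This is an illustration of the general idea under assumed data structures, not the CBPAE algorithm itself.

```python
# A minimal, assumed sketch of auction-based allocation with shared bids.
# Task/Robot fields, the bid formula and the greedy loop are all placeholders.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int            # higher value = more urgent
    required_skill: str

@dataclass
class Robot:
    name: str
    skills: set
    busy: bool = False

    def bid(self, task: Task, distance: float) -> float:
        """Return a bid; zero means the robot cannot or will not take the task."""
        if self.busy or task.required_skill not in self.skills:
            return 0.0
        return task.priority / (1.0 + distance)   # prefer urgent, nearby tasks

def allocate(robots, tasks, distances):
    """distances[(robot_name, task_name)] -> travel cost. Returns {task: robot}."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: -t.priority):     # urgent tasks first
        bids = {r.name: r.bid(task, distances[(r.name, task.name)]) for r in robots}
        winner = max(bids, key=bids.get)
        if bids[winner] > 0.0:          # consensus stand-in: all robots see the same bids
            assignment[task.name] = winner
            next(r for r in robots if r.name == winner).busy = True
    return assignment

robots = [Robot("r1", {"deliver"}), Robot("r2", {"deliver", "assist"})]
tasks = [Task("meds", 3, "deliver"), Task("fall_alert", 5, "assist")]
distances = {("r1", "meds"): 2.0, ("r1", "fall_alert"): 1.0,
             ("r2", "meds"): 4.0, ("r2", "fall_alert"): 2.0}
print(allocate(robots, tasks, distances))
```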

68 citations

Proceedings ArticleDOI
21 Apr 2008
TL;DR: In this article, a new particle swarm optimization (PSO) approach to short-term hydro-thermal scheduling (HTS) problems is presented; it is well suited to hydro-thermal coordination problems, hydro economic dispatch with unit commitment, thermal economic dispatch with unit commitment, and scheduling of hydraulically coupled plants.
Abstract: This paper presents a new particle swarm optimization (PSO) approach to short-term hydro-thermal scheduling (HTS) problems. Various possible particle selections have been studied, and their effects on the global optimum are discussed. The effectiveness and stochastic nature of the proposed algorithm have been tested on a standard test case, and the results have been compared with earlier work. This paper also describes software developed for short-term hydro-thermal scheduling that considers hydro economic dispatch and thermal unit commitment. The proposed algorithm is well suited to hydro-thermal coordination problems, hydro economic dispatch with unit commitment, thermal economic dispatch with unit commitment, and scheduling of hydraulically coupled plants.
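
For readers unfamiliar with the technique named above, here is a minimal sketch of a generic PSO loop. The cost function, bounds and hyper-parameters are placeholders, not the paper's hydro-thermal model or its particle-selection variants.

```python
# A minimal sketch of particle swarm optimisation on a stand-in cost function.
import numpy as np

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))              # particle positions
    vel = np.zeros_like(pos)                                   # particle velocities
    pbest, pbest_cost = pos.copy(), np.apply_along_axis(cost, 1, pos)
    gbest = pbest[pbest_cost.argmin()]                         # swarm's best position so far
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                       # keep particles inside the bounds
        costs = np.apply_along_axis(cost, 1, pos)
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()]
    return gbest, pbest_cost.min()

# Hypothetical usage: minimise a simple quadratic stand-in for a generation-cost curve.
best_x, best_cost = pso(lambda x: np.sum((x - 3.0) ** 2), dim=4, bounds=(-10.0, 10.0))
print(best_x, best_cost)
```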

48 citations


Cited by
Journal ArticleDOI
TL;DR: This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
Abstract: Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of µs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
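
The event encoding described above (time, location and sign of each brightness change) can be made concrete with a tiny sketch. The dataclass and helper below are illustrative assumptions, not any specific camera's API.

```python
# A minimal sketch of an asynchronous event stream: each event carries a
# timestamp, a pixel location and a polarity (sign of the brightness change).
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t: float        # timestamp in seconds (resolved at microsecond scale by the sensor)
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def events_in_window(events: List[Event], t_start: float, t_end: float) -> List[Event]:
    """Slice the asynchronous stream by time, rather than by fixed frame index."""
    return [e for e in events if t_start <= e.t < t_end]

stream = [Event(0.000010, 12, 7, +1), Event(0.000025, 13, 7, -1), Event(0.000900, 40, 80, +1)]
print(events_in_window(stream, 0.0, 0.0005))
```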

697 citations

안태천, 노석범, 황국연, 王繼紅, 김용수 
01 Oct 2015
TL;DR: In this article, a fuzzy pattern classifier trained with the Extreme Learning Machine (ELM) algorithm is proposed to improve training speed and generalization performance, and it is evaluated on a variety of machine learning datasets.
Abstract: This paper proposes a new pattern classifier that combines the learning algorithm of the Extreme Learning Machine (ELM), a type of artificial neural network, with fuzzy set theory, which is robust to noise. The ELM learning algorithm, known for its very fast training and good generalization performance compared with conventional neural networks, is applied to a fuzzy pattern classifier to improve its training speed and pattern-classification generalization performance. A variety of machine learning datasets are used to evaluate the training speed and generalization performance of the proposed fuzzy pattern classifier.
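
The ELM training rule referred to above can be summarised in a short sketch: hidden-layer weights are fixed at random and only the output weights are solved in closed form with a pseudo-inverse. The fuzzy-set extension proposed in the paper is not reproduced; this is the plain ELM for illustration, with assumed names and parameters.

```python
# A minimal sketch of a plain Extreme Learning Machine classifier.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                                      # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))    # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ T                             # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Hypothetical usage on two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print((ELMClassifier().fit(X, y).predict(X) == y).mean())
```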

548 citations

Proceedings ArticleDOI
18 Jun 2018
TL;DR: A deep neural network approach is presented that unlocks the potential of event cameras on a challenging motion-estimation task (prediction of a vehicle's steering angle) and outperforms state-of-the-art algorithms based on standard cameras.
Abstract: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
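
The transfer-learning idea mentioned above, reusing a network trained on conventional images as the starting point for event-based steering prediction, can be sketched briefly. The model choice, input format and layer names are assumptions, not the paper's exact architecture.

```python
# A minimal sketch of transfer learning from frame-based to event-based vision:
# take an ImageNet-pretrained CNN, swap the classification head for a single
# regression output (steering angle), and fine-tune on event frames.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)   # weights learned on frame-based images
model.fc = nn.Linear(model.fc.in_features, 1)        # regression head for the steering angle

# Event frames accumulated into 3-channel tensors can then be fed directly:
dummy_event_frames = torch.randn(8, 3, 224, 224)     # batch of 8 hypothetical event frames
steering = model(dummy_event_frames)
print(steering.shape)                                 # torch.Size([8, 1])

# Fine-tuning would proceed as usual, e.g. with an MSE loss against recorded angles:
loss = nn.functional.mse_loss(steering, torch.zeros(8, 1))
loss.backward()
```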

344 citations

Proceedings ArticleDOI
21 Mar 2018
TL;DR: In this article, the authors introduce a novel event-based feature representation together with a new machine learning architecture, which uses local memory units to efficiently leverage past temporal information and build a robust event-based representation.
Abstract: Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: 1. The lack of effective low-level representations and architectures for event-based object classification and 2. The absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation.
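
One common way to fold past temporal information into a dense event representation, in the spirit of the local-memory idea described above, is a per-pixel "time surface" that decays exponentially with the time since the last event. The sketch below is an illustration under assumed parameters, not the paper's exact representation.

```python
# A minimal sketch of an exponentially decaying per-pixel time surface.
import numpy as np

def time_surface(events, width=304, height=240, tau=0.05):
    """events: iterable of (t, x, y) with t in seconds, assumed time-ordered.
    Returns an (height, width) map where recently active pixels are close to 1
    and pixels that have been silent for a while decay towards 0."""
    last_t = np.full((height, width), -np.inf)   # per-pixel memory of the last event time
    t_now = 0.0
    for t, x, y in events:
        last_t[y, x] = t
        t_now = t                                # reference time: the newest event seen
    return np.exp(-(t_now - last_t) / tau)

# Hypothetical usage: two recent events and one old one.
surface = time_surface([(0.00, 10, 10), (0.90, 20, 20), (1.00, 30, 30)])
print(surface[10, 10], surface[20, 20], surface[30, 30])
```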

297 citations