
Kester Duncan

Researcher at University of South Florida

Publications: 10
Citations: 140

Kester Duncan is an academic researcher from the University of South Florida. The author has contributed to research on topics including Sign (mathematics) and the Kadir–Brady saliency detector. The author has an h-index of 4 and has co-authored 10 publications receiving 131 citations.

Papers
Proceedings Article

Multi-scale superquadric fitting for efficient shape and pose recovery of unknown objects

TL;DR: This work proposes a low-latency, multi-scale voxelization strategy that rapidly fits superquadrics to single-view 3D point clouds, quickly and accurately estimating the shape and pose parameters of relevant objects in a scene.
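Superquadric fitting of this kind typically minimises a residual built from the superquadric inside-outside function. A minimal sketch of that standard formulation (the function and parameter names below are illustrative, not the paper's code; the paper's multi-scale voxelization step is not shown):

```python
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    """Inside-outside function of an axis-aligned superquadric.

    F == 1 on the surface, < 1 inside, > 1 outside.
    a1..a3 are the axis extents; e1, e2 are the shape exponents.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    term = (np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return term + np.abs(z / a3) ** (2 / e1)

def fit_error(points, params):
    """Per-point least-squares residual commonly minimised when
    recovering superquadric shape parameters from a point cloud."""
    a1, a2, a3, e1, e2 = params
    F = superquadric_F(points, a1, a2, a3, e1, e2)
    # Scaling by sqrt(a1*a2*a3) biases the fit toward the smallest
    # superquadric that still explains the points.
    return np.sqrt(a1 * a2 * a3) * (F ** e1 - 1.0)
```

With a1 = a2 = a3 = 1 and e1 = e2 = 1 the superquadric is a unit sphere, so points on the unit sphere give F = 1 and zero residual; a nonlinear least-squares solver would minimise this residual over the five parameters (plus a pose transform).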
Journal Article

Saliency in images and video: a brief survey

TL;DR: This paper surveys the role and advancement of saliency algorithms over the past decade, outlining the datasets and performance measures utilised as well as the computational techniques pervasive in the literature.
Book Chapter

Finding recurrent patterns from continuous sign language sentences for automated extraction of signs

TL;DR: In this paper, a probabilistic framework is presented to automatically learn recurring signs from multiple sign language video sequences containing the vocabulary of interest, which is robust to the variations produced by adjacent signs.
Proceedings Article

Relational entropy-based saliency detection in images and videos

TL;DR: This paper employs an efficient technique for calculating the Rényi entropy of the probabilistic relational distributions using Parzen window weighted samples, thus eliminating the need for constructing intermediate histogram representations.
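The histogram-free trick referred to here is standard for Rényi quadratic entropy: with a Gaussian Parzen-window density estimate, the integral of the squared density has a closed form as a pairwise sum of Gaussians over the samples. A minimal sketch of that estimator (generic samples, not the paper's relational distributions; `sigma` is an assumed bandwidth):

```python
import numpy as np

def renyi_quadratic_entropy(samples, sigma=0.1):
    """Rényi quadratic entropy H2 = -log ∫ p(x)^2 dx, with p estimated
    by Gaussian Parzen windows of bandwidth sigma.

    The integral then reduces to (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2*I),
    so no intermediate histogram is ever built.
    """
    x = np.asarray(samples, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]      # pairwise sample differences
    sq = np.sum(diff ** 2, axis=-1)           # squared Euclidean distances
    var = 2.0 * sigma ** 2                    # convolved kernel variance
    g = np.exp(-sq / (2.0 * var)) / ((2.0 * np.pi * var) ** (d / 2))
    info_potential = g.mean()                 # (1/N^2) * sum_ij G(...)
    return -np.log(info_potential)
```

Tightly clustered samples yield a large information potential and hence low entropy; widely spread samples yield higher entropy, which is what a saliency measure exploits.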
Book Chapter

Scene-Dependent Intention Recognition for Task Communication with Reduced Human-Robot Interaction

TL;DR: This work proposes an intention recognition framework, based on a Markov model formulation entitled Object-Action Intention Networks, that is appropriate for persons with limited physical capabilities; after learning, it achieves approximately an 81% overall reduction in interactions compared to other intention recognition approaches.
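The interaction reduction comes from learning which action a user intends for a given scene object, so the robot can propose the likeliest task instead of asking. A toy sketch of that idea (class and method names are assumptions for illustration, not the paper's model; the paper's full Markov network is richer than this conditional table):

```python
from collections import defaultdict

class ObjectActionIntentionNet:
    """Toy sketch: learns P(action | object) from interaction history so
    the likeliest action can be proposed, reducing clarifying questions."""

    def __init__(self, actions):
        self.actions = list(actions)
        # Laplace-smoothed counts of (object -> action) selections, so
        # unseen objects start with a uniform distribution over actions.
        self.counts = defaultdict(lambda: {a: 1 for a in self.actions})

    def predict(self, obj):
        """Most likely intended action for the given scene object."""
        c = self.counts[obj]
        return max(c, key=c.get)

    def update(self, obj, chosen_action):
        """Reinforce the action the user actually confirmed."""
        self.counts[obj][chosen_action] += 1
```

After a few confirmed interactions with, say, a cup, `predict("cup")` returns the user's habitual action directly, and the robot only needs a yes/no confirmation rather than a full menu of choices.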