
Eric Martinson

Researcher at Toyota

Publications: 58
Citations: 960

Eric Martinson is an academic researcher from Toyota. The author has contributed to research in topics including mobile robots and robotics. The author has an h-index of 18 and has co-authored 58 publications receiving 905 citations. Previous affiliations of Eric Martinson include the Georgia Institute of Technology and General Motors.

Papers
Patent

Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network (DCNN) is used to determine the class of at least a portion of the image data based on a combination of a first likelihood score and a second likelihood score.
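The fusion step described above can be sketched as a simple per-class combination of the two detectors' likelihood scores. This is an illustrative sketch only: the function name, the class labels, and the weighted-sum fusion rule are assumptions, not details from the patent.

```python
# Hypothetical sketch: fuse per-class likelihood scores from a layer-based
# detector and a DCNN, then pick the highest-scoring class.
# The weighted-sum rule and all names here are illustrative assumptions.

def fuse_scores(layer_scores, dcnn_scores, w=0.5):
    """Combine per-class likelihoods from two detectors; return best class."""
    classes = layer_scores.keys() & dcnn_scores.keys()
    fused = {c: w * layer_scores[c] + (1 - w) * dcnn_scores[c]
             for c in classes}
    return max(fused, key=fused.get), fused

# Example: the two detectors disagree; fusion resolves the decision.
best, fused = fuse_scores({"car": 0.7, "pedestrian": 0.2},
                          {"car": 0.6, "pedestrian": 0.9})
```

In this toy example the fused score for "car" (0.65) edges out "pedestrian" (0.55), so the fused decision follows the detector pair rather than either detector alone.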
Posted Content

Driver Action Prediction Using Deep (Bidirectional) Recurrent Neural Network.

TL;DR: The proposed driver action prediction system incorporates camera-based knowledge of the driving environment and of the driver, in addition to traditional vehicle dynamics. A deep bidirectional recurrent neural network learns the correlation between sensory inputs and impending driver behavior, achieving accurate, long-horizon action prediction.
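The architecture above can be sketched as a bidirectional recurrent pass over a sequence of fused sensor features, with a read-out over both directions' final states. All shapes, the tanh cells, the random weights, and the action set are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

# Minimal sketch of a bidirectional recurrent layer over a sequence of
# fused sensor features (camera + vehicle dynamics), with a softmax
# read-out over candidate driver actions. Everything here is illustrative.

rng = np.random.default_rng(0)

def rnn_pass(x_seq, Wx, Wh, reverse=False):
    """Run a simple tanh RNN over the sequence; return the final hidden state."""
    h = np.zeros(Wh.shape[0])
    steps = reversed(x_seq) if reverse else x_seq
    for x in steps:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def predict_action(x_seq, params):
    Wx_f, Wh_f, Wx_b, Wh_b, Wo = params
    h_f = rnn_pass(x_seq, Wx_f, Wh_f)                # forward direction
    h_b = rnn_pass(x_seq, Wx_b, Wh_b, reverse=True)  # backward direction
    logits = Wo @ np.concatenate([h_f, h_b])         # read-out over both
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # action probabilities

d_in, d_h, n_actions = 6, 8, 4  # e.g. brake / accelerate / turn / lane change
params = (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=(n_actions, 2 * d_h)))
x_seq = rng.normal(size=(10, d_in))  # 10 time steps of fused features
probs = predict_action(x_seq, params)
```

The bidirectional design lets the read-out condition on the whole observed window rather than only the most recent step, which is what makes longer prediction horizons feasible.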
Patent

Method and system for training a robot using human-assisted task demonstration

TL;DR: A method for training a robot to execute a robotic task in a work environment involves moving the robot through multiple states of the task across its configuration space and recording motor schemas that describe the robot's sequence of behaviors.
Journal ArticleDOI

Vibrotactile Guidance for Wayfinding of Blind Walkers

TL;DR: This interface enables blind walkers to receive haptic directional instructions along complex paths without negatively impacting their ability to listen to and perceive the environment, as some auditory directional instructions do.
Proceedings ArticleDOI

Using vision, acoustics, and natural language for disambiguation

TL;DR: This paper describes the system and the integration of its modules prior to future testing. Visual, acoustic, and linguistic inputs are used in various combinations to solve problems such as the disambiguation of referents, the localization of human speakers, and the determination of the source of utterances and the appropriateness of responses when humans and robots interact.