scispace - formally typeset

Daniel S. Simpkins

Researcher at Wilmington University

Publications: 35
Citations: 2,474

Daniel S. Simpkins is an academic researcher at Wilmington University. His research focuses on pointing devices and user interfaces. He has an h-index of 19 and has co-authored 35 publications receiving 2,474 citations.

Papers
Patent

A control framework with a zoomable graphical user interface for organizing, selecting and launching media items

TL;DR: In this paper, a control framework for organizing, selecting, and launching media items is presented. It combines graphical user interfaces with an optional free space control device that captures the basic primitives of point, click, scroll, hover, and zoom, permitting easy and rapid selection of media items, e.g., movies and songs, from large or small collections.
Patent

Methods and devices for identifying users based on tremor

TL;DR: In this paper, a 3D pointing device that uses hand tremor as an input is presented: one or more sensors within the handheld device detect a user's tremor, and the device identifies the user based on the detected tremor pattern.
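The idea of identifying a user from tremor can be sketched as a small enrollment-and-matching loop: extract simple features from a window of sensor samples and match them against stored per-user templates. The features used here (amplitude and zero-crossing rate) and the nearest-neighbor matching are illustrative simplifications, not the patent's actual method.

```python
import math

def tremor_features(samples):
    """Extract two toy features from a window of accelerometer samples:
    amplitude (standard deviation) and zero-crossing rate of the
    mean-removed signal. A real device would use richer spectral
    features; this is a hypothetical simplification."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    amplitude = math.sqrt(sum(c * c for c in centered) / n)
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    return (amplitude, crossings / (n - 1))

def identify(sample_window, enrolled):
    """Return the enrolled user whose stored feature template is
    nearest (Euclidean distance) to the new window's features."""
    f = tremor_features(sample_window)
    return min(enrolled, key=lambda user: math.dist(f, enrolled[user]))

# Enrollment: each user's template comes from earlier capture windows
# (the sample values here are illustrative, not from the patent).
enrolled = {
    "alice": tremor_features([0.0, 0.5, -0.5, 0.6, -0.4, 0.5, -0.5, 0.4]),
    "bob":   tremor_features([0.0, 2.0, -1.8, 2.1, -2.0, 1.9, -2.1, 2.0]),
}

window = [0.1, 1.9, -2.0, 2.0, -1.9, 2.0, -2.0, 1.8]
print(identify(window, enrolled))  # → bob
```

In practice a device would accumulate many windows per user and use a trained classifier rather than a single nearest-neighbor template, but the enrollment/match structure is the same.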
Patent

User interface devices and methods employing accelerometers

TL;DR: In this article, a free space pointing device employing a plurality of accelerometers is described.
Patent

Systems and methods for resolution consistent semantic zooming

TL;DR: In this article, the authors present a control framework for organizing, selecting, and launching media items. It combines graphical user interfaces with an optional free space control device that captures the basic control primitives of point, click, scroll, hover, and zoom, permitting easy and rapid selection of media items, e.g., movies and songs, from large or small collections.
Patent

3D pointing devices and methods

TL;DR: In this paper, a 3D pointing device that uses at least one sensor to detect motion of the handheld device is presented; the detected motion can then be mapped into a desired output, e.g., cursor movement.
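The motion-to-cursor mapping can be sketched as scaling angular rates into pixel deltas, with a small deadband to suppress sensor noise. The gain, sample period, and deadband values below are illustrative assumptions, not parameters from the patent.

```python
def angular_rate_to_cursor(yaw_rate, pitch_rate, gain=400.0, dt=0.01, deadband=0.02):
    """Map detected rotational motion (rad/s) to cursor deltas (pixels)
    for one sample period. Rates below the deadband threshold are
    treated as noise and produce no movement."""
    def apply(rate):
        if abs(rate) < deadband:      # ignore noise / unintended tremor
            return 0
        return int(round(rate * gain * dt))
    return apply(yaw_rate), apply(pitch_rate)

print(angular_rate_to_cursor(0.5, -0.25))  # → (2, -1)
```

A real device layers more on top of this, e.g., removing the tremor component discussed in the tremor-identification patent and compensating for device tilt before applying the gain.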