Author

Rajeev Sharma

Bio: Rajeev Sharma is an academic researcher from Pennsylvania State University. The author has contributed to research in the topics of gesture and gesture recognition, has an h-index of 34, and has co-authored 107 publications receiving 5,446 citations. Previous affiliations of Rajeev Sharma include the University of Illinois at Urbana–Champaign.


Papers
Proceedings Article
14 Oct 1996
TL;DR: In this paper, the authors describe the use of visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3D display.
Abstract: In recent years there has been tremendous progress in 3-D immersive display and virtual reality (VR) technologies. Scientific visualization of data is one of many applications that have benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR visual computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display, together with a set of speech commands. We concentrate on the visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

19 citations

Proceedings ArticleDOI
21 May 1995
TL;DR: A motion planning problem in this framework is formulated as the design of a stochastic optimal controller, which provides a motion command for the robot for each contingency that it could be confronted with.
Abstract: Presents a framework for analyzing and determining motion plans for a robot that operates in an environment that changes over time in an uncertain manner. The authors first classify sources of uncertainty in motion planning into four categories, and argue that the framework addressed in this paper characterizes an important, yet little-explored category. The authors treat the changing environment in a flexible manner by combining traditional configuration space concepts with a Markov process that models the environment. For this context, the authors then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it could be confronted with. The authors allow the specification of a desired performance criterion, such as time or distance, and the goal is to determine a motion strategy that is optimal with respect to that criterion. A motion planning problem in this framework is formulated as the design of a stochastic optimal controller.
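The abstract's core idea — model the changing environment as a Markov process and compute a motion strategy that assigns a command to every contingency, optimal for a cost such as time — is essentially a Markov decision process solved by dynamic programming. The following is a minimal sketch on a hypothetical toy environment; the states, transitions, and costs are illustrative assumptions, not taken from the paper.

```python
# Toy example: the environment is a Markov process; a "motion strategy"
# is a policy mapping every state the robot could face to a command,
# minimizing expected time to the goal. All values here are hypothetical.
states = ["start", "blocked", "clear", "goal"]
actions = ["wait", "move"]

# P[(state, action)] -> list of (next_state, probability); goal absorbs.
P = {
    ("start", "move"):   [("blocked", 0.4), ("clear", 0.6)],
    ("start", "wait"):   [("start", 1.0)],
    ("blocked", "wait"): [("blocked", 0.5), ("clear", 0.5)],
    ("blocked", "move"): [("blocked", 1.0)],
    ("clear", "move"):   [("goal", 1.0)],
    ("clear", "wait"):   [("clear", 1.0)],
    ("goal", "move"):    [("goal", 1.0)],
    ("goal", "wait"):    [("goal", 1.0)],
}

def value_iteration(step_cost=1.0, tol=1e-9):
    """Expected time-to-goal per state, plus the optimal motion strategy."""
    V = {s: 0.0 for s in states}
    while True:
        delta, newV = 0.0, {}
        for s in states:
            if s == "goal":
                newV[s] = 0.0
                continue
            newV[s] = min(step_cost + sum(p * V[t] for t, p in P[(s, a)])
                          for a in actions)
            delta = max(delta, abs(newV[s] - V[s]))
        V = newV
        if delta < tol:
            # Extract the motion command for every contingency.
            policy = {s: min(actions,
                             key=lambda a: step_cost
                             + sum(p * V[t] for t, p in P[(s, a)]))
                      for s in states if s != "goal"}
            return V, policy

V, policy = value_iteration()
print(policy)  # one motion command per state the robot could face
```

Note how the optimal strategy is state-dependent: when blocked, waiting (letting the environment change) beats moving, which is exactly the kind of contingency-aware plan the framework targets.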

19 citations

Journal ArticleDOI
TL;DR: A novel neural network, called the self-organized invertible map (SOIM), capable of learning many-to-one functional mappings in a self-organized and online fashion, is proposed; a proof of convergence and conditions for invariance are derived and then experimentally verified using an active vision system.
Abstract: We propose a novel neural network, called the self-organized invertible map (SOIM), that is capable of learning many-to-one functionals mappings in a self-organized and online fashion. The design and performance of the SOIM are highlighted by learning a many-to-one functional mapping that exists in active vision for spatial representation of three-dimensional point targets. The learned spatial representation is invariant to changing camera configurations. The SOIM also possesses an invertible property that can be exploited for active vision. An efficient and experimentally feasible method was devised for learning this representation on a real active vision system. The proof of convergence during learning as well as conditions for invariance of the learned spatial representation are derived and then experimentally verified using the active vision system. We also demonstrate various active vision applications that benefit from the properties of the mapping learned by SOIM.
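The SOIM architecture itself is not specified in this abstract. As a loose illustration of the general idea it builds on — online, self-organized learning of a mapping where many inputs quantize to one unit — here is a standard 1-D Kohonen self-organizing map; this is not the SOIM, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 20
weights = rng.uniform(0.0, 1.0, size=n_units)  # 1-D codebook of units

def som_step(x, t, lr0=0.5, sigma0=3.0, tau=200.0):
    """One online update: find the best-matching unit, pull its neighbors."""
    global weights
    bmu = int(np.argmin(np.abs(weights - x)))          # winning unit
    lr = lr0 * np.exp(-t / tau)                        # decaying learning rate
    sigma = max(sigma0 * np.exp(-t / tau), 0.5)        # shrinking neighborhood
    idx = np.arange(n_units)
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    weights += lr * h * (x - weights)

# Train online on scalar samples from [0, 1]; many inputs map to the
# same winning unit, so the learned quantization is many-to-one.
for t in range(2000):
    som_step(rng.uniform(0.0, 1.0), t)

print(np.round(np.sort(weights), 2))  # codebook spread over [0, 1]
```

The online, sample-by-sample update (no stored batch) is the property the abstract emphasizes; invertibility and the invariance proofs are specific to the SOIM and are not reproduced here.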

19 citations

Proceedings ArticleDOI
24 Aug 2014
TL;DR: The performance of Neovision2 neuromorphic-vision systems in detecting objects in video was measured using a set of annotated clips, and a comparison with computer-vision-based baseline algorithms is described.
Abstract: The U.S. Defense Advanced Research Projects Agency's (DARPA) Neovision2 program aims to develop artificial vision systems based on the design principles employed by mammalian vision systems. Three such algorithms are briefly described in this paper. These neuromorphic-vision systems' performance in detecting objects in video was measured using a set of annotated clips. This paper describes the results of these evaluations, including the data domains, metrics, methodologies, performance over a range of operating points, and a comparison with computer-vision-based baseline algorithms.
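The paper's specific metrics are not given in this abstract, but evaluations of detectors against annotated clips commonly rest on intersection-over-union (IoU) matching between detections and annotations. The sketch below shows that ingredient; the boxes and threshold are illustrative, not taken from the Neovision2 evaluation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(dets, truths, thresh=0.5):
    """Greedily match detections to annotations; count TP, FP, FN."""
    unmatched = list(truths)
    tp = 0
    for d in dets:
        best = max(unmatched, key=lambda t: iou(d, t), default=None)
        if best is not None and iou(d, best) >= thresh:
            unmatched.remove(best)   # each annotation matches at most once
            tp += 1
    return tp, len(dets) - tp, len(unmatched)  # TP, FP, FN

print(match_detections(dets=[(0, 0, 10, 10), (50, 50, 60, 60)],
                       truths=[(1, 1, 11, 11)]))  # → (1, 1, 0)
```

Sweeping a detector's confidence threshold and recomputing these counts yields the "performance over a range of operating points" the abstract refers to.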

18 citations

01 Jan 2001
TL;DR: This work shows how kinematic structure can be inferred from monocular views without making any a priori assumptions about the scene except that it consists of piecewise rigid segments constrained by jointed motion.
Abstract: We extract and initialize kinematic models from monocular visual data from the ground up, without any manual initialization, adaptation, or prior model knowledge. Visual analysis, classification, and tracking of articulated motion are challenging due to the difficulty of separating noise and spurious variability, caused by appearance, size, and viewpoint fluctuations, from the task-relevant variations. By incorporating powerful domain knowledge, model-based approaches are able to overcome this problem to a great extent and are actively explored by many researchers. However, model acquisition, initialization, and adaptation are still relatively under-investigated problems. In this work we show how kinematic structure can be inferred from monocular views without making any a priori assumptions about the scene, except that it consists of piecewise rigid segments constrained by jointed motion. The efficacy of the method is demonstrated on synthetic as well as natural image sequences.

18 citations


Cited by
Journal ArticleDOI
Ronald Azuma
TL;DR: The characteristics of augmented reality systems are described, including a detailed discussion of the tradeoffs between optical and video blending approaches, and current efforts to overcome these problems are summarized.
Abstract: This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.

8,053 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations

Journal ArticleDOI
01 Oct 1996
TL;DR: This article provides a tutorial introduction to visual servo control of robotic manipulators, reviewing the prerequisite topics from robotics and computer vision: coordinate transformations, velocity representation, and the geometric aspects of the image formation process.
Abstract: This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
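The image-based class of visual servo systems the tutorial describes drives the camera with a velocity computed from the image-feature error, typically v = -λ L⁺(s - s*), where L is the interaction matrix of the features. A minimal sketch for point features follows; the gain, feature positions, and depths are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z,
    relating the 2-D feature velocity to the 6-DOF camera twist."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,          -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y,    -x * y,         -x],
    ])

def ibvs_velocity(points, goals, depths, lam=0.5):
    """Stack per-point interaction matrices and return the camera twist
    v = -lam * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(goals)).ravel()  # feature error
    return -lam * np.linalg.pinv(L) @ e

v = ibvs_velocity(points=[(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)],
                  goals=[(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)],
                  depths=[1.0, 1.0, 1.0])
print(np.round(v, 3))  # (vx, vy, vz, wx, wy, wz) camera twist
```

Note the depth Z appearing in the translational columns: image-based control still needs depth estimates, one reason the tutorial also treats position-based servoing and feature tracking.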

3,619 citations

Book
01 Jan 2006
TL;DR: This textbook develops robot modeling and control from rigid motions and homogeneous transformations through kinematics and the Jacobian to dynamics, force control, and vision-based control of robot manipulators.
Abstract: Preface. 1. Introduction. 2. Rigid Motions and Homogeneous Transformations. 3. Forward and Inverse Kinematics. 4. Velocity Kinematics-The Jacobian. 5. Path and Trajectory Planning. 6. Independent Joint Control. 7. Dynamics. 8. Multivariable Control. 9. Force Control. 10. Geometric Nonlinear Control. 11. Computer Vision. 12. Vision-Based Control. Appendix A: Trigonometry. Appendix B: Linear Algebra. Appendix C: Dynamical Systems. Appendix D: Lyapunov Stability. Index.

3,100 citations

Journal ArticleDOI
TL;DR: The context for socially interactive robots is discussed, emphasizing the relationship to other research fields and the different forms of “social robots”, and a taxonomy of design methods and system components used to build socially interactive robots is presented.

2,869 citations