Author

Rajeev Sharma

Bio: Rajeev Sharma is an academic researcher from Pennsylvania State University. The author has contributed to research in the topics of Gesture and Gesture recognition, has an h-index of 34, and has co-authored 107 publications receiving 5,446 citations. Previous affiliations of Rajeev Sharma include the University of Illinois at Urbana–Champaign.


Papers
Journal ArticleDOI
01 Sep 1993
TL;DR: A probabilistic analysis of the expected travel times of the dynamic paths generated when alarms follow a Poisson distribution with parameter lambda, illustrated for two alternative static paths.
Abstract: The problem of efficient path planning for a point robot in a partially known dynamic environment is considered. The known, static part of the environment consists of point shelters distributed over a planar terrain; the dynamic, unknown part is abstracted in the form of alarms that cause the robot to leave its current (preplanned) path and divert to the nearest shelter. We give a probabilistic analysis of the expected times for the dynamic paths generated when the alarms follow a Poisson distribution with parameter lambda. A case study with three shelters serves to illustrate the dependence of the expected travel times on lambda for two alternative static paths. Two different strategies are presented for the general case of n shelters and shown to be superior for different ranges of values of the alarm rate lambda (very low and very high values, respectively). We also discuss some ways of generalizing the approach and possible applications.
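As a rough illustration of the kind of quantity the paper analyzes (not its closed-form results), the expected travel time under Poisson alarms can be estimated by Monte Carlo simulation. The shelter positions, unit robot speed, and straight-line path below are all hypothetical:

```python
import random

def simulate_travel_time(path_length, shelters, lam, trials=10000):
    """Monte Carlo estimate of expected travel time along a straight
    path (unit speed) when alarms arrive as a Poisson process with
    rate lam: each alarm sends the robot to the nearest shelter and
    back before it resumes the path."""
    total = 0.0
    for _ in range(trials):
        t, pos = 0.0, 0.0
        while pos < path_length:
            gap = random.expovariate(lam)        # time until next alarm
            remaining = path_length - pos
            if gap >= remaining:                 # goal reached alarm-free
                t += remaining
                pos = path_length
            else:
                t += gap
                pos += gap                       # unit speed along path
                detour = min(abs(pos - s) for s in shelters)
                t += 2 * detour                  # to shelter and back
        total += t
    return total / trials

# With no alarms the trip takes exactly path_length; alarms add detours.
random.seed(0)
print(simulate_travel_time(10.0, shelters=[2.0, 5.0, 8.0], lam=0.5))
```

Sweeping `lam` in such a simulation reproduces the qualitative dependence of travel time on the alarm rate that the case study examines.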

11 citations

Journal Article
TL;DR: In this paper, the authors propose a method for real-time augmentation of real videos with 2D and 3D objects by addressing the occlusion issue in a unique fashion.
Abstract: Developing a seamless merging of real and virtual image streams and 3D models is an active research topic in augmented reality (AR). We propose a method for real-time augmentation of real videos with 2D and 3D objects by addressing the occlusion issue in a unique fashion. For virtual planar objects (such as images), the 2D overlay is automatically placed in a planar region selected by the user in the video. The overlay is robust to arbitrary camera motion. Furthermore, a unique background-foreground segmentation algorithm renders this augmented overlay as part of the background if it coincides with foreground objects in the video stream, giving the impression that it is occluded by foreground objects. The proposed technique does not require multiple cameras, camera calibration, use of fiducials, or a structural model of the scene to work. Extending the work further, we propose a novel method of augmentation using trifocal tensors to augment 3D objects in 3D scenes to similar effect, and implement it in real time as a proof of concept. We show several results of the successful working of our algorithm in real-life situations. The technique works on real-time video from a USB camera (Creative Webcam III) on a Pentium IV 1.6 GHz system without any special hardware support.

10 citations

Journal ArticleDOI
TL;DR: An efficient Vector Associative Map (VAM)-based learning scheme is proposed to learn a joint-based representation of 3D targets that is invariant to changing camera configurations for a robotic active vision system.

10 citations

Proceedings ArticleDOI
14 Mar 1998
TL;DR: An interactive evaluation tool is developed, which uses augmentation schemes for visualizing and evaluating assembly sequences and guides the user step-by-step through an assembly sequence to help evaluate the feasibility and efficiency of a particular sequence to assemble a mechanical object from its components.
Abstract: Summary form only given. Augmented reality (AR) provides an intuitive interface to enhance the user's understanding of a scene. We consider the problem of scene augmentation in the context of assembly of a mechanical object. Concepts from robot assembly planning are used to develop a systematic framework for presenting augmentation stimuli for this assembly domain. An interactive evaluation tool is developed, which uses augmentation schemes for visualizing and evaluating assembly sequences. This system also guides the user step-by-step through an assembly sequence. Computer vision provides the sensing mechanism necessary to interpret the assembly scene. The goal of this system is to help evaluate the feasibility and efficiency of a particular sequence to assemble a mechanical object from its components. This is done by guiding the operator through each step in the sequence. The augmentation is provided with the help of a see-through head-mounted display that superimposes 3D graphics over the assembly scene and on nearby computer monitors. We incorporate these ideas into the design of an integrated system that we call AREAS (Augmented Reality System for Evaluating Assembly Sequences) and explore its use for evaluating assembly sequences using the concept of mixed prototyping.

10 citations

Proceedings ArticleDOI
07 Aug 1997
TL;DR: In this article, the authors describe an interactive tool for evaluating assembly sequences using the human-computer interface of augmented reality, which allows a manufacturing engineer to interact with the assembly planner while manipulating the real and virtual prototype components.
Abstract: This paper describes an interactive tool for evaluating assembly sequences using the novel human-computer interface of augmented reality. The goal is to be able to consider various sequencing alternatives at an early stage of the manufacturing design process by manipulating both virtual and real prototype components. Thus the mixed prototyping can enable a better intuition of the different constraints and factors involved in assembly design and evaluation. The assembly evaluation tool is based on two implemented systems for computer vision-based augmented reality and for assembly visualization. These existing systems are integrated into the current design, and would allow a manufacturing engineer to interact with the assembly planner while manipulating the real and virtual prototype components. Information from the assembly planner can be displayed directly superimposed on the real scene using a see-through head-mounted display as well as adjacent computer monitors. The current status of the implementation and the plans for future extensions are outlined.

9 citations


Cited by
Journal ArticleDOI
Ronald Azuma
TL;DR: The characteristics of augmented reality systems are described, including a detailed discussion of the tradeoffs between optical and video blending approaches, and current efforts to overcome these problems are summarized.
Abstract: This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.

8,053 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
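One of the book's central topics, planning under uncertainty with Markov decision processes, can be illustrated by a minimal value-iteration sketch. This is the generic textbook algorithm, not code from the book, and the array layout is an assumption of the example:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[a, s, s'] : probability of reaching s' from s under action a.
    R[s, a]     : immediate reward for taking action a in state s.
    Returns the optimal state values V(s)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_s' P(a, s, s') V(s')
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

The contraction property of the Bellman backup guarantees convergence for any `gamma < 1`, which is why the loop's stopping test is sound.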

6,340 citations

Journal ArticleDOI
01 Oct 1996
TL;DR: This article provides a tutorial introduction to visual servo control of robotic manipulators by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process.
Abstract: This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.
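The image-based class of controllers in the tutorial's taxonomy is commonly written as v = -gain * pinv(L) * (s - s*), where L is the interaction matrix of the point features. A minimal sketch under the standard assumptions (normalized image coordinates, known feature depths; the function names are invented for the example):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image
    point (x, y) at depth Z, relating camera velocity to the
    point's image-plane velocity."""
    return np.array([
        [-1 / Z, 0,      x / Z, x * y,        -(1 + x * x),  y],
        [0,      -1 / Z, y / Z, 1 + y * y,    -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Image-based visual servo law: camera velocity (v, omega) that
    drives the feature error s - s* toward zero."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

With three or more non-degenerate points the stacked matrix constrains all six velocity components, which is why such controllers typically track at least three features.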

3,619 citations

Book
01 Jan 2006
TL;DR: A textbook treatment of robot modeling and control, covering rigid motions and homogeneous transformations, forward and inverse kinematics, the velocity Jacobian, dynamics, and vision-based control of robot manipulators.
Abstract: Preface. 1. Introduction. 2. Rigid Motions and Homogeneous Transformations. 3. Forward and Inverse Kinematics. 4. Velocity Kinematics-The Jacobian. 5. Path and Trajectory Planning. 6. Independent Joint Control. 7. Dynamics. 8. Multivariable Control. 9. Force Control. 10. Geometric Nonlinear Control. 11. Computer Vision. 12. Vision-Based Control. Appendix A: Trigonometry. Appendix B: Linear Algebra. Appendix C: Dynamical Systems. Appendix D: Lyapunov Stability. Index.
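The velocity-kinematics chapter centers on the Jacobian. For a planar two-link arm, a standard worked example in such texts (the code itself is not from the book), both the forward kinematics and the Jacobian have closed forms:

```python
import numpy as np

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link arm with link
    lengths l1, l2 and joint angles theta1, theta2."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def jacobian_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Analytic velocity Jacobian d(position)/d(theta) of the same arm:
    column j is the end-effector velocity per unit velocity of joint j."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])
```

The same matrix reappears in later chapters: its transpose maps end-effector forces to joint torques, and its singularities mark configurations where the arm loses a direction of motion.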

3,100 citations

Journal ArticleDOI
TL;DR: The context for socially interactive robots is discussed, emphasizing the relationship to other research fields and the different forms of “social robots”, and a taxonomy of design methods and system components used to build socially interactive Robots is presented.

2,869 citations