
Showing papers by "Rajeev Sharma published in 1997"


Journal ArticleDOI
TL;DR: The literature on computer vision-based analysis and interpretation of hand gestures for human-computer interaction is surveyed, organized by the method used for modeling, analyzing, and recognizing gestures, and contrasting 3D hand models with appearance-based models.
Abstract: The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.

1,973 citations


Journal ArticleDOI
01 Aug 1997
TL;DR: A quantitative measure of motion perceptibility is derived, which relates the magnitude of the rate of change in an object's position to the magnitude of the rate of change in the image of that object, and is combined with the traditional notion of manipulability into a composite perceptibility/manipulability measure.
Abstract: We address the ability of a computer vision system to perceive the motion of an object (possibly a robot manipulator) in its field of view. We derive a quantitative measure of motion perceptibility, which relates the magnitude of the rate of change in an object's position to the magnitude of the rate of change in the image of that object. We then show how motion perceptibility can be combined with the traditional notion of manipulability, into a composite perceptibility/manipulability measure. We demonstrate how this composite measure may be applied to a number of different problems involving relative hand/eye positioning and control.

80 citations
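
Illustrative sketch (not code from the paper): by analogy with Yoshikawa's manipulability, a perceptibility measure can be computed from the image Jacobian that maps object velocity to image-feature velocity; the composite measure below chains the robot and image Jacobians, though the paper's exact composition may differ. All names are hypothetical.

```python
import numpy as np

def motion_perceptibility(Jv: np.ndarray) -> float:
    """Perceptibility of object motion, by analogy with manipulability.

    Jv is the image Jacobian mapping object velocity (R^m) to image-feature
    velocity (R^k), k >= m; larger values mean a given object motion produces
    a larger, more perceivable image motion.
    """
    return float(np.sqrt(np.linalg.det(Jv.T @ Jv)))

def manipulability(Jr: np.ndarray) -> float:
    """Yoshikawa's manipulability of the robot Jacobian Jr (n joints)."""
    return float(np.sqrt(np.linalg.det(Jr @ Jr.T)))

def composite_measure(Jv: np.ndarray, Jr: np.ndarray) -> float:
    """One plausible composite: perceptibility of the image motion that
    joint motion induces, via the chained Jacobian Jv @ Jr (assumption)."""
    J = Jv @ Jr
    return float(np.sqrt(np.linalg.det(J.T @ J)))
```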


Journal ArticleDOI
TL;DR: A framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable is presented, which leads to a dynamic programming-based algorithm for determining optimal strategies.
Abstract: We present a framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable. We first classify sources of motion-planning uncertainty into four categories, and argue that the problems addressed in this article belong to a fundamental category that has received little attention. We treat the changing environment in a flexible manner by combining traditional configuration-space concepts with a Markov process that models the environment. For this context, we then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it might confront. We allow the specification of a desired performance criterion, such as time or distance, and determine a motion strategy that is optimal with respect to that criterion. We demonstrate the breadth of our framework by applying it to a variety of motion-planning problems. Examples are computed for problems that involve a changing environment.

67 citations
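
A minimal sketch of the dynamic-programming idea, under loud assumptions: states pair a robot cell with an environment mode, the environment evolves as a Markov chain, and value iteration yields a motion strategy, i.e. a command for every contingency. The toy grid, obstacle, and transition matrix are invented for illustration and are not the paper's formulation.

```python
import numpy as np

N_Q, N_E = 10, 3                    # robot cells x environment modes (toy)
ACTIONS = [-1, 0, +1]               # move left / stay / move right
GOAL_Q = 9
P_ENV = np.array([[0.8, 0.2, 0.0],  # hypothetical Markov model of environment
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])

def blocked(q, e):
    """Hypothetical moving obstacle: cell 5 is occupied in environment mode 2."""
    return q == 5 and e == 2

def value_iteration(gamma=0.95, iters=200):
    V = np.zeros((N_Q, N_E))
    policy = np.zeros((N_Q, N_E), dtype=int)
    for _ in range(iters):
        for q in range(N_Q):
            for e in range(N_E):
                best = -np.inf
                for ai, a in enumerate(ACTIONS):
                    q2 = min(max(q + a, 0), N_Q - 1)
                    if blocked(q2, e):
                        q2 = q                         # motion into obstacle fails
                    r = 0.0 if q == GOAL_Q else -1.0   # time-based criterion
                    val = r + gamma * (P_ENV[e] @ V[q2])  # expectation over env
                    if val > best:
                        best, policy[q, e] = val, ai
                V[q, e] = best
    return V, policy  # policy gives a motion command for each contingency
```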


Journal ArticleDOI
TL;DR: An experimental prototype system, VEGAS (Visual Enhancement for Guiding Assembly Sequences), is described that implements some of the AR concepts for guiding assembly using computer vision, and concepts from robot assembly planning are used to develop a systematic framework for presenting augmentation stimuli for this assembly domain.
Abstract: Augmented reality (AR) has the goal of enhancing a person's perception of the surrounding world, unlike virtual reality (VR), which aims at replacing the perception of the world with an artificial one. An important issue in AR is making the virtual world sensitive to the current state of the surrounding real world as the user interacts with it. For providing the appropriate augmentation stimulus at the right position and time, the system needs some sensor to interpret the surrounding scene. Computer vision holds great potential in providing the necessary interpretation of the scene. While a computer vision-based general interpretation of a scene is extremely difficult, the constraints from the assembly domain and a specific marker-based coding scheme are used to develop an efficient and practical solution. We consider the problem of scene augmentation in the context of a human engaged in assembling a mechanical object from its components. Concepts from robot assembly planning are used to develop a systematic framework for presenting augmentation stimuli for this assembly domain. An experimental prototype system, VEGAS (Visual Enhancement for Guiding Assembly Sequences), is described that implements some of the AR concepts for guiding assembly using computer vision.

55 citations
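
The paper's marker-based coding scheme is not reproduced here; as a rough modern analogue, once a known square marker's corners are detected, the camera pose can be recovered with OpenCV's solvePnP and augmentation graphics projected onto the scene. The calibration matrix and marker size below are hypothetical placeholders.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320],   # hypothetical camera intrinsics
              [0, 800.0, 240],
              [0, 0, 1]])
DIST = np.zeros(5)               # assume negligible lens distortion
SIDE = 0.05                      # marker side length in metres (assumed)
MARKER_3D = np.array([[0, 0, 0], [SIDE, 0, 0],
                      [SIDE, SIDE, 0], [0, SIDE, 0]], dtype=np.float64)

def marker_pose(corners_2d: np.ndarray):
    """Recover the marker's pose in the camera frame from its four
    detected corner pixels (the detection step itself is assumed)."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_3D,
                                  corners_2d.astype(np.float64), K, DIST)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec

def project_overlay(points_3d: np.ndarray, rvec, tvec) -> np.ndarray:
    """Project 3D augmentation geometry into the image for overlay drawing."""
    pts, _ = cv2.projectPoints(points_3d, rvec, tvec, K, DIST)
    return pts.reshape(-1, 2)
```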


Journal ArticleDOI
01 Feb 1997
TL;DR: A motion planning framework is proposed that achieves this with the help of a space called the perceptual control manifold (PCM) defined on the product of the robot configuration space and an image-based feature space, showing how the task of intercepting a moving target can be mapped to the PCM.
Abstract: Visual feedback can play a crucial role in a dynamic robotic task such as the interception of a moving target. To utilize the feedback effectively, there is a need to develop robot motion planning techniques that also take into account properties of the sensed data. We propose a motion planning framework that achieves this with the help of a space called the perceptual control manifold (PCM) defined on the product of the robot configuration space and an image-based feature space. We show how the task of intercepting a moving target can be mapped to the PCM, using image feature trajectories of the robot end-effector and the moving target. This leads to the generation of motion plans that satisfy various constraints and optimality criteria derived from the robot kinematics, the control system, and the sensing mechanism. Specific interception tasks are analyzed to illustrate this vision-based planning technique.

42 citations
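
A hedged sketch of the PCM idea: the manifold couples joint configurations with the image features they produce, so interception reduces to finding a PCM point whose feature value meets the target's predicted feature trajectory in time. The forward map and thresholds below are invented placeholders, not the paper's model.

```python
import numpy as np

def features_of(q: np.ndarray) -> np.ndarray:
    """Placeholder forward map: image features of the end-effector at joint
    configuration q (in practice, camera projection of forward kinematics)."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def pcm_samples(n=2000, rng=None):
    """Sample the perceptual control manifold as (q, f(q)) pairs."""
    rng = rng or np.random.default_rng(0)
    return [(q, features_of(q))
            for q in rng.uniform(-np.pi, np.pi, size=(n, 2))]

def plan_interception(target_traj, pcm, q_now, speed=0.1, tol=0.05):
    """Earliest time at which some configuration reachable under a joint
    speed limit meets the target's predicted image-feature position."""
    for t, f_target in enumerate(target_traj):
        for q, f in pcm:
            reachable = np.linalg.norm(q - q_now) <= speed * (t + 1)
            if reachable and np.linalg.norm(f - f_target) < tol:
                return t, q          # intercept: move to q within t steps
    return None                      # no feasible interception found
```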


Proceedings ArticleDOI
20 Apr 1997
TL;DR: This work proposes the use of additional "exploratory motion" to considerably improve the estimation of the image Jacobian and studies the role of such exploratory motion in a visual servoing task.
Abstract: The calibration requirements for visual servoing can make it difficult to apply in many real-world situations. One approach to image-based visual servoing without calibration is to dynamically estimate the image Jacobian and use it as the basis for control. However, with the normal motion of a robot towards the goal, the estimation of the image Jacobian deteriorates over time. We propose the use of additional "exploratory motion" to considerably improve the estimation of the image Jacobian. We study the role of such exploratory motion in a visual servoing task. Simulations and experiments with a 6-DOF robot are used to verify the practical feasibility of the approach.

39 citations
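
The paper's exact estimator is not given here; a standard way to estimate the image Jacobian online in uncalibrated visual servoing is a Broyden rank-one update from observed joint and feature displacements, and the sketch below adds a small random exploratory term to the servo command so the displacement directions stay diverse as the robot converges. Gains and magnitudes are illustrative.

```python
import numpy as np

def broyden_update(J, dq, df, alpha=1.0):
    """Rank-one (Broyden) update of the estimated image Jacobian J from an
    executed joint displacement dq and the observed feature change df."""
    denom = float(dq @ dq)
    if denom < 1e-12:
        return J                     # no motion, nothing to learn
    return J + alpha * np.outer(df - J @ dq, dq) / denom

def servo_step(J, f, f_goal, gain=0.3, explore=0.02, rng=None):
    """One control step: Jacobian-pseudoinverse servoing toward f_goal plus
    a small exploratory motion that keeps the estimate well-conditioned."""
    rng = rng or np.random.default_rng()
    dq_servo = gain * np.linalg.pinv(J) @ (f_goal - f)
    dq_explore = explore * rng.standard_normal(J.shape[1])
    return dq_servo + dq_explore
```

After executing the returned displacement and measuring the resulting feature change, broyden_update refreshes J for the next step.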


Journal ArticleDOI
TL;DR: In this article, a framework for sensor-based robot motion planning that uses learning to handle arbitrarily configured sensors and robots is presented, where the topology-representing-network algorithm is employed to learn a representation of the perceptual control manifold.
Abstract: Integration of sensing and motion planning plays a crucial role in autonomous robot operation. We present a framework for sensor-based robot motion planning that uses learning to handle arbitrarily configured sensors and robots. The theoretical basis of this approach is the concept of the perceptual control manifold that extends the notion of the robot configuration space to include sensor space. To overcome modeling uncertainty, the topology-representing-network algorithm is employed to learn a representation of the perceptual control manifold. By exploiting the topology-preserving features of the neural network, a diffusion-based path planning strategy leads to flexible obstacle avoidance. The practical feasibility of the approach is demonstrated on a pneumatically driven robot arm (SoftArm) using visual sensing.

38 citations
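
A minimal sketch of the diffusion strategy on a learned graph, with the learned topology-representing network replaced by a plain adjacency structure: activation spreads from the goal node while nodes flagged as obstacles stay at zero, and following the activation gradient yields an obstacle-avoiding path. All data structures are stand-ins for the learned manifold representation.

```python
def diffuse(adjacency, goal, blocked, iters=200, decay=0.9):
    """Diffusion-based planning: adjacency maps node -> neighbours (the
    learned TRN edges); blocked nodes are currently occupied by obstacles.
    Returns an activation per node whose gradient leads to the goal."""
    act = {n: 0.0 for n in adjacency}
    act[goal] = 1.0
    for _ in range(iters):
        new = {}
        for n, nbrs in adjacency.items():
            if n == goal:
                new[n] = 1.0
            elif n in blocked:
                new[n] = 0.0          # obstacles absorb activation
            else:
                new[n] = decay * max(act[m] for m in nbrs)
        act = new
    return act

def next_node(act, adjacency, current):
    """Gradient-following step: move to the most activated neighbour."""
    return max(adjacency[current], key=lambda m: act[m])
```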


Proceedings ArticleDOI
14 Jul 1997
TL;DR: A visual computing environment is being developed which permits interactive modeling of biopolymers by linking a 3D molecular graphics program with an efficient molecular dynamics simulation program executed on remote high-performance parallel computers.
Abstract: Knowledge of the complex molecular structures of living cells is being accumulated at a tremendous rate. Key technologies enabling this success have been high-performance computing and powerful molecular graphics applications, but the technology is beginning to seriously lag behind the challenges posed by the size and number of new structures and by the emerging opportunities in drug design and genetic engineering. A visual computing environment is being developed which permits interactive modeling of biopolymers by linking a 3D molecular graphics program with an efficient molecular dynamics simulation program executed on remote high-performance parallel computers. The system will be ideally suited for distributed computing environments, utilizing both local 3D graphics facilities and the peak capacity of high-performance computers for the purpose of interactive biomolecular modeling. To create an interactive 3D environment, three input methods will be explored: (1) a six-degree-of-freedom "mouse" for controlling the space shared by the model and the user; (2) voice commands monitored through a microphone and recognized by a speech recognition interface; (3) hand gestures, detected through cameras and interpreted using computer vision techniques. Controlling 3D graphics connected to real-time simulations, and the use of voice with suitable language semantics as well as hand gestures, promise great benefits for many types of problem-solving environments. Our focus on structural biology takes advantage of existing sophisticated software, provides concrete objectives, defines a well-posed domain of tasks, and offers a well-developed vocabulary for spoken communication.

22 citations


Journal ArticleDOI
TL;DR: A novel neural network, called the self-organized invertible map (SOIM), capable of learning many-to-one functional mappings in a self-organized and online fashion is proposed, and its convergence and invariance properties are derived and then experimentally verified using a real active vision system.
Abstract: We propose a novel neural network, called the self-organized invertible map (SOIM), that is capable of learning many-to-one functional mappings in a self-organized and online fashion. The design and performance of the SOIM are highlighted by learning a many-to-one functional mapping that exists in active vision for spatial representation of three-dimensional point targets. The learned spatial representation is invariant to changing camera configurations. The SOIM also possesses an invertible property that can be exploited for active vision. An efficient and experimentally feasible method was devised for learning this representation on a real active vision system. The proof of convergence during learning, as well as conditions for invariance of the learned spatial representation, are derived and then experimentally verified using the active vision system. We also demonstrate various active vision applications that benefit from the properties of the mapping learned by the SOIM.

19 citations
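
The SOIM architecture itself is not reproduced here; as a loose, generic illustration of learning a mapping online with self-organized codebook units, each unit below stores a paired input/output vector, the winner's output serves as the forward prediction, and "inversion" searches units by output instead of input. This is an assumption-laden sketch, not the SOIM.

```python
import numpy as np

class SOMMapper:
    """Generic self-organizing function approximator (NOT the SOIM itself):
    units store paired input/output codebook vectors, trained online."""

    def __init__(self, n_units, dim_in, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal((n_units, dim_in))
        self.w_out = np.zeros((n_units, dim_out))

    def train_step(self, x, y, lr=0.1):
        k = int(np.argmin(np.linalg.norm(self.w_in - x, axis=1)))  # winner
        self.w_in[k] += lr * (x - self.w_in[k])
        self.w_out[k] += lr * (y - self.w_out[k])

    def forward(self, x):
        k = int(np.argmin(np.linalg.norm(self.w_in - x, axis=1)))
        return self.w_out[k]

    def inverse(self, y):
        """For a many-to-one map: return one input whose unit output is
        closest to y (one representative of the preimage)."""
        k = int(np.argmin(np.linalg.norm(self.w_out - y, axis=1)))
        return self.w_in[k]
```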


Journal ArticleDOI
TL;DR: This work presents a two-stage design of a neurocontroller for the execution of saccades, in which explicit calibration of the kinematic and imaging parameters of the system is replaced with a self-organized learning scheme, thereby providing a flexible and efficient saccade control strategy.
Abstract: An important mechanism in active vision is that of fixating to different targets of interest in a scene. We present a two-stage design of a neurocontroller for the execution of saccades. The first stage is an "open loop" mode based on a learned spatial representation while the second stage is a closed-loop "visual servoing" mode. Explicit calibration of the kinematic and imaging parameters of the system is replaced with a self-organized learning scheme, thereby providing a flexible and efficient saccade control strategy. Experiments on the University of Illinois Active Vision System (UIAVS) are used to establish the feasibility of this approach.

17 citations
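
A hedged sketch of the two-stage controller: a ballistic open-loop command from a learned spatial map brings the target near the fovea, then a closed-loop proportional visual-servoing stage nulls the residual image error. The callables and gains are hypothetical placeholders for the UIAVS components.

```python
import numpy as np

def saccade(target_obs, learned_map, measure_error, move, tol=2.0, gain=0.5):
    """Two-stage saccade execution.

    learned_map(target_obs) -> approximate motor command (stage 1: open loop,
    from a self-organized spatial representation, no explicit calibration);
    measure_error() -> remaining image-plane error in pixels;
    move(cmd) executes a motor command.  All three are placeholders.
    """
    move(learned_map(target_obs))        # stage 1: ballistic open-loop jump
    err = measure_error()
    while np.linalg.norm(err) > tol:     # stage 2: closed-loop servoing
        move(gain * err)                 # proportional correction
        err = measure_error()
```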


Proceedings ArticleDOI
07 Aug 1997
TL;DR: In this article, the authors describe an interactive tool for evaluating assembly sequences using the human-computer interface of augmented reality, which allows a manufacturing engineer to interact with the assembly planner while manipulating the real and virtual prototype components.
Abstract: This paper describes an interactive tool for evaluating assembly sequences using the novel human-computer interface of augmented reality. The goal is to be able to consider various sequencing alternatives at an early stage of the manufacturing design process by manipulating both virtual and real prototype components. Thus the mixed prototyping can enable a better intuition of the different constraints and factors involved in assembly design and evaluation. The assembly evaluation tool is based on two implemented systems for computer vision-based augmented reality and for assembly visualization. These existing systems are integrated into the current design, and would allow a manufacturing engineer to interact with the assembly planner while manipulating the real and virtual prototype components. Information from the assembly planner can be displayed directly superimposed on the real scene using a see-through head-mounted display as well as adjacent computer monitors. The current status of the implementation and the plans for future extensions are outlined.

Book ChapterDOI
01 Sep 1997
TL;DR: A framework for sensor-based motion planning of robotic manipulators is developed, using the Topology Representing Network algorithm to learn a representation of the Perceptual Control Manifold.
Abstract: The goal of integrating sensors into robot motion planning has incited recent research efforts. The Perceptual Control Manifold serves this goal by extending the notion of the robot configuration space to include sensor space. In this paper, we develop a framework for sensor-based motion planning of robotic manipulators using the Topology Representing Network algorithm to learn a representation of the Perceptual Control Manifold. The topology-preserving features of the neural network lend themselves to yield, after learning, a diffusion-based path planning strategy for flexible obstacle avoidance. We demonstrate the capabilities of topology-preserving maps using an industrial robot simulator and a pneumatically driven robot arm (SoftArm).

Proceedings ArticleDOI
07 Aug 1997
TL;DR: In this paper, the authors consider an approach for motion planning that incorporates visual servoing constraints into the computation of the motion plans, and propose a hierarchical representation of the high-dimensional planning space involved, and a multistrategic heuristic search.
Abstract: The success of an autonomous assembly task relies on a motion planning system to generate a plan to accomplish the task, and on a sensor-based robot control system to ensure successful execution of the plan. The decoupling of sensor-based robot control from motion planning may yield undesirable motion plans that do not utilize the sensing effectively or consider sensor constraints. For example, a feasible collision-free path may be hard to traverse in practice because of inadequate sensor feedback in certain regions under uncertainty and sensor limitations. In this paper we consider an approach for motion planning that incorporates visual servoing constraints into the computation of the motion plans. The approach extends the notion of configuration space to include the corresponding sensor values. We propose a hierarchical representation of the high-dimensional planning space involved, and a multistrategic heuristic search. This results in a practical motion planning scheme that is proven to be resolution-complete. Performance results are described for several robot manipulators with up to 6-DOF and under various sensing constraints.
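
A small sketch of folding servoing constraints into planning: best-first search over a discretized configuration space in which states with inadequate sensing are pruned and poorer sensing inflates traversal cost, so the returned path remains servoable. The neighbour, heuristic, and sensing-quality functions are illustrative stand-ins for the paper's hierarchical representation and multistrategic search.

```python
import heapq
import itertools

def plan(start, goal, neighbors, heuristic, sensing_quality, q_min=0.2):
    """A*-style search with a visual-servoing constraint: sensing_quality(s)
    in [0, 1] rates the sensor feedback at state s; states below q_min are
    pruned and low quality raises the step cost."""
    tie = itertools.count()              # tie-breaker for the heap
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, _, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return path
        if s in seen:
            continue
        seen.add(s)
        for n in neighbors(s):
            q = sensing_quality(n)
            if q < q_min:                # inadequate feedback: unservoable
                continue
            g2 = g + 1.0 + (1.0 - q)     # step cost inflated by poor sensing
            heapq.heappush(frontier,
                           (g2 + heuristic(n), next(tie), g2, n, path + [n]))
    return None                          # no servoable path found
```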

Proceedings ArticleDOI
10 Jun 1997
TL;DR: This paper proposes a framework for sensor-based robot motion planning using the topology representing network algorithm to develop a learned representation of the perceptual control manifold.
Abstract: The perceptual control manifold is a concept that extends the notion of the robot configuration space to include sensor feedback for robot motion planning. In this paper, we propose a framework for sensor-based robot motion planning using the topology representing network algorithm to develop a learned representation of the perceptual control manifold. The topology preserving features of the neural network lend themselves to yield, after learning, a diffusion-based path planning strategy for flexible obstacle avoidance. Simulations on path control and flexible obstacle avoidance demonstrate the feasibility of this approach for motion planning and illustrate the potential for further robotic applications.