scispace - formally typeset
Author

Anupam Agrawal

Bio: Anupam Agrawal is an academic researcher from the Indian Institute of Information Technology, Allahabad. The author has contributed to research in topics: Gesture recognition & Gesture. The author has an h-index of 15 and has co-authored 91 publications receiving 2186 citations. Previous affiliations of Anupam Agrawal include the University of Bedfordshire and the Indian Institutes of Information Technology.


Papers
Journal ArticleDOI
TL;DR: Provides an analysis of comparative surveys in the field of gesture-based HCI, and an analysis of existing literature on gesture recognition systems for human-computer interaction, categorized under different key parameters.
Abstract: As computers become more pervasive in society, facilitating natural human-computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human-computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as interactions between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, representations and recognition techniques, and software platforms and frameworks, which are discussed briefly in this paper. It focuses on the three main phases of hand gesture recognition, i.e., detection, tracking and recognition. Different applications which employ hand gestures for efficient interaction are discussed under core and advanced application domains. This paper also provides an analysis of existing literature related to gesture recognition systems for human-computer interaction by categorizing it under different key parameters. It further discusses the advances needed to improve present hand gesture recognition systems so that they can be widely used for efficient human-computer interaction. The main goal of this survey is to provide researchers in the field of gesture-based HCI with a summary of progress achieved to date and to help identify areas where further research is needed.
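The three-phase structure the survey identifies (detection, tracking, recognition) can be sketched as a minimal pipeline. All class and callable names below are illustrative placeholders, not an interface from any surveyed system.

```python
# Minimal sketch of the detection -> tracking -> recognition pipeline
# described in the survey. All names here are hypothetical placeholders.
class HandGesturePipeline:
    def __init__(self, detector, tracker, recognizer):
        self.detector = detector      # locates the hand region in the first frame
        self.tracker = tracker        # follows that region through later frames
        self.recognizer = recognizer  # maps the tracked trajectory to a gesture label

    def process(self, frames):
        region = self.detector(frames[0])
        trajectory = [self.tracker(frame, region) for frame in frames]
        return self.recognizer(trajectory)
```

Real systems plug concrete stages into each slot, e.g. skin-color segmentation for detection and Camshift for tracking, as later papers in this list do.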

1,338 citations

Journal ArticleDOI
TL;DR: This paper provides an overview of benchmark databases for activity recognition, the market analysis of video surveillance, and future directions to work on for this application.
Abstract: This paper provides a comprehensive survey for activity recognition in video surveillance. It starts with a description of simple and complex human activity, and various applications. The applications of activity recognition are manifold, ranging from visual surveillance through content based retrieval to human computer interaction. The organization of this paper covers all aspects of the general framework of human activity recognition. Then it summarizes and categorizes recent-published research progresses under a general framework. Finally, this paper also provides an overview of benchmark databases for activity recognition, the market analysis of video surveillance, and future directions to work on for this application.

378 citations

Journal ArticleDOI
TL;DR: This research effort centers on implementing an application that employs computer vision algorithms and gesture recognition techniques, resulting in a low-cost interface device for interacting with objects in a virtual environment using hand gestures.
Abstract: Virtual environments have long been considered a means for more visceral and efficient human-computer interaction by a diversified range of applications. The spectrum of applications includes analysis of complex scientific data, medical training, military simulation, phobia therapy and virtual prototyping. With the evolution of ubiquitous computing, current user interaction approaches based on keyboard, mouse and pen are not sufficient for the still-widening spectrum of human-computer interaction. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use, and these limitations also restrict the usable command set. Direct use of the hands as an input device is an innovative method for providing natural human-computer interaction, with a lineage that runs from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to full-fledged multi-participant Virtual Environment (VE) systems. One can conceive a future era of human-computer interaction with 3D applications in which the user is able to move and rotate objects simply by moving and rotating his hand, all without the help of any input device. The research effort centers on implementing an application that employs computer vision algorithms and gesture recognition techniques, which in turn results in a low-cost interface device for interacting with objects in a virtual environment using hand gestures. The prototype architecture of the application comprises a central computational module that applies the Camshift technique for tracking hands and their gestures. A Haar-like feature classifier is responsible for locating the hand position and classifying the gesture. Gestures are patterned for recognition by mapping the number of convexity defects formed by the hand to the assigned gestures.
The virtual objects are produced using the OpenGL library. This hand gesture recognition technique aims to substitute for the mouse in interaction with the virtual objects. It will be useful for controlling applications such as virtual games, browsing images, etc., in a virtual environment using hand gestures.
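The defect-to-gesture mapping mentioned above can be sketched in a few lines. The specific labels and counts here are hypothetical, since the abstract does not enumerate the paper's gesture vocabulary.

```python
# Hypothetical mapping from the number of convexity defects found in the
# hand contour to an assigned gesture label (the paper's actual gesture
# vocabulary is not given in the abstract).
GESTURE_BY_DEFECT_COUNT = {
    0: "point",         # a single extended finger produces no defect
    1: "two_fingers",   # one valley between two extended fingers
    2: "three_fingers",
    3: "four_fingers",
    4: "open_palm",     # four valleys between five extended fingers
}

def classify_gesture(defect_count):
    """Return the gesture assigned to a given convexity-defect count."""
    return GESTURE_BY_DEFECT_COUNT.get(defect_count, "unknown")
```

In practice the defect count would come from contour analysis of the segmented hand (e.g. OpenCV's `convexityDefects`), filtered by defect depth to discard noise.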

96 citations

Proceedings ArticleDOI
01 Dec 2011
TL;DR: The gesture-based interaction interface proposed here can be applied to many domains such as Virtual Reality, Sign Language and Games, though the present paper considers games as the application domain.
Abstract: Hand gesture recognition systems for virtual reality applications provide users an enhanced interaction experience, as they integrate virtual and real-world objects. Growth in virtual environments based upon computer systems, and the development of user interfaces, influence changes in Human-Computer Interaction (HCI). A gesture recognition based interaction interface endows more realistic and immersive interaction compared to traditional devices, enabling a physically realistic mode of interaction with the virtual environment. The hand gesture recognition system based interface proposed and implemented in this paper consists of a detection, tracking and recognition module. For the implementation of these modules, various image processing algorithms such as Camshift, Lucas-Kanade and Haar-like features have been employed. A comprehensive user acceptability study has been conducted to exhibit the accuracy, usefulness and ease of use of the proposed and implemented hand gesture recognition system. A hand gesture based communication vocabulary offers many variations, ranging from the simple action of using a finger to point, to using the hands to move objects around, to rather complex ones like the expression of feelings. The proposed hand gesture recognition system offers an alternative to traditional input devices for interaction with virtual environments. The gesture-based interaction interface proposed here can be applied to many domains such as Virtual Reality, Sign Language and Games, though the present paper considers games as the application domain.
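Camshift, one of the tracking algorithms the paper employs, is built on the mean-shift procedure, which climbs a probability (back-projection) map to its local mode. A minimal 1D illustration of that hill-climbing step, not the paper's implementation, looks like this:

```python
def mean_shift_1d(weights, start, window=2, max_iters=20):
    """Climb a 1D weight map to its local mode, as mean shift does on a
    2D back-projection image inside Camshift (illustrative sketch only)."""
    x = start
    for _ in range(max_iters):
        lo, hi = max(0, x - window), min(len(weights), x + window + 1)
        total = sum(weights[i] for i in range(lo, hi))
        if total == 0:
            break  # no mass under the window; stay put
        centroid = sum(i * weights[i] for i in range(lo, hi)) / total
        new_x = int(centroid + 0.5)  # round half up to avoid oscillation
        if new_x == x:
            break  # converged on the local mode
        x = new_x
    return x
```

Starting at index 3 on a map peaked at index 7, the window's centroid pulls the estimate uphill to the peak; Camshift additionally adapts the window size and orientation each iteration.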

80 citations

01 Jan 2006
TL;DR: A new rendering algorithm is proposed for the combined display of multiresolution 3D terrain and polyline vector data representing geographical entities such as roads and state or country boundaries.
Abstract: Interactive three-dimensional (3D) visualization of very large-scale grid digital elevation models coupled with corresponding high-resolution remote-sensing phototexture images is a hard problem. The graphics load must be controlled by an adaptive view-dependent surface triangulation and by taking advantage of different levels of detail (LODs) using multiresolution modeling of terrain geometry. Furthermore, the display of vector data over level-of-detail terrain models is a challenging task: rendering artifacts are likely to occur unless the vector data is mapped consistently and exactly to the current level of detail of the terrain geometry. In our prior work, we developed a view-dependent dynamic block-based LOD mesh simplification scheme and out-of-core management of large terrain data for real-time rendering on desktop PCs. In this paper, we propose a new rendering algorithm for the combined display of multiresolution 3D terrain and polyline vector data representing geographical entities such as roads and state or country boundaries. Our algorithm for multiresolution modeling of vector data allows the system to adapt the visual mapping, without rendering artifacts, to the context and the user's needs while maintaining interactive frame rates. The algorithms have been implemented using Visual C++ and the OpenGL 3D API and successfully tested on different real-world terrain raster and vector data sets.
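View-dependent LOD schemes of this kind typically refine a terrain block until its projected screen-space error falls below a pixel tolerance. The following is a generic sketch of that refinement test under a perspective projection, not the authors' exact error metric.

```python
import math

def block_is_fine_enough(geometric_error, distance, fov_y_rad, screen_height_px,
                         pixel_tolerance=2.0):
    """Standard view-dependent refinement test: project a block's geometric
    error (world units) onto the screen and compare it with a pixel
    tolerance. (Generic sketch; not the paper's exact formulation.)"""
    # Pixels per world unit at unit distance for a perspective projection.
    scale = screen_height_px / (2.0 * math.tan(fov_y_rad / 2.0))
    projected_error_px = geometric_error * scale / max(distance, 1e-6)
    return projected_error_px <= pixel_tolerance
```

For example, a block with 1 unit of geometric error passes the test at a viewing distance of 1000 units but must be refined to a finer LOD at 100 units; this is what makes the triangulation view-dependent.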

63 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors present recent advancements in the development of flexible and stretchable strain sensors, including skin-mountable and wearable strain sensors for personalized health-monitoring, human motion detection, human-machine interfaces, soft robotics, and so forth.
Abstract: There is a growing demand for flexible and soft electronic devices. In particular, stretchable, skin-mountable, and wearable strain sensors are needed for several potential applications including personalized health monitoring, human motion detection, human-machine interfaces, soft robotics, and so forth. This Feature Article presents recent advancements in the development of flexible and stretchable strain sensors. The article shows that highly stretchable strain sensors are successfully being developed via new mechanisms such as disconnection between overlapped nanomaterials, crack propagation in thin films, and the tunneling effect, which differ from traditional strain sensing mechanisms. The strain sensing performance of recently reported strain sensors is comprehensively studied and discussed, showing that an appropriate choice of composite structure as well as suitable interaction between functional nanomaterials and polymers are essential for high-performance strain sensing. Next, simulation results on the piezoresistivity of stretchable strain sensors from computational models are reported. Finally, potential applications of flexible strain sensors are described. This survey reveals that flexible, skin-mountable, and wearable strain sensors have potential in diverse applications, while several grand challenges still have to be overcome.

2,154 citations

Journal ArticleDOI
Morteza Amjadi, Aekachan Pichitpajongkit, Sangjun Lee, Seunghwa Ryu, Inkyu Park
29 Apr 2014-ACS Nano
TL;DR: The applicability of high-performance strain sensors based on a nanocomposite of a silver nanowire network and PDMS elastomer in a sandwich structure is demonstrated by fabricating a glove integrated with five strain sensors for the motion detection of fingers and the control of an avatar in a virtual environment.
Abstract: The demand for flexible and wearable electronic devices is increasing due to their facile interaction with the human body. Flexible, stretchable and wearable sensors can be easily mounted on clothing or directly attached to the body. In particular, highly stretchable and sensitive strain sensors are needed for human motion detection. Here, we report highly flexible, stretchable and sensitive strain sensors based on a nanocomposite of a silver nanowire (AgNW) network and PDMS elastomer in a sandwich structure (i.e., an AgNW thin film embedded between two layers of PDMS). The AgNW network-elastomer nanocomposite based strain sensors show strong piezoresistivity, with tunable gauge factors in the range of 2 to 14 and a high stretchability of up to 70%. We demonstrate the applicability of our high-performance strain sensors by fabricating a glove integrated with five strain sensors for the motion detection of fingers and the control of an avatar in a virtual environment.
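The reported gauge factors follow the standard definition GF = (ΔR/R0)/ε, the relative resistance change per unit strain. A quick illustration of the paper's 2-14 range can be sketched as follows; the resistance values themselves are made up for illustration, not taken from the paper.

```python
def gauge_factor(r0, r, strain):
    """Standard gauge factor: relative resistance change per unit strain.
    r0: unstrained resistance, r: strained resistance, strain: dR-free ratio."""
    return ((r - r0) / r0) / strain

# Illustrative numbers (not from the paper): a sensor whose resistance rises
# from 100 ohm to 110 ohm at 5% strain has GF = 2, the low end of the
# paper's reported 2-14 range; rising to 170 ohm at the same strain
# corresponds to the high end, GF = 14.
gf_low = gauge_factor(100.0, 110.0, 0.05)
gf_high = gauge_factor(100.0, 170.0, 0.05)
```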

1,837 citations

Journal ArticleDOI
TL;DR: A review of David McNeill's Hand and Mind: What Gestures Reveal about Thought (University of Chicago Press, 1992).
Abstract: Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp.

988 citations

01 Jan 1995
TL;DR: In this paper, the authors report results from a redrawn version of the MRT and from alternate versions of the test, finding that males perform better than females and that students drawn from the physical sciences perform better than students drawn from the social sciences.
Abstract: The available versions of the Vandenberg and Kuse (1978) Mental Rotations Test (MRT) have physically deteriorated because only copies of copies are available. We report results from a redrawn version of the MRT and for alternate versions of the test. Males perform better than females, and students drawn from the physical sciences perform better than students drawn from the social sciences and humanities, confirming other reports with the original version of the MRT. Subjects find it very hard to perform the MRT when stimuli require rotation along both the top/bottom axis and the left/right axis. The magnitude of effect sizes for sex (which account, on average, for some 20% of the variance) does not increase with increasing difficulty of the task. Minimal strategy effects were observed and females did not perform differently during the menstrual period as opposed to the days between the menstrual periods. Practice effects are dramatic, confirming other reports with the original MRT, and can also be shown to be powerful in a transfer for practice paradigm, where test and retest involve different versions of the MRT. Main effects of handedness on MRT performance were not found.

788 citations