Proceedings ArticleDOI

Implementation of Low-Cost Vision-Based Gesture Recognition Systems Based on FPGA Approach

TL;DR: This study proposes an FPGA-based gesture recognition system that restructures the most computationally demanding stage, the optical flow, into a parallel processing architecture, and presents algorithm designs for the median filter, skin color detection, optical flow for hand detection, and the Kalman filter for tracking.
Abstract: With rapid advancement in technology, numerous applications are required, such as face and gesture recognition. However, the methods that previous researchers have developed and presented suffer from limitations. Therefore, this study proposes an FPGA-based gesture recognition system that restructures the most computationally demanding stage, the optical flow, into a parallel processing architecture, and presents algorithm designs for the median filter, skin color detection, optical flow for hand detection, and the Kalman filter for tracking. The advantage that separates the proposed method from previous approaches is its foundation on optical flow, which places no restriction on hand shape during gesture recognition. The results indicate that the proposed system can be realized on a cost-effective FPGA running at a low clock rate. Using the FPGA system, gesture detection and recognition achieve 30 frames per second, and the system software can schedule all tasks during processing. The system operates against a simple background in an applied environment and can be used for consumer applications, such as entertainment or medical applications with non-touch control equipment.
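As a rough illustration of the front end of the pipeline described above, the sketch below combines a median filter with YCbCr skin-color thresholding in Python. The threshold ranges and the use of NumPy/SciPy are illustrative assumptions; the paper implements these stages as parallel hardware blocks on the FPGA rather than in software.

```python
import numpy as np
from scipy.ndimage import median_filter

# Illustrative YCbCr skin thresholds (assumed values, not taken from the paper).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB frame to YCbCr (ITU-R BT.601 full-range approximation)."""
    rgb = rgb.astype(np.float32)
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def skin_mask(rgb_frame):
    """Median-filter the frame, then threshold Cb/Cr to obtain a binary skin mask."""
    smoothed = median_filter(rgb_frame, size=(3, 3, 1))   # noise suppression
    _, cb, cr = rgb_to_ycbcr(smoothed)
    mask = ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))
    return mask.astype(np.uint8)
```

In the paper's pipeline, the detected skin region would then feed the optical-flow hand detection and Kalman-filter tracking stages.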
Citations
Proceedings ArticleDOI
14 May 2015
TL;DR: An improved hand segmentation algorithm combining the YCbCr color space with a region growing algorithm is used for valid region separation, and morphological operations including erosion and dilation are applied to remove extra noise.
Abstract: This paper presents an implementation of hand tracking and gesture recognition on FPGA. For hand tracking, an improved hand segmentation algorithm combining the YCbCr color space with a region growing algorithm is used for valid region separation, and then morphological operations including erosion and dilation are applied to remove extra noise. The three-frame-difference motion detection method is used to detect the moving hand, and finally the centroid of the hand is located. For gesture recognition, a hand model based on finger state is developed, converting pose recognition into finger state detection. The hand recognition system has been entirely implemented on a Cyclone II FPGA.
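The motion-detection and centroid steps described above can be sketched in a few lines of Python. The difference threshold below is an assumed value, not the one used by the authors, and the FPGA implementation realizes these operations in hardware.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, thresh=25):
    """Three-frame-difference motion mask: a pixel counts as 'moving' only if it
    differs from both the previous and the next grayscale frame."""
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)) > thresh
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16)) > thresh
    return (d1 & d2).astype(np.uint8)

def centroid(mask):
    """Centroid (row, col) of the non-zero pixels in a binary mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```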

6 citations


Cites background from "Implementation of Low-Cost Vision-B..."

  • ...It can prevent the centroid box from moving too fast when there is an abrupt change in the movement of the hand region [21]....

Proceedings ArticleDOI
26 May 2013
TL;DR: This work presents a novel interaction approach based on a gesture recognition system using a Microsoft Kinect sensor that enables intuitive interaction with the multimedia content.
Abstract: In this work we present a novel interaction approach based on a gesture recognition system using a Microsoft Kinect sensor. Gestures are defined and interpreted in order to activate controls on a media device. This natural interface enables intuitive interaction with multimedia content. The depth sensor observes the scene to detect a request for control in the form of a gesture; the application then assigns a control to the media system. The application was tested under various scenarios, demonstrating the reliability and effectiveness of the proposed approach.

6 citations


Cites background from "Implementation of Low-Cost Vision-B..."

  • ...Gesture recognition approaches have focused on hand motions, developing vision-based systems to recognize human gestures [1]-[7]....

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Image acquisition is shown and image segmentation techniques are discussed; an OV7670 CMOS camera chip sensor attached to an FPGA DE-1 board is used to capture the gesture.
Abstract: Gesture recognition has attracted wide attention as a new generation of HCI and visual input mode. FPGA offers better overall performance and flexibility than DSP for the parallel processing and pipelined operations needed to handle high-resolution, high-frame-rate video. Vision-based recognition is considered the best way to recognize gestures, and it involves image acquisition and image segmentation. In this paper, image acquisition is shown and image segmentation techniques are discussed. To capture the gesture, an OV7670 CMOS camera chip sensor attached to an FPGA DE-1 board is used. Using this gesture recognition, any application can be controlled in a non-tangible way.

4 citations


Cites methods from "Implementation of Low-Cost Vision-B..."

  • ...The FPGA is used to recognize the gesture position and assigns different functionality to different pre-defined gesture positions....

  • ...The advantage of using an FPGA [12] as a development platform is that it provides a higher-performance computing platform than a PC for data-intensive video processing tasks with real-time requirements....

  • ...Image Acquisition: To acquire the image, the FPGA DE-1 board and an OV7670 camera are first connected through the GPIO port of the DE-1 board, as shown in Figure 1....

  • ...In the gesture recognition task, different techniques such as SVM, HMM, YCbCr, and the Kalman filter have been used; a camera and an FPGA board are used to acquire the input gesture....

  • ...Keywords – FPGA; HMM; YCbCr; Kalman filter....

Proceedings ArticleDOI
01 Nov 2019
TL;DR: The UKF is combined with an upgraded CAMShift process, and a performance improvement in head gesture recognition is described; the proposed system handles problems in both indoor and outdoor environments.
Abstract: This paper describes a performance improvement in the recognition of head gestures. Existing traditional head gesture recognition systems do not cope with challenging environmental conditions. To overcome these difficulties, the UKF is combined with an upgraded CAMShift process. Simulation results conclude that the proposed system handles problems in indoor as well as outdoor environments.

4 citations

Proceedings ArticleDOI
01 Oct 2013
TL;DR: This article presents the different steps in the design of such a gestural servoing system, from hand gesture recognition through its interpretation, in order to manipulate the Mentor robot (virtual and real) in real time.
Abstract: In the field of man-machine interaction, gestural communication is expected to play an increasingly important role due to its direct, natural character and its many potential uses. Our work falls within the same context, namely robot control through gestural interfaces, specifically using hand gestures. This article presents the different steps in the design of such a gestural servoing system, from hand gesture recognition through its interpretation, in order to manipulate the Mentor robot (virtual and real) in real time. The gesture recognition system encompasses gesture acquisition, segmentation, and identification using principal component analysis. Once the gesture has been recognized, it is used to control the robot with two techniques: articular command and operational command.
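As a rough illustration of PCA-based gesture identification, the sketch below is a generic "eigen-gesture" classifier in Python (NumPy). The component count and the nearest-neighbour matching rule are assumptions for the example, not details taken from the article.

```python
import numpy as np

class PCAGestureClassifier:
    """Minimal PCA ('eigen-gesture') classifier sketch: project flattened hand images
    onto the top principal components of the training set and match by nearest neighbour."""

    def __init__(self, n_components=10):
        self.n_components = n_components

    def fit(self, images, labels):
        X = np.asarray([im.ravel() for im in images], dtype=np.float64)
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # SVD yields the principal axes without forming the covariance matrix explicitly.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.components_ = vt[: self.n_components]
        self.train_proj_ = Xc @ self.components_.T
        self.labels_ = np.asarray(labels)

    def predict(self, image):
        proj = (image.ravel().astype(np.float64) - self.mean_) @ self.components_.T
        dists = np.linalg.norm(self.train_proj_ - proj, axis=1)
        return self.labels_[np.argmin(dists)]
```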
References
Journal ArticleDOI
Lawrence R. Rabiner1
01 Feb 1989
TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
Abstract: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described.
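For context, the first of the three fundamental HMM problems mentioned above, evaluating the probability of an observation sequence, is solved by the forward algorithm. The minimal NumPy sketch below omits the scaling needed for long sequences.

```python
import numpy as np

def forward_probability(A, B, pi, obs):
    """Forward algorithm for a discrete HMM: probability of an observation sequence.
    A: NxN state-transition matrix, B: NxM emission matrix,
    pi: length-N initial state distribution, obs: sequence of observation indices."""
    alpha = pi * B[:, obs[0]]             # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # induction step
    return alpha.sum()                    # termination
```

For long sequences the forward variables underflow, so practical implementations rescale them or work in log space, as discussed in the tutorial.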

21,819 citations

Journal ArticleDOI
TL;DR: In this paper, a method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image, and an iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences.

10,727 citations

Proceedings ArticleDOI
12 Nov 1981
TL;DR: In this article, a method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image, and an iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences.
Abstract: Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
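The iterative scheme described in this abstract can be sketched as follows. The gradient estimates, the smoothing weight alpha, and the iteration count below are generic assumptions for illustration rather than the exact formulation of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow sketch. Returns dense flow fields (u, v)."""
    im1 = im1.astype(np.float32)
    im2 = im2.astype(np.float32)
    # Spatial gradients averaged over the two frames, plus the temporal gradient.
    Iy, Ix = np.gradient((im1 + im2) / 2.0)
    It = im2 - im1
    # Weighted local average of the flow over the 8-neighbourhood.
    avg_kernel = np.array([[1/12, 1/6, 1/12],
                           [1/6,  0.0, 1/6 ],
                           [1/12, 1/6, 1/12]], np.float32)
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg_kernel)
        v_avg = convolve(v, avg_kernel)
        # Per-pixel solution of the brightness-constancy + smoothness equations.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

The per-pixel update inside the loop is what the main paper rewrites into a parallel architecture, since it applies the same arithmetic independently at every pixel.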

8,078 citations

Journal ArticleDOI
01 Apr 1999
TL;DR: A basic requirement for an autonomous mobile robot is its capability to process sensor measurements to localize itself with respect to a coordinate system; here, the data provided by odometric and sonar sensors are fused together by means of an extended Kalman filter.
Abstract: A basic requirement for an autonomous mobile robot is its capability to process sensor measurements in order to localize itself with respect to a coordinate system. To this purpose, the data provided by odometric and sonar sensors are here fused together by means of an extended Kalman filter. The performance of the filter is improved by an online adjustment of the input and measurement noise covariances obtained by a suitably defined estimation algorithm.
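To make the tracking reference concrete, the sketch below is a minimal linear constant-velocity Kalman filter for a 2-D position (for instance a hand centroid). The noise covariances and time step are illustrative assumptions; the cited work uses an extended Kalman filter and additionally adjusts those covariances online.

```python
import numpy as np

class KalmanTracker2D:
    """Minimal constant-velocity Kalman filter for tracking a 2-D position."""

    def __init__(self, dt=1/30, q=1e-2, r=1.0):
        self.x = np.zeros(4)                                     # state: [px, py, vx, vy]
        self.P = np.eye(4)                                       # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt     # constant-velocity model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                                   # process noise (assumed)
        self.R = r * np.eye(2)                                   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R                  # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```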

315 citations


"Implementation of Low-Cost Vision-B..." refers methods in this paper

  • ...The Kalman filter [11]-[12] is a common tracking algorithm and is suitable for implementation on an FPGA....

Proceedings ArticleDOI
17 Apr 2000
TL;DR: The Streams-C system is described, which has been developed to support stream-oriented computation on FPGA-based parallel computers and includes a functional simulation environment based on POSIX threads, allowing the programmer to simulate the collection of parallel processes and their communication at the functional level.
Abstract: Stream-oriented processing is an important methodology used in FPGA-based parallel processing. Characteristics of stream-oriented computing include high-data-rate flow of one or more data sources; fixed-size, small stream payloads (one byte to one word); compute-intensive operations, usually low-precision fixed point, on the data stream; access to small local memories holding coefficients and other constants; and occasional synchronization between computational phases. We describe language constructs, compiler technology, and hardware/software libraries embodying the Streams-C system, which has been developed to support stream-oriented computation on FPGA-based parallel computers. The language is implemented as a small set of library functions callable from a C language program. The Streams-C compiler synthesizes hardware circuits for multiple FPGAs as well as a multi-threaded software program for the control processor. Our system includes a functional simulation environment based on POSIX threads, allowing the programmer to simulate the collection of parallel processes and their communication at the functional level. Finally, we present an application written both in Streams-C and hand-coded in VHDL. Compared to the hand-crafted design, the Streams-C-generated circuit takes 3x the area and runs at 1/2 the clock rate. In terms of time to market, the hand-coded design took an experienced hardware developer a month to develop, while the Streams-C design took a couple of days, for a productivity increase of 10x.

292 citations


"Implementation of Low-Cost Vision-B..." refers methods in this paper

  • ...There are several different systematic approaches for FPGA development, such as the Streams-C compiler [4], Handel-C [5], or AccelFPGA [6]....
