The approach is based on 2D feature extraction using various techniques built on geometric properties and template matching. It is therefore necessary to identify corresponding features in image pairs and to derive 3D features that are afterwards fitted to an underlying 3D hand model to classify different gestures. Since no orientation of the hand is determined in our approach, we are able to reduce the classification problem to pure 2D feature extraction and to calculate the 3D position of the hand for interaction purposes using only the center of gravity of the segmented hand. The equipment for the gesture recognition system consists of a single standard PC, which is used both for rendering the scenario applications and for gesture recognition and tracking. A standard video projector or a large plasma display is connected to the PC, displaying the application scenario in front of the interacting user. Two FireWire (IEEE 1394) cameras are connected to the computer, feeding the system with grey-scale images of the interaction volume in real time. Lenses with additional infrared light diodes (without infrared light filters) are used to ensure a bright reflection of the human skin (see figure 1). The purpose of the tracking system is to recognize and track three different static gestures of the user (pointing, opened hand and closed hand) to enable intuitive interaction with the scenario applications. The approach is based on recognizing the position of the human hand in 3D space within a self-calibrated stereo system: the position and orientation of the cameras are determined with respect to each other by swaying a small torch light for a few seconds in the designated interaction volume. Afterwards, only the world coordinate system has to be defined, by marking the origin and the ends of two axes of the world coordinate system.
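The two core computations described above (the center of gravity of the segmented hand in each image, and the 3D position obtained from the corresponding image pair) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary hand masks are already segmented and that the self-calibration step has produced a 3x4 projection matrix per camera, and it uses standard linear (DLT) triangulation.

```python
import numpy as np

def centroid(mask):
    """Center of gravity (x, y) of a binary hand mask (2D array of 0/1)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 are 3x4 camera projection matrices (from calibration, assumed
    given here); x1, x2 are the corresponding 2D image points, e.g. the
    hand centroids in the left and right camera image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Because only the centroid is triangulated, stereo correspondence reduces to matching a single point per frame, which is what makes the real-time constraint easy to meet on a single standard PC.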
Gesture recognition in computer vision is an extensive area of research that encompasses anything from static pose estimation of the human hand to dynamic movements such as the recognition of sign languages. A primary goal of gesture recognition research for human-computer interaction is to create a system that can identify specific human hand gestures and use them to convey information or to control devices. Hand gesture recognition systems can accordingly be divided into different tasks: hand tracking, dynamic gesture recognition, static gesture recognition, sign language recognition and pointing. Elaborate reviews on vision-based hand pose estimation can be found in the literature. Due to the large number of different approaches and methods for hand pose estimation, we only consider approaches that are capable of tracking the human hand in real time and differentiating static hand postures. One approach, for example, tracks the position and orientation of four different postures of two hands in real time by calculating the visual hulls of the hands directly on the GPU; its results necessarily depend on the correct reconstruction of the 3D pose of the hand using at least three cameras. Another system uses a calibrated stereo system with two colour cameras to estimate the position and orientation of different hand postures in 3D.