
Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction

https://ir.soken.ac.jp/records/848
File: 甲952_要旨.pdf — 要旨・審査要旨 (abstract and examination summary) (309.1 kB)
Item type: 学位論文 / Thesis or Dissertation (1)
Publication date: 2010-02-22
Title: Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction (en)
Language: eng
Resource type: thesis (http://purl.org/coar/resource_type/c_46ec)
Author: HASANUZZAMAN, Md.
Name reading (フリガナ): ハッサヌザーマン, モハマド
Degree-granting institution: 総合研究大学院大学 (The Graduate University for Advanced Studies, SOKENDAI)
Degree: 博士(情報学) (Doctor of Philosophy in Informatics)
Degree number: 総研大甲第952号
Graduate school: 複合科学研究科 (School of Multidisciplinary Sciences)
Department: 17 情報学専攻 (Department of Informatics)
Date of degree conferral: 2006-03-24
Academic year of conferral: 2005
Abstract

Recently, human-robot symbiotic systems have been studied extensively due to the increasing demand for welfare services for the aged and handicapped as the younger generation shrinks. In the future, it will be difficult for trained humans to provide help to aged and disabled persons, such as caretaking, nursing, conveying important information, and recreation. To build a human-robot symbiotic society, in which robots are able to support elderly or disabled people, a robot should be capable of recognizing users, gestures, gaze, speech, and text commands.

Using gestures is a natural way for humans to interact with robots. However, gestures vary among individuals, and from instance to instance for a given individual. Hand shape and skin color differ from person to person, and gesture meanings differ across cultures. A significant issue in building a visual gesture-based human-robot interactive system is to utilize user-specific knowledge for gesture interpretation and to adapt to new gestures and users over time.

In this work, we propose a vision and knowledge-based gesture recognition system for human-robot interaction. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human-robot interaction. In this knowledge model, the necessary frames are defined for the known users, robots, poses, gestures, and robot behaviors. The system first detects a human face using a combination of template-based and feature-invariant pattern-matching approaches, and identifies the user using the eigenface method. Then, using the skin-color information of the identified user, the three largest skin-like regions are segmented in the YIQ color space, and face and hand poses are classified by the subspace method. The system is capable of recognizing static gestures comprised of face and hand poses, and dynamic gestures of a face or hand in motion.
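The skin-color segmentation step described above can be sketched as follows. This is a minimal illustration in Python with NumPy, not the dissertation's implementation: the function names are hypothetical, and the default threshold ranges are illustrative assumptions — in the thesis the thresholds on the YIQ components are person-centric, derived from the identified user's skin-color model.

```python
import numpy as np

# NTSC RGB -> YIQ conversion matrix (standard coefficients)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to YIQ."""
    return img @ RGB2YIQ.T

def skin_mask(img, y_range=(0.2, 0.9), i_range=(0.02, 0.25)):
    """Boolean mask of skin-like pixels by thresholding Y and I.

    y_range / i_range stand in for the per-user thresholds of the
    thesis; the defaults here are only illustrative.
    """
    yiq = rgb_to_yiq(img)
    y, i = yiq[..., 0], yiq[..., 1]
    return ((y >= y_range[0]) & (y <= y_range[1]) &
            (i >= i_range[0]) & (i <= i_range[1]))
```

From the resulting mask, the three largest connected skin-like regions (face and two hands) would then be extracted and passed to the pose classifier.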
It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). Known gestures are defined as frames in the SPAK knowledge base using combinations of face and hand pose frames; when the required combination of pose components is found, the corresponding gesture frame is activated. Dynamic gestures are recognized using a state transition diagram of face poses over a sequence of time steps. Person-centric interpretation of gestures is achieved using the frame-based approach, and the interaction between humans and robots is determined using the hierarchical frame-based knowledge model.

In this work, an adaptation method for new users, hand poses, gestures, and robot behaviors is proposed, combining computer vision and knowledge-based approaches. In this method, one or more clusters are built to learn a new user or a new pose. New robot behavior can be learned by generalizing over multiple occurrences of the same gesture with minimal user interaction. An experimental human-robot interaction system comprising an entertainment robot 'Aibo' and a humanoid robot 'Robovie' has demonstrated the effectiveness of the proposed method. In this research, we also compare three pattern-matching approaches for face and hand pose classification: the general PCA, person-specific subspace, and pose-specific subspace methods. In the pose-specific subspace method, training images are grouped by pose and eigenvectors for each pose are generated separately. In the person-specific subspace method, hand poses are grouped by person, and one PCA is used for each person.
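The pose-specific subspace method above can be illustrated with a short sketch: one PCA basis is fitted per pose class, and a test image is assigned to the class whose subspace reconstructs it with the smallest error. This is a generic subspace-method illustration under that assumption, not the thesis code; the function names and the use of reconstruction error as the distance measure are ours.

```python
import numpy as np

def fit_subspace(X, k):
    """Fit a k-dimensional PCA subspace to row-vector samples X (n x d).

    Returns the class mean and the top-k principal directions.
    """
    mean = X.mean(axis=0)
    # right singular vectors of the centered data = principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def recon_error(x, mean, basis):
    """Distance from sample x to the subspace: norm of the residual
    after projecting the centered sample onto the basis."""
    xc = x - mean
    proj = basis.T @ (basis @ xc)
    return np.linalg.norm(xc - proj)

def classify(x, subspaces):
    """subspaces maps pose label -> (mean, basis); pick the pose whose
    subspace gives the minimum reconstruction error."""
    return min(subspaces, key=lambda p: recon_error(x, *subspaces[p]))
```

In the person-specific variant, the grouping key would be the identified user rather than the pose, with one PCA per person.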
This dissertation makes three major contributions in the field of human-robot symbiotic systems. First, the proposed system utilizes user-specific knowledge for person-centric gesture interpretation, which supports cultural variation in gesture meanings. The knowledge-based approach allows users to define or edit gesture and robot-behavior frames easily using the SPAK Knowledge Editor. The segmented skin regions are less noisy for a known person because person-centric threshold values are used for the YIQ components. The performance of the person-specific subspace method is better than that of the general PCA method in the same environment for face and hand pose classification.

Second, this thesis proposes a new user and hand pose adaptation method using a multi-cluster-based incremental learning method, which supports different orientations of the images of a person's face and of the same hand poses.

Third, this work combines several system components into an integrated system by means of the knowledge-based software platform SPAK. The system components include a vision-based face and gesture recognizer, a knowledge manager, and various robot software and hardware components. We have demonstrated that it is simple and intuitive to develop a natural human-robot interactive system by integrating computer vision with a knowledge-based platform in an internet-based distributed environment.

The ability to understand hand gestures and person-centric meanings will improve the naturalness of human interaction with robots, and will allow the user to communicate in complex tasks without tedious sets of detailed instructions. We believe that the integration of vision and knowledge-based approaches is a major step towards achieving intelligent and natural human-robot interaction.
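The multi-cluster incremental learning idea in the second contribution can be sketched as follows: each user or pose keeps several cluster centers, a new sample updates the nearest cluster if it is close enough, and otherwise starts a new cluster (covering, e.g., a new orientation of the same pose). The class name, distance metric, and threshold are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

class MultiClusterLearner:
    """Incremental multi-cluster model for one user or one pose class.

    A sample within `threshold` of an existing cluster refines that
    cluster's mean; a sample far from all clusters opens a new one.
    """
    def __init__(self, threshold):
        self.threshold = threshold
        self.means = []    # cluster centers
        self.counts = []   # samples absorbed per cluster

    def update(self, x):
        """Absorb one sample; return the index of its cluster."""
        x = np.asarray(x, dtype=float)
        if self.means:
            dists = [np.linalg.norm(x - m) for m in self.means]
            j = int(np.argmin(dists))
            if dists[j] <= self.threshold:
                # running-mean update of the nearest cluster
                self.counts[j] += 1
                self.means[j] += (x - self.means[j]) / self.counts[j]
                return j
        # too far from every cluster: start a new one
        self.means.append(x.copy())
        self.counts.append(1)
        return len(self.means) - 1
```

In this sketch, x would be a feature vector for a face or hand image; adaptation to a new user or pose then amounts to the clusters it accumulates over time.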
Holdings: yes



Powered by WEKO3