{"created":"2023-06-20T13:20:47.687594+00:00","id":848,"links":{},"metadata":{"_buckets":{"deposit":"7c365972-6f79-45d1-8e4b-5c5704bea49a"},"_deposit":{"created_by":1,"id":"848","owners":[1],"pid":{"revision_id":0,"type":"depid","value":"848"},"status":"published"},"_oai":{"id":"oai:ir.soken.ac.jp:00000848","sets":["2:429:19"]},"author_link":["0","0","0"],"item_1_creator_2":{"attribute_name":"著者名","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"HASANUZZAMAN, Md. "}],"nameIdentifiers":[{}]}]},"item_1_creator_3":{"attribute_name":"フリガナ","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"ハッサヌザーマン, モハマド "}],"nameIdentifiers":[{}]}]},"item_1_date_granted_11":{"attribute_name":"学位授与年月日","attribute_value_mlt":[{"subitem_dategranted":"2006-03-24"}]},"item_1_degree_grantor_5":{"attribute_name":"学位授与機関","attribute_value_mlt":[{"subitem_degreegrantor":[{"subitem_degreegrantor_name":"総合研究大学院大学"}]}]},"item_1_degree_name_6":{"attribute_name":"学位名","attribute_value_mlt":[{"subitem_degreename":"博士(情報学)"}]},"item_1_description_12":{"attribute_name":"要旨","attribute_value_mlt":[{"subitem_description":"Recently, human-robot symbiotic systems have been studied extensively due to the increasing demand of welfare service for the aged and handicapped under the situation of decreasing of the younger generation. In the future, it will be difficult to provide help to the aged and disable persons such as taking care, nursing, informing important information, recreation, etc., by trained human. To build human-robot symbiotic society, where robots are able to support elderly or disable people, the robot should be capable of recognizing user, gesture, gaze, speech and text commands.
 Using gestures is a natural way for humans to interact with robots. However, gestures vary among individuals, and even from instance to instance for a given individual. Hand shape and skin color differ from person to person, and gesture meanings differ across cultures. A significant issue in building a visual gesture-based human-robot interactive system is therefore to utilize user-specific knowledge for gesture interpretation and to adapt to new gestures and users over time.
 In this work, we propose a vision and knowledge-based gesture recognition system for human-robot interaction. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human-robot interaction. In this knowledge model, the necessary frames are defined for the known users, robots, poses, gestures, and robot behaviors. The system first detects a human face using a combination of template-based and feature-invariant pattern-matching approaches and identifies the user with the eigenface method. Then, using the skin-color information of the identified user, the three largest skin-like regions are segmented in the YIQ color space, and face and hand poses are classified by the subspace method. The system is capable of recognizing static gestures composed of face and hand poses, as well as dynamic gestures of a face or hand in motion. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). Known gestures are defined as frames in the SPAK knowledge base as combinations of face and hand pose frames; when the required combination of pose components is found, the corresponding gesture frame is activated. Dynamic gestures are recognized using a state transition diagram of face poses over a sequence of time steps. Person-centric interpretation of gestures is achieved through the frame-based approach, and the interaction between humans and robots is determined by the hierarchical frame-based knowledge model.
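 The following is a minimal sketch of this frame-activation idea; the Python dictionaries, pose labels, and gesture definitions are hypothetical illustrations, not SPAK's actual frame syntax:

# Static gestures: a gesture frame activates when the required combination
# of face and hand pose components is present in the current observation.
STATIC_GESTURES = {
    "TwoHandsUp": {"face": "FrontalFace", "left_hand": "OpenPalm", "right_hand": "OpenPalm"},
    "PointRight": {"face": "FrontalFace", "right_hand": "PointingFinger"},
}

def activate_static_gestures(observation):
    """Return the gesture frames whose pose slots are all satisfied."""
    return [name for name, slots in STATIC_GESTURES.items()
            if all(observation.get(slot) == pose for slot, pose in slots.items())]

# Dynamic gestures: recognized from state transitions of face poses over
# successive time steps, e.g. a head shake as alternating left/right turns.
HEAD_SHAKE = ["FaceLeft", "FaceRight", "FaceLeft"]

def is_head_shake(face_pose_sequence):
    it = iter(face_pose_sequence)   # require the transitions in this order
    return all(state in it for state in HEAD_SHAKE)

 A call such as activate_static_gestures({"face": "FrontalFace", "left_hand": "OpenPalm", "right_hand": "OpenPalm"}) would report "TwoHandsUp", mirroring how a gesture frame fires once all of its pose-component frames are active.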
 This work also proposes an adaptation method for new users, hand poses, gestures, and robot behaviors that combines computer vision and knowledge-based approaches. In this method, one or more clusters are built to learn a new user or a new pose. A new robot behavior can be learned by generalizing over multiple occurrences of the same gesture with minimal user interaction. An experimental human-robot interaction system comprising the entertainment robot ‘Aibo’ and the humanoid robot ‘Robovie’ has demonstrated the effectiveness of the proposed method. In this research, we also compare three pattern-matching approaches for face and hand pose classification: the general PCA, person-specific subspace, and pose-specific subspace methods. In the pose-specific subspace method, training images are grouped by pose, and eigenvectors are generated separately for each pose. In the person-specific subspace method, hand pose images are grouped by person, and one PCA is used for each person.
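 As an illustrative sketch of the pose-specific subspace method, the fragment below groups training images by pose, builds one eigen-subspace per pose, and classifies a test image by minimum reconstruction error; the subspace dimension is an assumption, not a value taken from the thesis:

import numpy as np

def build_subspace(images, k=10):
    # images: (n_samples, d) flattened pose images; keep the top-k eigenvectors
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(x, mean, basis):
    c = basis @ (x - mean)                     # project onto the subspace
    return np.linalg.norm(x - (mean + basis.T @ c))

def classify_pose(x, subspaces):
    # subspaces: {pose_name: (mean, basis)} built from pose-grouped images
    return min(subspaces, key=lambda p: reconstruction_error(x, *subspaces[p]))

 The person-specific variant follows the same pattern with images grouped by person instead of by pose, so that each user's hand poses are matched against that user's own subspace.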
 This dissertation makes three major contributions to the field of human-robot symbiotic systems. First, the proposed system utilizes user-specific knowledge for person-centric gesture interpretation, which supports cultural variation in gesture meanings. The knowledge-based approach allows users to define or edit gesture and robot-behavior frames easily using the SPAK Knowledge Editor. For a known person, the segmented skin regions contain less noise because person-centric threshold values are used for the YIQ components. The person-specific subspace method performs better than the general PCA method for face and hand pose classification in the same environment.
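 A minimal sketch of such person-centric skin segmentation is given below; the RGB-to-YIQ conversion matrix is the standard NTSC one, while the threshold ranges are hypothetical placeholders for the per-user values the system would retrieve after identifying the user:

import numpy as np

RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def skin_mask(rgb_image, thresholds):
    # Convert to YIQ and keep pixels inside the user's Y and I ranges.
    yiq = rgb_image.astype(float) @ RGB_TO_YIQ.T
    (y_lo, y_hi), (i_lo, i_hi) = thresholds["Y"], thresholds["I"]
    return ((yiq[..., 0] >= y_lo) & (yiq[..., 0] <= y_hi) &
            (yiq[..., 1] >= i_lo) & (yiq[..., 1] <= i_hi))

user_thresholds = {"Y": (80, 230), "I": (10, 90)}   # hypothetical per-user values
mask = skin_mask(np.random.randint(0, 256, (120, 160, 3)), user_thresholds)

 The three largest connected components of such a mask would then serve as the face and hand candidates passed to the pose classifier.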
 Second, this thesis proposes a new user and hand pose adaptation method based on multi-cluster incremental learning, which accommodates different orientations of a person's face images and different orientations of the same hand pose.
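 The core of this idea can be sketched as follows; the Euclidean distance metric and the threshold value are assumptions chosen for illustration:

import numpy as np

class MultiClusterModel:
    # Each user or hand pose is represented by one or more clusters; a sample
    # far from every existing cluster of its class seeds a new cluster
    # (e.g. a previously unseen face orientation).
    def __init__(self, new_cluster_threshold=50.0):
        self.clusters = {}                        # label -> list of [centroid, count]
        self.threshold = new_cluster_threshold

    def learn(self, label, sample):
        sample = np.asarray(sample, dtype=float)
        clusters = self.clusters.setdefault(label, [])
        if clusters:
            dists = [np.linalg.norm(sample - c) for c, _ in clusters]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:         # close enough: refine cluster
                c, n = clusters[i]
                clusters[i] = [(c * n + sample) / (n + 1), n + 1]
                return
        clusters.append([sample, 1])              # otherwise start a new cluster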
 Third, this work combines several system components into an integrated system by means of the knowledge-based software platform SPAK. The system components include a vision-based face and gesture recognizer, a knowledge manager, and various robot software and hardware components. We have demonstrated that it is simple and intuitive to develop a natural human-robot interactive system by integrating computer vision with a knowledge-based platform in an Internet-based distributed environment.
 The ability to understand hand gestures and their person-centric meanings will improve the naturalness of human interaction with robots and allow users to communicate complex tasks without tedious sets of detailed instructions. We believe that the integration of vision- and knowledge-based approaches is a major step toward achieving intelligent and natural human-robot interaction.
","subitem_description_type":"Other"}]},"item_1_description_7":{"attribute_name":"学位記番号","attribute_value_mlt":[{"subitem_description":"総研大甲第952号","subitem_description_type":"Other"}]},"item_1_select_14":{"attribute_name":"所蔵","attribute_value_mlt":[{"subitem_select_item":"有"}]},"item_1_select_8":{"attribute_name":"研究科","attribute_value_mlt":[{"subitem_select_item":"複合科学研究科"}]},"item_1_select_9":{"attribute_name":"専攻","attribute_value_mlt":[{"subitem_select_item":"17 情報学専攻"}]},"item_1_text_10":{"attribute_name":"学位授与年度","attribute_value_mlt":[{"subitem_text_value":"2005"}]},"item_creator":{"attribute_name":"著者","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"HASANUZZAMAN, Md. ","creatorNameLang":"en"}],"nameIdentifiers":[{}]}]},"item_files":{"attribute_name":"ファイル情報","attribute_type":"file","attribute_value_mlt":[{"accessrole":"open_date","date":[{"dateType":"Available","dateValue":"2016-02-17"}],"displaytype":"simple","filename":"甲952_要旨.pdf","filesize":[{"value":"309.1 kB"}],"format":"application/pdf","licensetype":"license_11","mimetype":"application/pdf","url":{"label":"要旨・審査要旨","url":"https://ir.soken.ac.jp/record/848/files/甲952_要旨.pdf"},"version_id":"19fda9e0-18a6-435c-a6ac-b237b71b1251"}]},"item_language":{"attribute_name":"言語","attribute_value_mlt":[{"subitem_language":"eng"}]},"item_resource_type":{"attribute_name":"資源タイプ","attribute_value_mlt":[{"resourcetype":"thesis","resourceuri":"http://purl.org/coar/resource_type/c_46ec"}]},"item_title":"Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction","item_titles":{"attribute_name":"タイトル","attribute_value_mlt":[{"subitem_title":"Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction"},{"subitem_title":"Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction","subitem_title_language":"en"}]},"item_type_id":"1","owner":"1","path":["19"],"pubdate":{"attribute_name":"公開日","attribute_value":"2010-02-22"},"publish_date":"2010-02-22","publish_status":"0","recid":"848","relation_version_is_last":true,"title":["Vision and Knowledge-Based Gesture Recognition for Human-Robot Interaction"],"weko_creator_id":"1","weko_shared_id":-1},"updated":"2023-06-20T16:10:20.061596+00:00"}