{"created":"2023-06-20T13:22:30.321598+00:00","id":3143,"links":{},"metadata":{"_buckets":{"deposit":"db360c33-8efb-4870-8dfa-0e0dc891b8d6"},"_deposit":{"created_by":21,"id":"3143","owners":[21],"pid":{"revision_id":0,"type":"depid","value":"3143"},"status":"published"},"_oai":{"id":"oai:ir.soken.ac.jp:00003143","sets":["2:429:19"]},"author_link":["228","229","227"],"item_1_creator_2":{"attribute_name":"著者名","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"高橋, 正樹"}],"nameIdentifiers":[{}]}]},"item_1_creator_3":{"attribute_name":"フリガナ","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"タカハシ, マサキ"}],"nameIdentifiers":[{}]}]},"item_1_date_granted_11":{"attribute_name":"学位授与年月日","attribute_value_mlt":[{"subitem_dategranted":"2012-03-23"}]},"item_1_degree_grantor_5":{"attribute_name":"学位授与機関","attribute_value_mlt":[{"subitem_degreegrantor":[{"subitem_degreegrantor_name":"総合研究大学院大学"}]}]},"item_1_degree_name_6":{"attribute_name":"学位名","attribute_value_mlt":[{"subitem_degreename":"博士(情報学)"}]},"item_1_description_12":{"attribute_name":"要旨","attribute_value_mlt":[{"subitem_description":"   近年では,個人が所有するPCやTVにもカメラが搭載されるようになり,映像を用いたコミュニケーションが一般的となっている.またインターネット上には大量の映像が溢れ,様々な映像を誰もが視聴できる環境が整っている.街中にはいたるところに監視カメラが設置され,屋内外の映像を常時記録している.このように,現代では映像が様々な形で日常生活の中に浸透しており,簡単にアクセス可能な身近なメディアとなっている.\r\n   本論文はこのような実環境における映像を対象とした,人物動作認識技術の高度化を提案するものである.生活環境への映像の普及に伴い,ユーザ動作による機器操作,特定動作をクエリとした映像検索など,人物動作認識技術に対する期待は高まっており,その需要も多岐にわたる.しかし実環境における映像は撮影条件や被写体動作が多様であり,安定した解析が困難であることが多い.本研究はこれらの課題を確認するとともに,頑健な人物動作認識技術の確立を目指すものである.あわせて,人物動作から動作者の意図や内部状態までも理解することを目指した.\r\n   第1章では本研究の背景として,日常生活における人物動作認識への社会的ニーズについて述べる.また人物動作の多様性に対処するため,意図の強さに基づく人物動作の分類を行う.分類した各動作の認識技術に対するニーズを確認するとともに,その実現に向けた課題を検討する.最後に,映像解析による一般行動認識技術についてまとめ,関連技術の現状を確認する.\r\n   
第2章では意思伝達動作であるジェスチャに焦点をあて,その認識手法について検討する.近い将来の大画面・高解像度のTV視聴環境では,リモコンに代わる新たなマンマシンインタフェースが求められている.中でも映像解析による人物ジェスチャ認識は接触型デバイスが不要であり,次世代TVの特徴である没入感を損なうことなく操作できるため,新たなインタフェースとして期待を集めている.またジェスチャによる操作は,映像コンテンツ内オブジェクトとの自然なインタラクションを実現するうえでも有効である.本章では,はじめに次世代TV視聴環境でのジェスチャ認識における要件を考察する.続いて,ジェスチャ認識における先行研究を紹介し,本章での研究目的を明確にする.具体的には,次世代TV視聴環境での対話型ジェスチャ認識の実現へ向け,奥行き情報の利用方法について検討する.またユーザの自然な動作の認識を実現するため,長期の時間情報を含む画像特徴を検討する.続いて,これらの検討を踏まえた新たなジェスチャ認識手法を提案する.最後に,様々な実験を通して提案手法を評価し,次世代TV視聴環境でのインタフェースとしての有効性を確認するとともに,今後の拡張性を考察する.\r\n   第3章では人物の一般行動に焦点をあてる.人混みで混雑した実環境での監視映像を対象とし,混雑映像から一般行動を頑健に認識する手法を提案する.現代では屋内・屋外問わず監視カメラが普及しているが,その映像の確認はほとんどの場合人間によって行われている.膨大な映像量に対する作業者の数は少なく,非効率な監視を余儀なくされている.またそのほとんどは映像の事後確認にとどまり,犯罪の未然防止や直後の検出には活かされていない.そのため,不特定多数の人物行動を自動認識する技術への期待が高まっている. 本章ではまず混雑映像の解析における問題点を列挙し,その課題を確認する.続いて一般行動認識に関する先行研究を紹介し,関連技術の現状について述べる.そして広域特徴に基づく手法,局所特徴に基づく2つの手法について検討する.前者は,人物領域検出の結果に基づき人物の軌跡からその行動を認識する手法である.後者は,特徴点軌跡に基づき多数の軌跡特徴から人物行動を認識する手法である.2手法の比較を通し,実環境で有効に機能する人物行動認識手法を検討する.最後に,独自の様々な実験による評価と,映像検索に関する国際的評価型ワークショップTRECVID Surveillance Event Detectionタスクへの参加を通し,提案手法の有効性を確認するとともに,実用化へ向けた課題を確認する.\r\n   第4章では,TV視聴者の個人的趣味・嗜好を理解するため,映像視聴中のユーザの筋運動系情動から内部状態(注目度)を推定する手法を検討する.ユーザの内部状態は情動として一部身体に表出すると考えられるが,その動作は微小であり,正確に計測することは難しい.さらに計測した情動動作がそのまま内部状態を表しているとは限らない.たとえば映像コンテンツ内特徴など,外的要因も考慮する必要がある.これらの課題により,可視情報からの内部状態推定は一般に困難とされてきた.本章では,はじめに脳科学や心理学など,他分野研究を含めた人物の内部状態推定に関する先行研究を紹介する.次にユーザが注目状態にあるときに表出する情動動作を検討し,それら動作と注目度に関する仮説を示す.目視正解データによる仮説の検証の後,各情動特徴を自動取得する手法を提案する.最後に,自動計測した情動特徴から注目度を算出する注目度推定器を作成し,その性能を評価するとともに,提案手法の将来性を考察する.\r\n   第5章では,本論文の成果をまとめる.本論文の成果は,実環境における人物動作認識のニーズとその実現へ向けた課題を確認し,各ニーズにおける人物動作認識手法を提案するとともに,その有効性を検証したことにある.さらに意図により動作を分類し,各段階での知見を活用しながらより微小動作の認識手法を提案し,動作者の意図を理解する手段を示したことにある.特に無意図的動作と呼ばれる情動動作から人物の潜在的興味を推定する可能性を示せたことは,大きな成果であると考える.従って本研究は,家庭における新たなマンマシンインタフェースや個人プロファイル推定,監視カメラをはじめとする各種映像における人物行動検出など,実生活環境で利用可能な人物動作理解技術の確立へ向け,大きな貢献をしたと考える.\r\n\r\n\r\nAbstract\r\n\r\nVideos have become in various ways a part of daily life and are now an easily accessible form of 
media. Many personally owned PCs and TVs are now equipped with a camera, leading to the wide use of videos in communication. The Internet is also teeming with vast amounts of video that is widely accessible to anyone. In many cities, surveillance cameras are installed in various places to constantly record videos inside and outside buildings. \r\nThis thesis proposes advances in technologies for recognizing human motion in videos taken in actual environments. With the wide use of videos in daily life, there has been an increasing demand for human motion recognition technologies, such as for operating devices through user movements and for searching videos using specific movements as queries. Videos from actual environments, however, are difficult to analyze reliably because of variations in shooting conditions and subject movements. This study thus examined these issues and aimed to establish robust human motion recognition technologies, as well as to understand the actor’s intentions and internal states from human motion.\r\nChapter 1 provides the background of the research, describing society’s needs for human motion recognition technologies in daily life. To address the diversity of human motions, the chapter classifies them by strength of intention. The technologies needed for recognizing motions in each category are then discussed, and the issues involved in realizing them are examined. The chapter concludes with a discussion of general behavior recognition technologies based on video analysis and the current state of related technologies.\r\nChapter 2 focuses on gestures, which serve as a medium of communication, and discusses methods for recognizing them. Recognizing gestures is important in light of the demand for a man-machine interface that will replace remote controls in large-screen, high-definition TV viewing in the near future. 
Since touch-type devices are not needed and operations can be performed without losing the immersive quality characteristic of next-generation TV, human gesture recognition technologies based on image analysis are attracting wide attention as a new form of interface. Gesture-based operations also provide a natural means of interacting with objects within video content. The chapter begins with a discussion of the requirements for gesture recognition in next-generation TV viewing. Next, it introduces previous research on gesture recognition and outlines the objectives of the research discussed in the chapter. In particular, the research examines ways of using depth information to realize interactive gesture recognition in next-generation TV viewing environments. It also examines image features that include long-term temporal information to enable recognition of the user’s natural movements. A new gesture recognition method is then proposed on the basis of these studies. Finally, the effectiveness of the proposed method as an interface for next-generation TV viewing environments is evaluated through various experiments, and its future extensibility is discussed.\r\nChapter 3 focuses on general human behavior. The chapter proposes a robust method for recognizing general behavior in surveillance videos of crowded real-world scenes. Although outdoor and indoor surveillance cameras have become widespread, the footage is usually checked manually. Since there are only a few operators relative to the large amount of video data that needs to be processed, surveillance cannot be carried out effectively. In addition, since these videos are mostly reviewed only after the fact, they contribute neither to crime prevention nor to detection at the time of occurrence. 
Thus, there is a growing demand for technology that automatically recognizes the behavior of unspecified numbers of people. This chapter enumerates the problems and discusses the issues involved in analyzing footage of people in crowded areas. It also reviews previous research on recognition of general human behavior and provides an overview of related technologies. It then discusses two methods for recognizing human behavior, namely, a global-feature-based method and a local-feature-based method. The former is based on detection of human regions, wherein behavior is recognized from the trajectory of each human region. The latter is based on feature-point trajectories, wherein behaviors occurring within the video are recognized from multiple trajectory features. Through a comparison of these two methods, a human behavior recognition method that functions effectively in actual environments is identified. The chapter concludes with a discussion of issues related to practical application of the proposed method as well as its effectiveness as evaluated through various original experiments and through participation in the TRECVID Surveillance Event Detection task, an international evaluation workshop for event detection in video surveillance.\r\nChapter 4 discusses methods for estimating the internal state (attentiveness) of a user viewing video on the basis of the viewer’s emotions expressed through small muscular movements. Generally, the user’s internal state is partly expressed through his or her body as emotional behavior. These movements, however, are minute and difficult to measure accurately. In addition, they are not necessarily straightforward expressions of the user’s internal state, making it important to consider external factors such as content features of the video. 
As such, inferring internal states from visual information has generally been considered difficult. The chapter begins with an introduction of previous research on the inference of human internal states, including research in the fields of neuroscience and psychology. Next, it discusses emotional behaviors that are observed when users are paying attention and presents hypotheses regarding the relationship between these behaviors and user attentiveness. After verifying these hypotheses against manually annotated ground-truth data, a method for automatically extracting the different emotional features is proposed. Lastly, the chapter describes the construction of an attentiveness estimator that operates on the automatically collected emotional features, evaluates its performance, and discusses the future potential of the proposed method.\r\nChapter 5 gives a summary of what has been achieved through the study, namely, the identification of the needs pertaining to human motion recognition in actual environments and of the issues involved in making such technologies possible, the proposal of methods for human motion recognition that address these different needs, and the verification of the effectiveness of these proposed methods. In addition, the study classified motions by strength of intention and, building on the findings at each stage, proposed methods for recognizing increasingly subtle motions and thereby understanding the actor’s intention. In particular, showing that it is possible to infer a person’s latent interests from unintentional emotional behavior is considered a major achievement of the study. 
The study, therefore, has made a significant contribution to the development of human motion recognition technologies that are usable in real-life environments, such as for creating new man-machine interfaces in the home, for personal profiling, and for detection of human behavior in surveillance videos and other images.","subitem_description_type":"Other"}]},"item_1_description_18":{"attribute_name":"フォーマット","attribute_value_mlt":[{"subitem_description":"application/pdf","subitem_description_type":"Other"}]},"item_1_description_7":{"attribute_name":"学位記番号","attribute_value_mlt":[{"subitem_description":"総研大甲第1516号","subitem_description_type":"Other"}]},"item_1_select_14":{"attribute_name":"所蔵","attribute_value_mlt":[{"subitem_select_item":"有"}]},"item_1_select_8":{"attribute_name":"研究科","attribute_value_mlt":[{"subitem_select_item":"複合科学研究科"}]},"item_1_select_9":{"attribute_name":"専攻","attribute_value_mlt":[{"subitem_select_item":"17 情報学専攻"}]},"item_1_text_10":{"attribute_name":"学位授与年度","attribute_value_mlt":[{"subitem_text_value":"2011"}]},"item_creator":{"attribute_name":"著者","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"TAKAHASHI, Masaki","creatorNameLang":"en"}],"nameIdentifiers":[{}]}]},"item_files":{"attribute_name":"ファイル情報","attribute_type":"file","attribute_value_mlt":[{"accessrole":"open_date","date":[{"dateType":"Available","dateValue":"2016-02-17"}],"displaytype":"simple","filename":"甲1516_要旨.pdf","filesize":[{"value":"366.6 kB"}],"format":"application/pdf","licensetype":"license_11","mimetype":"application/pdf","url":{"label":"要旨・審査要旨","url":"https://ir.soken.ac.jp/record/3143/files/甲1516_要旨.pdf"},"version_id":"c98b442c-e135-4e34-aed0-5177ea4d4e1c"},{"accessrole":"open_date","date":[{"dateType":"Available","dateValue":"2016-02-17"}],"displaytype":"simple","filename":"甲1516_本文.pdf","filesize":[{"value":"6.6 
MB"}],"format":"application/pdf","licensetype":"license_11","mimetype":"application/pdf","url":{"label":"本文","url":"https://ir.soken.ac.jp/record/3143/files/甲1516_本文.pdf"},"version_id":"7df6ea1e-5a5e-46d7-871d-d3b3497652a9"}]},"item_language":{"attribute_name":"言語","attribute_value_mlt":[{"subitem_language":"jpn"}]},"item_resource_type":{"attribute_name":"資源タイプ","attribute_value_mlt":[{"resourcetype":"thesis","resourceuri":"http://purl.org/coar/resource_type/c_46ec"}]},"item_title":"映像解析による人物動作理解に関する研究","item_titles":{"attribute_name":"タイトル","attribute_value_mlt":[{"subitem_title":"映像解析による人物動作理解に関する研究"}]},"item_type_id":"1","owner":"21","path":["19"],"pubdate":{"attribute_name":"公開日","attribute_value":"2012-09-14"},"publish_date":"2012-09-14","publish_status":"0","recid":"3143","relation_version_is_last":true,"title":["映像解析による人物動作理解に関する研究"],"weko_creator_id":"21","weko_shared_id":-1},"updated":"2023-06-20T15:37:38.379692+00:00"}