
Please use this identifier to cite or link to this item: http://ntour.ntou.edu.tw:8080/ir/handle/987654321/35763

Title: 使用區塊貼片為基礎之聯想記憶體模型於視訊行為偵測及辨識
Patch-Based Video Action Detection And Recognition Using an Associate Memory Model
Authors: Guan-Yu Chen
陳冠宇
Contributors: NTOU: Department of Computer Science and Engineering (National Taiwan Ocean University)
Keywords: Action shape; Generalized Hough transform; Human action detection; Association memory
Date: 2012
Issue Date: 2013-10-07T02:58:51Z
Abstract: This paper presents a novel approach that simultaneously locates action objects in video and recognizes their action types using an association memory model. A preprocessing step extracts key-frames from the video sequence to obtain a compact representation. Each key-frame is partitioned into multiple overlapping patches, from which appearance and motion features are extracted to generate a visual codebook VI and a motion codebook VM. The codewords in VI and VM serve as primitive features for building high-level, space-time models of action objects. We then adopt the recently developed Hough voting model as the architecture for human action learning and memory. For each key-frame, the framework employs the Generalized Hough Transform (GHT), which constructs a graphical structure over key-frame codewords to learn the mapping between action objects and a Hough space. To determine which patches actually belong to an action object, an association memory model is applied: it clusters patches using both the spatial (appearance) and temporal (motion) features of the member patches. This work also addresses the main drawbacks of the Hough voting framework, namely its high computational complexity, substantial user interaction, and reliance on a small number of training shapes. In the training phase, the user labels only the object location in the first key-frame; an automatic procedure then generates a Hough shape model adapted to all aligned training 2D shapes by incorporating the shape variability of the whole training set. In the testing phase, a probabilistic voting framework matches the learned shape models against the frames of a test video to locate the target in each frame and recognize the action category of the video.
The generated Hough shape models are invariant not only to geometric transformations, i.e., scaling, rotation, and translation, but also to temporal scaling. The proposed algorithm yields Hough images with strong responses at the action centers and few false peaks. Experimental results show that the proposed method achieves good detection accuracy and recognition rates on several publicly available datasets.
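The GHT voting step described in the abstract can be illustrated with a minimal sketch: each codeword observed in training stores the offsets from its patch location to the labeled object center (an R-table), and at test time every matched codeword casts normalized votes for the center in a 2D Hough accumulator. The data layout, function names, and vote weighting below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def build_r_table(training_patches):
    """Map each codeword id to the offsets (dx, dy) from the patch
    location to the labeled object center seen during training.
    training_patches: iterable of (codeword_id, (px, py), (cx, cy))."""
    r_table = {}
    for codeword_id, (px, py), (cx, cy) in training_patches:
        r_table.setdefault(codeword_id, []).append((cx - px, cy - py))
    return r_table

def hough_vote(test_patches, r_table, frame_shape):
    """Accumulate probabilistic votes for the object center.
    Each matched codeword spreads a unit vote over its stored offsets."""
    acc = np.zeros(frame_shape)
    for codeword_id, (x, y) in test_patches:
        offsets = r_table.get(codeword_id, [])
        for dx, dy in offsets:
            cx, cy = x + dx, y + dy
            if 0 <= cy < frame_shape[0] and 0 <= cx < frame_shape[1]:
                acc[cy, cx] += 1.0 / len(offsets)  # normalized vote weight
    return acc

# Toy usage: one codeword trained with center offset (5, 5); a test patch
# at (20, 20) therefore votes for a center at (25, 25).
r_table = build_r_table([(0, (10, 10), (15, 15))])
votes = hough_vote([(0, (20, 20))], r_table, (40, 40))
peak = np.unravel_index(votes.argmax(), votes.shape)  # (row, col) of maximum
```

In a full pipeline the accumulator peaks would be detected per key-frame and per action class, with the association memory model first filtering which patches are allowed to vote.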
URI: http://ethesys.lib.ntou.edu.tw/cdrfb3/record/#G0019957028
http://ntour.ntou.edu.tw/handle/987654321/35763
Appears in Collections: [Department of Computer Science and Engineering] Master's and Doctoral Theses

Files in This Item:

File: index.html | Size: 0Kb | Format: HTML


All items in NTOUR are protected by copyright, with all rights reserved.

 

