
Please use this identifier to cite or link to this item: http://ntour.ntou.edu.tw:8080/ir/handle/987654321/50865

Title: GHT-based associative memory learning and its application to human action detection and classification
Authors: Shyi-Chyi Cheng
Kwang-Yu Cheng
Yi-Ping Phoebe Chen
Contributors: National Taiwan Ocean University, Department of Computer Science and Engineering
Keywords: Action object shapes
Generalized Hough transform
Associative memory
Hypergraph
Human action detection and recognition
Date: 2013-04
Issue Date: 2018-10-29T02:41:18Z
Publisher: Pattern Recognition
Abstract: This paper, pursuing the goal of human-level synthetic intelligence, presents a novel approach to learning an associative memory model using the Generalized Hough Transform (GHT) [1]. A human action detection and classification system is also constructed to verify the effectiveness of the proposed GHT-based associative memory model. Existing human action classification systems use machine learning architectures and low-level features to characterize a specific human action. However, existing machine learning architectures often lack restructuring capability, which is an important process in forming the conceptual structures of human-level synthetic intelligence. The gap between low-level features and high-level human intelligence also degrades the performance of existing human action recognition algorithms when the spatial–temporal boundaries of action objects are ambiguous. To eliminate the side effect of temporal ambiguity, the proposed system uses a preprocessing procedure to extract key-frames from a video sequence and provide a compact representation of the video. The image and motion features of patches extracted from each key-frame are collected and used to train an appearance–motion codebook. The training procedure, based on the learnt codebook and GHT, constructs a hypergraph for associative memory learning. For each key-frame of a test video clip, the Hough voting framework is also used to detect salient segments, which are further partitioned into multiple patches by grouping blocks of similar appearance and motion. The features of the detected patches are used to query the associative memory and retrieve missing patches from key-frames to recall the whole action object. These patches are then used to locate the target action object and classify the action type simultaneously using a probabilistic Hough voting scheme (see the illustrative voting sketch after this record). Results show that the proposed method performs well on several publicly available datasets in terms of detection accuracy and recognition rate.
Relation: 46(11), pp. 3117-3128
URI: http://ntour.ntou.edu.tw:8080/ir/handle/987654321/50865
Appears in Collections: [Department of Computer Science and Engineering] Journal Articles
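
The abstract describes a pipeline in which patches from key-frames are matched against a learnt appearance–motion codebook and cast probabilistic Hough votes for the action object's centre and its action class. The snippet below is a minimal, illustrative sketch of that voting step only, not the authors' implementation: the codebook contents, feature dimension, offset sets, class-probability weights, and hard nearest-codeword matching are all simplified placeholders assumed for demonstration.

import numpy as np

rng = np.random.default_rng(0)

# Toy "appearance-motion" codebook: K codewords, each with a feature vector,
# a few stored offsets to the object centre, and a distribution over action
# classes. In the paper these would be learnt from training key-frames;
# here they are random placeholders.
K, D, N_CLASSES = 64, 32, 3
codebook_features = rng.normal(size=(K, D))
codebook_offsets = rng.integers(-20, 21, size=(K, 4, 2))        # (row, col) offsets
codebook_class_prob = rng.dirichlet(np.ones(N_CLASSES), size=K)

def hough_vote(patch_features, patch_positions, frame_shape):
    """Cast class-weighted votes from matched patches into per-class accumulators."""
    acc = np.zeros((N_CLASSES,) + frame_shape)
    for feat, pos in zip(patch_features, patch_positions):
        # Hard nearest-codeword matching (a real system would use soft matching).
        k = np.argmin(np.linalg.norm(codebook_features - feat, axis=1))
        for off in codebook_offsets[k]:
            r, c = (pos + off).astype(int)
            if 0 <= r < frame_shape[0] and 0 <= c < frame_shape[1]:
                # Each vote is weighted by the codeword's class distribution,
                # so the accumulator peak gives location and class together.
                acc[:, r, c] += codebook_class_prob[k]
    return acc

# Toy query: ten patches detected in one key-frame of a test clip.
frame_shape = (120, 160)
patch_features = rng.normal(size=(10, D))
patch_positions = rng.integers(20, 100, size=(10, 2))

acc = hough_vote(patch_features, patch_positions, frame_shape)
cls, r, c = np.unravel_index(np.argmax(acc), acc.shape)
print(f"predicted action class {cls}, object centre near row {r}, col {c}")

The paper additionally recalls missing patches from the GHT-based associative memory before voting, so occluded or ambiguous parts of the action object still contribute to the accumulator; that recall step is omitted from this sketch.
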

