  1. National Taiwan Ocean University Research Hub
  2. College of Electrical Engineering and Computer Science
  3. Department of Computer Science and Engineering
Please use this Handle URI to cite this item: http://scholars.ntou.edu.tw/handle/123456789/24748
DC Field: Value (Language)

dc.contributor.author: Da-Wei Kuo (en_US)
dc.contributor.author: Guan-Yu Cheng (en_US)
dc.contributor.author: Shyi-Chyi Cheng (en_US)
dc.contributor.author: Su-Ling Lee (en_US)
dc.date.accessioned: 2024-03-15T07:14:11Z
dc.date.available: 2024-03-15T07:14:11Z
dc.date.issued: 2012
dc.identifier.uri: http://scholars.ntou.edu.tw/handle/123456789/24748
dc.description.abstract: This paper presents a novel approach that locates action objects in video and recognizes their action types simultaneously using an associative memory model. A preprocessing procedure extracts key-frames from a video sequence to provide a compact representation of the video. Every training key-frame is partitioned into multiple overlapping patches, from which image and motion features are extracted to generate an appearance-motion codebook. The training procedure also constructs a two-directional associative memory based on the learnt codebook, which helps the system detect and recognize video action events using salient fragments, i.e., patch groups with common motion vectors. Our approach adopts the recently developed Hough voting model as a framework for human action learning and memory. For each key-frame, the Hough voting framework employs the Generalized Hough Transform (GHT), which constructs a graphical structure from key-frame codewords to learn the mapping between action objects and a Hough space. To determine which patches explicitly represent an action object, the system detects salient fragments whose member patches are used to query the associative memory and retrieve matched patches from the Hough model. These model patches then locate the target action object and classify the action type simultaneously using a probabilistic Hough voting scheme. Results show that the proposed method performs well on several publicly available datasets in terms of detection accuracy and recognition rate. (en_US)
dc.language.iso: en_US (en_US)
dc.publisher: IEEE (en_US)
dc.title: Detecting Salient Fragments for Video Human Action Detection and Recognition Using an Associative Memory (en_US)
dc.type: conference paper (en_US)
dc.identifier.doi: 10.1109/ISCIT.2012.6380844
item.fulltext: no fulltext
item.openairecristype: http://purl.org/coar/resource_type/c_5794
item.grantfulltext: none
item.openairetype: conference paper
item.cerifentitytype: Publications
item.languageiso639-1: en_US
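The abstract's final step, a probabilistic Hough voting scheme in which matched patches vote for the object centre, can be sketched as follows. This is a minimal illustration under assumed conventions: the patch coordinates, centre offsets, and vote weights below are invented for the example and are not values or code from the paper.

```python
import numpy as np

def hough_vote(patches, grid_shape):
    """Each matched patch casts a weighted vote for the object centre.

    patches: list of (x, y, dx, dy, weight) tuples, where (x, y) is the
             patch location and (dx, dy) is the centre offset stored with
             the matched codeword. Returns the accumulator peak, i.e. the
             most-voted object centre.
    """
    acc = np.zeros(grid_shape)
    for x, y, dx, dy, w in patches:
        cx, cy = x + dx, y + dy              # predicted object centre
        if 0 <= cx < grid_shape[0] and 0 <= cy < grid_shape[1]:
            acc[cx, cy] += w                 # accumulate the weighted vote
    idx = np.unravel_index(np.argmax(acc), acc.shape)
    return tuple(int(i) for i in idx)

# Three patches agree on centre (5, 5); one low-weight outlier votes elsewhere.
patches = [(3, 4, 2, 1, 0.9), (7, 6, -2, -1, 0.8),
           (5, 2, 0, 3, 0.7), (1, 1, 8, 8, 0.2)]
print(hough_vote(patches, (10, 10)))         # -> (5, 5)
```

In the paper's setting the per-class accumulators would also carry the action label, so the peak yields location and action type together; here a single accumulator shows only the localization step.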
Appears in Collections: Department of Computer Science and Engineering

Items in the IR system, unless otherwise indicated in their copyright terms, are protected by copyright with all rights reserved.
