National Taiwan Ocean University Research Hub
Please use this Handle URI to cite this item: http://scholars.ntou.edu.tw/handle/123456789/6038
DC Field | Value | Language
dc.contributor.author | Chi-Han Chuang | en_US
dc.contributor.author | Shyi-Chyi Cheng | en_US
dc.contributor.author | Chin-Chun Chang | en_US
dc.contributor.author | Yi-Ping Phoebe Chen | en_US
dc.date.accessioned | 2020-11-19T11:56:35Z | -
dc.date.available | 2020-11-19T11:56:35Z | -
dc.date.issued | 2014-07 | -
dc.identifier.issn | 1047-3203 | -
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/6038 | -
dc.description.abstract | For a variety of applications such as video surveillance and event annotation, the spatial–temporal boundaries between video objects are required for annotating visual content with high-level semantics. In this paper, we define spatial–temporal sampling as a unified process of extracting video objects and computing their spatial–temporal boundaries using a learnt video object model. We first provide a computational approach for learning an optimal key-object codebook sequence from a set of training video clips to characterize the semantics of the detected video objects. Then, dynamic programming with the learnt codebook sequence is used to locate the video objects with spatial–temporal boundaries in a test video clip. To verify the performance of the proposed method, a human action detection and recognition system is constructed. Experimental results show that the proposed method gives good performance on several publicly available datasets in terms of detection accuracy and recognition rate. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Journal of Visual Communication and Image Representation | en_US
dc.subject | Semantic video objects | en_US
dc.subject | Spatial–temporal sampling | en_US
dc.subject | Human action detection | en_US
dc.subject | Video object model | en_US
dc.subject | Dynamic programming | en_US
dc.subject | Multiple alignment | en_US
dc.subject | Model-based tracking | en_US
dc.subject | Video object detection | en_US
dc.title | Model-based approach to spatial-temporal sampling of video clips for video object detection by classification | en_US
dc.type | journal article | en_US
dc.identifier.doi | 10.1016/j.jvcir.2014.02.014 | -
dc.identifier.isi | WOS:000336891200029 | -
dc.relation.journalvolume | 25 | en_US
dc.relation.journalissue | 5 | en_US
dc.relation.pages | 1018-1030 | en_US
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.cerifentitytype | Publications | -
item.languageiso639-1 | en | -
item.fulltext | no fulltext | -
item.grantfulltext | none | -
item.openairetype | journal article | -
crisitem.author.dept | College of Electrical Engineering and Computer Science | -
crisitem.author.dept | Department of Computer Science and Engineering | -
crisitem.author.dept | National Taiwan Ocean University, NTOU | -
crisitem.author.dept | College of Electrical Engineering and Computer Science | -
crisitem.author.dept | Department of Computer Science and Engineering | -
crisitem.author.dept | National Taiwan Ocean University, NTOU | -
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | -
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | -
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | -
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | -
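The abstract's core step, dynamic programming with a learnt codeword sequence to locate temporal object boundaries, can be sketched as a DTW-style monotone alignment. This is a hypothetical illustration only, not the paper's implementation: the function name `align_codebook_to_clip`, the feature representation, and the Euclidean distance measure are all assumptions.

```python
import numpy as np

def align_codebook_to_clip(frames, codewords):
    """DP alignment of a learnt key-object codeword sequence to a clip.

    frames    : (T, d) array of per-frame feature vectors
    codewords : (K, d) array, the learnt codeword sequence in order (K <= T)
    Returns (cost, segments), where segments lists (codeword index,
    first frame, last frame), i.e. the recovered temporal boundaries.
    """
    T, K = len(frames), len(codewords)
    # Pairwise distance between every frame and every codeword.
    cost = np.linalg.norm(frames[:, None, :] - codewords[None, :, :], axis=2)
    # D[t, k] = best cumulative cost aligning frames[0..t] to codewords[0..k].
    D = np.full((T, K), np.inf)
    back = np.zeros((T, K), dtype=int)  # 0: stay on codeword k, 1: advance from k-1
    D[0, 0] = cost[0, 0]
    for t in range(1, T):
        D[t, 0] = D[t - 1, 0] + cost[t, 0]
        for k in range(1, K):
            stay, adv = D[t - 1, k], D[t - 1, k - 1]
            if adv < stay:
                D[t, k], back[t, k] = adv + cost[t, k], 1
            else:
                D[t, k], back[t, k] = stay + cost[t, k], 0
    # Backtrack to recover the contiguous frame interval each codeword covers.
    segments, k, end = [], K - 1, T - 1
    for t in range(T - 1, 0, -1):
        if back[t, k]:
            segments.append((k, t, end))
            end, k = t - 1, k - 1
    segments.append((0, 0, end))
    return D[T - 1, K - 1], segments[::-1]
```

The monotone-path constraint makes each codeword cover one contiguous run of frames, so the segment endpoints double as the temporal boundaries the abstract describes.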
Appears in Collections: Department of Computer Science and Engineering
Show simple item record

WEB OF SCIENCE™ Citations: 10 (last week: 0, last month: 0), checked on 2023/6/27

Page view(s): 217 (last week: 0, last month: 0), checked on 2025/6/30

All items in the IR are protected by copyright, with all rights reserved, unless otherwise indicated.

DSpace-CRIS Software Copyright © 2002- Duraspace. Extension maintained and optimized by 4Science and NTU Library.