http://scholars.ntou.edu.tw/handle/123456789/6038
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chi-Han Chuang | en_US |
dc.contributor.author | Shyi-Chyi Cheng | en_US |
dc.contributor.author | Chin-Chun Chang | en_US |
dc.contributor.author | Yi-Ping Phoebe Chen | en_US |
dc.date.accessioned | 2020-11-19T11:56:35Z | - |
dc.date.available | 2020-11-19T11:56:35Z | - |
dc.date.issued | 2014-07 | - |
dc.identifier.issn | 1047-3203 | - |
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/6038 | - |
dc.description.abstract | For a variety of applications such as video surveillance and event annotation, the spatial–temporal boundaries between video objects are required for annotating visual content with high-level semantics. In this paper, we define spatial–temporal sampling as a unified process of extracting video objects and computing their spatial–temporal boundaries using a learnt video object model. We first provide a computational approach for learning an optimal key-object codebook sequence from a set of training video clips to characterize the semantics of the detected video objects. Then, dynamic programming with the learnt codebook sequence is used to locate the video objects with spatial–temporal boundaries in a test video clip. To verify the performance of the proposed method, a human action detection and recognition system is constructed. Experimental results show that the proposed method gives good performance on several publicly available datasets in terms of detection accuracy and recognition rate. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Journal of Visual Communication and Image Representation | en_US |
dc.subject | Semantic video objects | en_US |
dc.subject | Spatial–temporal sampling | en_US |
dc.subject | Human action detection | en_US |
dc.subject | Video object model | en_US |
dc.subject | Dynamic programming | en_US |
dc.subject | Multiple alignment | en_US |
dc.subject | Model-based tracking | en_US |
dc.subject | Video object detection | en_US |
dc.title | Model-based approach to spatial-temporal sampling of video clips for video object detection by classification | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.1016/j.jvcir.2014.02.014 | - |
dc.identifier.isi | WOS:000336891200029 | - |
dc.relation.journalvolume | 25 | en_US |
dc.relation.journalissue | 5 | en_US |
dc.relation.pages | 1018-1030 | en_US |
item.cerifentitytype | Publications | - |
item.openairetype | journal article | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.fulltext | no fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Computer Science and Engineering | - |
crisitem.author.dept | National Taiwan Ocean University, NTOU | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Computer Science and Engineering | - |
crisitem.author.dept | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
Appears in Collections: | Department of Computer Science and Engineering
All items in the IR are protected by copyright, with all rights reserved, unless their copyright terms state otherwise.
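
The abstract above outlines two computational steps: learning a key-object codebook sequence from training clips, and then using dynamic programming with that learnt sequence to locate video objects with spatial–temporal boundaries in a test clip. The Python sketch below illustrates only the second step under simplifying assumptions; the per-frame features, the distance measure, the toy codebook, and the function `align_codebook_to_frames` are hypothetical placeholders for illustration, not the authors' implementation (see the paper at DOI 10.1016/j.jvcir.2014.02.014 for the actual model).

```python
# Illustrative sketch only: monotonically align a learnt key-object codebook
# sequence to the frames of a test clip with dynamic programming, recovering
# temporal segment boundaries. Features, distances, and the codebook are
# stand-ins, not the paper's actual video object model.
import numpy as np

def align_codebook_to_frames(frame_feats: np.ndarray, codebook: np.ndarray):
    """Assign each frame to one codebook entry, in order (hypothetical helper).

    frame_feats : (T, D) per-frame feature vectors of the test clip
    codebook    : (K, D) learnt key-object codebook sequence, K <= T
    Returns the minimal total matching cost and K+1 temporal cut points.
    """
    T, K = len(frame_feats), len(codebook)
    # cost[t, k] = distance between frame t and codebook entry k
    cost = np.linalg.norm(frame_feats[:, None, :] - codebook[None, :, :], axis=2)

    # dp[t, k] = best cost of explaining frames 0..t with codebook entries 0..k
    dp = np.full((T, K), np.inf)
    back = np.zeros((T, K), dtype=int)   # 0 = stay in entry k, 1 = advance from k-1
    dp[0, 0] = cost[0, 0]
    for t in range(1, T):
        for k in range(K):
            stay = dp[t - 1, k]
            advance = dp[t - 1, k - 1] if k > 0 else np.inf
            if advance < stay:
                dp[t, k], back[t, k] = advance + cost[t, k], 1
            else:
                dp[t, k], back[t, k] = stay + cost[t, k], 0

    # Backtrack to recover the start frame of each codebook segment
    segments, k = [T], K - 1
    for t in range(T - 1, 0, -1):
        if back[t, k]:
            segments.append(t)
            k -= 1
    segments.append(0)
    boundaries = segments[::-1]          # segment k spans frames [b[k], b[k+1])
    return dp[-1, -1], boundaries

# Toy usage with random features as stand-ins for real descriptors
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 16))
codebook = frames[[2, 12, 25]]           # pretend these are learnt key objects
score, cuts = align_codebook_to_frames(frames, codebook)
print(score, cuts)
```

This sketch returns only temporal cut points; the method described in the abstract additionally computes spatial boundaries for the detected objects, which is beyond what this minimal alignment attempts.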