Title: Video Object Detection by Classification Using String Kernels
Authors: Wan-Hsuan Yu; Chi-Han Chuang; Shyi-Chyi Cheng
Keywords: video objects; string kernels; dynamic programming; video object modeling; SVM classification
Issue Date: 2013
Start page/Pages: 82-87
Abstract: Video object detection is one of the most important research problems in video event detection, indexing, and retrieval. For applications such as video surveillance and event annotation, the spatial-temporal boundaries between video objects are required to annotate visual content with high-level semantics. In this paper, we define spatial-temporal sampling as a unified process that extracts video objects and computes their spatial-temporal boundaries using a learnt video object model. We first present a learning approach that builds a class-specific video object model from a set of training video clips. The learnt model is then used to locate the video objects, with precise spatial-temporal boundaries, in a test video clip using graph kernels. A frame-sorting preprocessing step is also proposed to transform the graph that models the shot configuration of a video clip into a string of shots, so that the computation of graph kernels reduces to string kernels. These string kernels are then used to train support vector machine (SVM) classifiers from a set of training samples and to detect the video objects in a test video clip by classification. Finally, a human action detection and recognition system is constructed to verify the performance of the proposed method. Experimental results show that the proposed method performs well on several publicly available datasets in terms of detection accuracy and recognition rate.
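The abstract's reduction of graph kernels to string kernels over shot strings can be illustrated with a gap-weighted subsequence kernel computed by dynamic programming. The paper does not specify which string kernel it uses, so this is a sketch under assumptions: the kernel is the standard gap-weighted subsequence kernel, and the shot strings, subsequence length `p`, and decay `lam` below are invented for illustration.

```python
import math

def subsequence_kernel(s, t, p, lam):
    """Gap-weighted subsequence kernel of order p with decay 0 < lam <= 1.

    Counts common (possibly non-contiguous) length-p subsequences of the
    two strings, down-weighting each occurrence by lam per spanned position,
    via the standard O(p * |s| * |t|) dynamic program.
    """
    n, m = len(s), len(t)
    # Kprime[i][j] = K'_q(s[:i], t[:j]); for q = 0 it equals 1 everywhere.
    Kprime = [[1.0] * (m + 1) for _ in range(n + 1)]
    for _ in range(1, p):
        Kpp = [[0.0] * (m + 1) for _ in range(n + 1)]
        Knew = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = lam * lam * Kprime[i - 1][j - 1] if s[i - 1] == t[j - 1] else 0.0
                Kpp[i][j] = lam * Kpp[i][j - 1] + match
                Knew[i][j] = lam * Knew[i - 1][j] + Kpp[i][j]
        Kprime = Knew
    # K_p(s, t): sum lam^2 * K'_{p-1}(prefixes) over matching character pairs.
    k = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                k += lam * lam * Kprime[i - 1][j - 1]
    return k

def normalized_kernel(s, t, p, lam):
    """Length-normalized kernel value, usable as a precomputed Gram entry."""
    denom = math.sqrt(subsequence_kernel(s, s, p, lam) * subsequence_kernel(t, t, p, lam))
    return subsequence_kernel(s, t, p, lam) / denom if denom > 0 else 0.0

if __name__ == "__main__":
    # Hypothetical shot strings: each character stands for a shot-cluster
    # label produced by a frame-sorting step like the one in the abstract.
    shots = ["aabcc", "abcc", "ccbaa"]
    gram = [[normalized_kernel(x, y, p=2, lam=0.8) for y in shots] for x in shots]
    for row in gram:
        print([round(v, 3) for v in row])
```

With scikit-learn, such a normalized Gram matrix could be passed to `SVC(kernel='precomputed')` to obtain the SVM classification stage described above; the labels and kernel parameters here are illustrative assumptions, not values from the paper.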
URI: http://scholars.ntou.edu.tw/handle/123456789/24745
ISBN: 978-1-61208-265-3
ISSN: 2308-4448
Appears in Collections: 資訊工程學系 (Department of Computer Science and Engineering)