Title: Content Aware Image Segmentation for Region-Based Object Retrieval
Authors: Chi-Han Chuang; Chin-Chun Chang; Shyi-Chyi Cheng
Issue Date: 2009
Publisher: IEEE
Abstract: Querying multimedia data by finding an object inside a target image is desirable, yet it remains a challenge. The effectiveness of region-based representation for content-based image retrieval has been studied extensively in the literature. A common weakness of region-based approaches that rely only on regions' low-level visual features is that homogeneous image regions correspond poorly to semantic objects; as a result, the retrieval results are often far from satisfactory. In addition, performance is governed by the consistency of the segmentation of the target object's region in the query and target images. Instead of solving these problems independently, this paper proposes region-based object retrieval using the generalized Hough transform (GHT) and content-aware image segmentation. The proposed approach has two phases. First, the learning phase finds and stores stable parameters for segmenting each database image, and then sorts the database images according to the found segmentation parameters. In the retrieval phase, an incremental image segmentation process based on the stored segmentation parameters segments a query image into regions, and visual objects inside database images are retrieved through the GHT with a modified voting scheme that locates the target visual object under geometric transformation. With the learned parameters for image segmentation, the segmentation results of query and target images are more stable and consistent. Computer simulation results show that the proposed method gives good performance in terms of retrieval accuracy, robustness, and execution speed.
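The GHT voting step mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration of a standard R-table formulation (edge points plus quantized gradient orientations), not the paper's modified voting scheme or its segmentation-parameter learning; the function names build_r_table and ght_vote, the orientation binning, and all parameters are hypothetical.

import numpy as np
from collections import defaultdict

def build_r_table(edge_points, orientations, reference, n_bins=36):
    # Map each quantized edge orientation to the displacement vectors that point
    # from edge pixels of the query region to its reference point.
    r_table = defaultdict(list)
    for (x, y), theta in zip(edge_points, orientations):
        b = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        r_table[b].append((reference[0] - x, reference[1] - y))
    return r_table

def ght_vote(edge_points, orientations, r_table, image_shape, n_bins=36):
    # Each edge pixel of the target image votes for the candidate reference-point
    # locations consistent with its orientation; votes accumulate in a 2-D array.
    h, w = image_shape
    acc = np.zeros((h, w), dtype=np.int32)
    for (x, y), theta in zip(edge_points, orientations):
        b = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in r_table.get(b, ()):
            cx, cy = x + dx, y + dy
            if 0 <= cx < w and 0 <= cy < h:
                acc[cy, cx] += 1
    return acc

# Usage: the accumulator peak gives the most likely object location in the target image.
# r_table = build_r_table(query_edges, query_orients, reference=(rx, ry))
# acc = ght_vote(target_edges, target_orients, r_table, target_gray.shape)
# peak_y, peak_x = np.unravel_index(acc.argmax(), acc.shape)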
URI: http://scholars.ntou.edu.tw/handle/123456789/24759
Appears in Collections: 資訊工程學系 (Department of Computer Science and Engineering)