  1. National Taiwan Ocean University Research Hub
  2. College of Electrical Engineering and Computer Science
  3. Department of Electrical Engineering
Please use this Handle URI to cite this item: http://scholars.ntou.edu.tw/handle/123456789/16953
DC Field: Value [Language]

dc.contributor.author: Ren-Jie Huang [en_US]
dc.contributor.author: Jung-Hua Wang [en_US]
dc.contributor.author: Chun-Shun Tseng [en_US]
dc.date.accessioned: 2021-06-03T04:47:47Z
dc.date.available: 2021-06-03T04:47:47Z
dc.date.issued: 2017-05-01
dc.identifier.uri: http://scholars.ntou.edu.tw/handle/123456789/16953
dc.description.abstract: This paper presents a novel approach for preserving perceptual edges, i.e., boundaries of objects as perceived by human eyes. First, a subset of pixels (pixels of interest, POI) in an input image is selected by a pre-process that removes background and noise. Each POI is then taken in turn as a target pixel and subjected to a Bayesian decision. The approach is characterized by iteratively employing a shape-variable mask to sample the gradient orientations of pixels and measure the directivity of the target pixel; the mask shape is updated after each iteration. We show that a converged mask covers the pixels that best fit the orientation similarity of the target pixel, which in effect fulfills the similarity and proximity principles of Gestalt theory. Subsequently, a Bayesian rule is applied to the converged directivity to determine whether the target pixel belongs to a perceptual edge. Instead of using a state-of-the-art edge detector such as the Canny detector [1], a pre-process combining a Gaussian Mixture Model (GMM) [2] and Difference of Gaussians (DoG) [3] is devised to select the POI: the GMM removes the background of the input image (first screening), while the DoG filters out noisy or false contours (second screening). Experimental results indicate that a great amount of computational load is saved compared with the Canny detector used in our previous work [4].
Since perceptual edges are useful for forming a complete object contour corresponding to human visual perception, the results of this paper can potentially be combined with more advanced object detection methods, such as the deep learning-based SSD [5], to achieve the same effect as the human visual system when dealing with obscured or corrupted input images: even if a target object is occluded by other objects or corrupted by rain, it can still be identified correctly. This feature should greatly enhance the operational safety of unmanned vehicles, unmanned aircraft, and other autonomous systems. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Horizon Research [en_US]
dc.relation.ispartof: Computer Science and Information Technology [en_US]
dc.subject: mask [en_US]
dc.subject: input image [en_US]
dc.subject: Removing the Background [en_US]
dc.subject: Perceptual Edges [en_US]
dc.subject: Pre Process [en_US]
dc.title: Bayesian Approach to Perceptual Edge Preservation in Computer Vision [en_US]
dc.type: journal article [en_US]
dc.identifier.doi: doi:10.13189/csit.2017.050304
dc.relation.journalvolume: 5 [en_US]
dc.relation.pages: 113-119 [en_US]
item.openairecristype: http://purl.org/coar/resource_type/c_6501
item.cerifentitytype: Publications
item.languageiso639-1: en
item.fulltext: no fulltext
item.grantfulltext: none
item.openairetype: journal article
crisitem.author.dept: College of Electrical Engineering and Computer Science
crisitem.author.dept: Department of Electrical Engineering
crisitem.author.dept: National Taiwan Ocean University, NTOU
crisitem.author.parentorg: National Taiwan Ocean University, NTOU
crisitem.author.parentorg: College of Electrical Engineering and Computer Science
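The abstract above describes a two-stage POI pre-process: GMM background removal (first screening) followed by DoG filtering of noisy or false contours (second screening). The following is a minimal pure-Python sketch of the DoG second screening only, not the authors' implementation: the GMM first screening is stood in for by a toy image whose background has already been zeroed, and the sigmas and the threshold are illustrative assumptions rather than values from the paper.

```python
import math

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma):
    """Separable Gaussian blur with replicated (clamped) borders."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    h, w = len(img), len(img[0])
    # Horizontal pass.
    tmp = [[sum(k[i + radius] * row[min(max(x + i, 0), w - 1)]
                for i in range(-radius, radius + 1)) for x in range(w)]
           for row in img]
    # Vertical pass.
    return [[sum(k[i + radius] * tmp[min(max(y + i, 0), h - 1)][x]
                 for i in range(-radius, radius + 1)) for x in range(w)]
            for y in range(h)]

def select_poi(img, sigma1=1.0, sigma2=2.0, thresh=5.0):
    """Second screening: keep pixels whose |DoG| response exceeds a threshold.

    DoG approximates a band-pass filter, so it responds near intensity
    transitions (candidate contours) and stays small on flat regions.
    """
    b1, b2 = blur(img, sigma1), blur(img, sigma2)
    return [(y, x)
            for y in range(len(img))
            for x in range(len(img[0]))
            if abs(b1[y][x] - b2[y][x]) > thresh]

# Toy 12x12 image: a bright square on a background that a hypothetical GMM
# first screening has already zeroed out.
img = [[100.0 if 3 <= y <= 8 and 3 <= x <= 8 else 0.0 for x in range(12)]
       for y in range(12)]
poi = select_poi(img)
```

The resulting `poi` list concentrates around the square's boundary while far-background pixels are rejected. In the full method, each POI would then be subjected to the shape-variable-mask directivity measurement and the Bayesian decision, which are not sketched here.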
Appears in Collections: Department of Electrical Engineering

Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
