http://scholars.ntou.edu.tw/handle/123456789/17114
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yeh, Chia-Cheng | en_US |
dc.contributor.author | Chang, Yang-Lang | en_US |
dc.contributor.author | Alkhaleefah, Mohammad | en_US |
dc.contributor.author | Hsu, Pai-Hui | en_US |
dc.contributor.author | Eng, Weiyong | en_US |
dc.contributor.author | Koo, Voon-Chet | en_US |
dc.contributor.author | Huang, Bormin | en_US |
dc.contributor.author | Chang, Lena | en_US |
dc.date.accessioned | 2021-06-10T01:07:25Z | - |
dc.date.available | 2021-06-10T01:07:25Z | - |
dc.date.issued | 2021-01 | - |
dc.identifier.issn | 2072-4292 | - |
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/17114 | - |
dc.description.abstract | Due to the large data volume, UAV image stitching and matching suffer from high computational cost. The traditional feature extraction algorithms, such as Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), require heavy computation to extract and describe features in high-resolution UAV images. To overcome this issue, You Only Look Once version 3 (YOLOv3), combined with traditional feature-point matching algorithms, is utilized to extract descriptive features from a drone dataset of residential areas for roof detection. Unlike the traditional feature extraction algorithms, YOLOv3 performs feature extraction solely on the proposed candidate regions instead of the entire image, so the complexity of image matching is reduced significantly. Then, all the extracted features are fed into the Structural Similarity Index Measure (SSIM) to identify the corresponding roof region pair between consecutive image sequences. In addition, the candidate corresponding roof pair produced by the proposed architecture serves as the coarse matching region pair and limits the search range of feature matching to only the detected roof regions. This further improves feature-matching consistency and reduces the chance of incorrect matches. Analytical results show that the proposed method is 13 times faster than traditional image matching methods, with comparable performance. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | MDPI | en_US |
dc.relation.ispartof | REMOTE SENS-BASEL | en_US |
dc.subject | DEEP | en_US |
dc.title | YOLOv3-Based Matching Approach for Roof Region Detection from Drone Images | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.3390/rs13010127 | - |
dc.identifier.isi | WOS:000606124400001 | - |
dc.relation.journalvolume | 13 | en_US |
dc.relation.journalissue | 1 | en_US |
item.cerifentitytype | Publications | - |
item.openairetype | journal article | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.fulltext | no fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en_US | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Communications, Navigation and Control Engineering | - |
crisitem.author.dept | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
Appears in Collections: | Department of Communications, Navigation and Control Engineering; 11 SUSTAINABLE CITIES & COMMUNITIES |
All items in the IR are protected by copyright, with all rights reserved, unless otherwise indicated.
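
The abstract above outlines a two-stage pipeline: YOLOv3 proposes roof regions, SSIM pairs roofs across consecutive frames as a coarse match, and feature matching is then restricted to the paired roof regions. The following is a minimal sketch of that idea in Python with OpenCV and scikit-image; the model file names, image names, thresholds, input size, and the use of ORB for the fine matching stage are illustrative assumptions, not the authors' released configuration.

```python
# Hedged sketch of a roof-restricted matching pipeline (assumed parameters throughout).
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim


def detect_roofs(net, image, conf_thresh=0.5, nms_thresh=0.4):
    """Run YOLOv3 on the whole image and return candidate roof boxes as (x, y, w, h)."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    boxes, scores = [], []
    for out in outputs:
        for det in out:
            score = float(det[5:].max())  # best class confidence for this detection
            if score < conf_thresh:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [boxes[i] for i in np.array(keep).flatten()]


def pair_roofs_by_ssim(img_a, boxes_a, img_b, boxes_b, size=(128, 128)):
    """Coarse matching: pair each roof in image A with the most SSIM-similar roof in image B."""
    def patch(img, box):
        x, y, bw, bh = box
        crop = img[max(y, 0):y + bh, max(x, 0):x + bw]
        return cv2.resize(cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY), size)

    pairs = []
    for ba in boxes_a:
        pa = patch(img_a, ba)
        scores = [ssim(pa, patch(img_b, bb)) for bb in boxes_b]
        if scores:
            pairs.append((ba, boxes_b[int(np.argmax(scores))]))
    return pairs


def match_features_in_pair(img_a, box_a, img_b, box_b):
    """Fine matching: extract and match ORB features only inside the paired roof regions."""
    def crop(img, box):
        x, y, w, h = box
        return img[max(y, 0):y + h, max(x, 0):x + w]

    orb = cv2.ORB_create()
    ka, da = orb.detectAndCompute(crop(img_a, box_a), None)
    kb, db = orb.detectAndCompute(crop(img_b, box_b), None)
    if da is None or db is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(da, db), key=lambda m: m.distance)


if __name__ == "__main__":
    # Placeholder file names for a roof-trained YOLOv3 model and two consecutive drone frames.
    net = cv2.dnn.readNetFromDarknet("yolov3-roof.cfg", "yolov3-roof.weights")
    img1, img2 = cv2.imread("frame_001.jpg"), cv2.imread("frame_002.jpg")
    pairs = pair_roofs_by_ssim(img1, detect_roofs(net, img1), img2, detect_roofs(net, img2))
    for box_a, box_b in pairs:
        matches = match_features_in_pair(img1, box_a, img2, box_b)
        print(box_a, box_b, len(matches), "ORB matches")
```

Restricting the descriptor search to the SSIM-paired roof regions is what reduces both the matching cost and the chance of spurious correspondences relative to whole-image SIFT/SURF/ORB matching.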