http://scholars.ntou.edu.tw/handle/123456789/23872
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lin, Chih-Wei | en_US |
dc.contributor.author | Lin, Mengxiang | en_US |
dc.contributor.author | Hong, Yu | en_US |
dc.date.accessioned | 2023-06-20T02:43:12Z | - |
dc.date.available | 2023-06-20T02:43:12Z | - |
dc.date.issued | 2021-12 | - |
dc.identifier.issn | 1999-4907 | - |
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/23872 | - |
dc.description.abstract | Plant species, structural combination, and spatial distribution in different regions should be adapted to local conditions, and a reasonable arrangement can bring the best ecological effect. Therefore, it is essential to understand the classification and distribution of plant species. This paper proposes an end-to-end network with Enhancing Nested Downsampling features (END-Net) to solve complex and challenging plant species segmentation tasks. The proposed network contains two meaningful operations: (1) A compact and complete encoder-decoder structure is nested in the downsampling process, so that each downsampling block produces output features of the same size as its input and thereby obtains deeper plant species information. (2) The downsampling process of the encoder-decoder framework adopts a novel pixel-based enhancement module. The enhancement module adaptively enhances each pixel's features with a designed learnable variable map, which is as large as the corresponding feature map and has n×n variables; it can capture and enhance each pixel's information flexibly and effectively. In the experiments, we compared END-Net with eleven state-of-the-art semantic segmentation architectures on a self-collected dataset; it achieves the best PA (Pixel Accuracy) and FWIoU (Frequency Weighted Intersection over Union) scores, 84.52% and 74.96%, respectively. END-Net is a lightweight model with excellent performance; it is practical for complex vegetation distributions in aerial and optical images. END-Net has the following merits: (1) The proposed enhancement module utilizes the learnable variable map to enhance the features of each pixel adaptively. (2) We nest a tiny encoder-decoder module into the downsampling block to obtain in-depth plant species features with the same scale of input and output features. (3) We embed the enhancement module into the nested model to enhance and extract distinct plant species features. (4) We construct a specific plant dataset of drone-captured optical images covering sixteen plant species. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | MDPI | en_US |
dc.relation.ispartof | Forests | en_US |
dc.subject | deep learning | en_US |
dc.subject | plant species | en_US |
dc.subject | semantic segmentation | en_US |
dc.subject | features enhancing | en_US |
dc.subject | SEMANTIC SEGMENTATION | en_US |
dc.subject | CLASSIFICATION | en_US |
dc.subject | IDENTIFICATION | en_US |
dc.subject | VEGETATION | en_US |
dc.title | Aerial and Optical Images-Based Plant Species Segmentation Using Enhancing Nested Downsampling Features | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.3390/f12121695 | - |
dc.identifier.isi | WOS:000739000800001 | - |
dc.relation.journalvolume | 12 | en_US |
dc.relation.journalissue | 12 | en_US |
item.cerifentitytype | Publications | - |
item.openairetype | journal article | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.fulltext | no fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en_US | - |
crisitem.author.dept | National Taiwan Ocean University,NTOU | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Electrical Engineering | - |
crisitem.author.parentorg | National Taiwan Ocean University,NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
Appears in Collections: | Department of Electrical Engineering (電機工程學系) |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
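The pixel-based enhancement module described in the abstract, a learnable variable map the same size as the feature map that re-weights each pixel's features, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the sigmoid-gating form, the per-channel map, and all names here are hypothetical, and the paper's actual formulation may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PixelEnhanceModule:
    """Hypothetical sketch of a pixel-based enhancement module:
    a learnable variable map V, the same size as the feature map,
    adaptively re-weights the features at each pixel position."""

    def __init__(self, channels, height, width, seed=0):
        rng = np.random.default_rng(seed)
        # One learnable variable per pixel position (the n x n variable map).
        # Small random init; in training these would be updated by backprop.
        self.V = rng.standard_normal((channels, height, width)) * 0.01

    def forward(self, feat):
        # Gate each pixel's features by sigmoid(V); the exact enhancement
        # function is an assumption for illustration only.
        return feat * sigmoid(self.V)

# Toy usage: a 16-channel 8x8 feature map keeps its spatial size,
# matching the equal in/out feature sizes described for END-Net.
feat = np.ones((16, 8, 8), dtype=np.float64)
module = PixelEnhanceModule(16, 8, 8)
out = module.forward(feat)
print(out.shape)  # (16, 8, 8)
```

The key property illustrated is that the enhancement map matches the feature map's shape, so the module changes per-pixel feature magnitudes without altering spatial resolution.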