Title: | Video-based bird posture recognition using dual feature-rates deep fusion convolutional neural network | Authors: | Lin, Chih-Wei; Chen, Zhongsheng; Lin, Mengxiang |
Keywords: | Bird behavior recognition;Convolutional neural network;Behavior rate;Information fusion;BEHAVIOR;VISION;COLOR;FORM | Date of Issue: | August 2022 | Publisher: | ELSEVIER | Volume: | 141 | Source Publication: | Ecological Indicators | Abstract: | Changes in birds' behaviors must be detected promptly to assess their health and habitat status, and to provide appropriate medical treatment and environmental remediation in time. Automated bird behavior recognition can address this problem and assist in breeding and protecting birds. This paper proposes a transposed non-local (TNL) module based on a time pyramid network to establish a dual feature-rates deep fusion net (DF2-Net) for bird behavior recognition. The time pyramid network uses spatial alignment and time pooling operations to extract features with different rates from features of different depths and then fuses the features containing this information. On this basis, the TNL module uses features with different rates to compute a relationship matrix over each time slice and spatial position. TNL then applies a transpose operation so that, when the original feature is multiplied by this matrix, it benefits from the corresponding relationship. The module further integrates the relational information of behaviors at different rates to enhance the corresponding features, which improves the model's recognition of dynamic behaviors. Our study makes three contributions: (1) the TNL module takes features with different rates as inputs and uses a transpose mechanism to obtain two relationships in different directions, each matching its corresponding input; (2) DF2-Net transforms a single-rate feature into two-rate features and iteratively fuses them to obtain information and behavior relationships at various rates; (3) we collect a unique video dataset of birds' behaviors to fill the gap in available video datasets and support the study of bird behavior. The experiments compare DF2-Net with well-known video-based recognition models on the self-collected bird behavior dataset, which contains eight behaviors. DF2-Net achieves the best classification accuracy, reaching 80.87%, 81.35%, 80.70%, and 81.35% in precision, recall, F1-score, and overall accuracy (OA), which are 1.81%, 2.43%, 2.20%, and 2.43% higher, respectively, than the second-best approach (TPN) with 8 frames. The experimental results show that DF2-Net outperforms state-of-the-art methods at various frame counts, with 16 frames being the most suitable for bird behavior recognition. Moreover, we conduct various ablation experiments to demonstrate the efficiency of the TNL module, its optimal location and internal operations, and the most suitable parameters of DF2-Net. The ablation experiments demonstrate that the TNL module markedly improves the recognition accuracy of dynamic behaviors, confirming its validity and rationality. Therefore, the proposed model is practical and feasible for automatically recognizing bird behavior. |
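The abstract describes the TNL module as a non-local style block that builds a relationship matrix across time slices and spatial positions of features at two rates, then applies the transposed matrix back to the original feature. The sketch below is a minimal, illustrative PyTorch reconstruction under that reading; the class name, the 1x1x1 projections, and the exact placement of the transpose are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumption: not the paper's code) of a non-local style
# block that computes a time-space relationship matrix between a slow-rate and
# a fast-rate feature and applies its transpose to the original feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposedNonLocalSketch(nn.Module):
    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter = inter_channels or max(channels // 2, 1)
        # 1x1x1 projections for query/key/value, as in a standard non-local block
        self.theta = nn.Conv3d(channels, inter, kernel_size=1)
        self.phi = nn.Conv3d(channels, inter, kernel_size=1)
        self.g = nn.Conv3d(channels, inter, kernel_size=1)
        self.out = nn.Conv3d(inter, channels, kernel_size=1)

    def forward(self, x_a, x_b):
        # x_a, x_b: (B, C, T, H, W) features at two rates, assumed already
        # aligned to the same temporal and spatial size.
        b, c, t, h, w = x_a.shape
        n = t * h * w
        q = self.theta(x_a).reshape(b, -1, n)        # (B, C', N)
        k = self.phi(x_b).reshape(b, -1, n)          # (B, C', N)
        v = self.g(x_a).reshape(b, -1, n)            # (B, C', N)
        # Relationship matrix over all time-space positions
        rel = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, N, N)
        # Apply the transposed relationship matrix to the original-feature
        # values (the "transpose mechanism" alluded to in the abstract; the
        # exact formulation is an assumption).
        y = torch.bmm(v, rel.transpose(1, 2))        # (B, C', N)
        y = y.reshape(b, -1, t, h, w)
        return x_a + self.out(y)                     # residual connection

if __name__ == "__main__":
    block = TransposedNonLocalSketch(channels=32)
    slow = torch.randn(2, 32, 4, 7, 7)   # e.g. pooled slow-rate feature
    fast = torch.randn(2, 32, 4, 7, 7)   # fast-rate feature aligned to slow
    print(block(slow, fast).shape)       # torch.Size([2, 32, 4, 7, 7])
```

Swapping the order of the transpose (attending along the other direction of the relation) would yield the second of the two directional relationships mentioned in contribution (1).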
URI: | http://scholars.ntou.edu.tw/handle/123456789/23867 | ISSN: | 1470-160X | DOI: | 10.1016/j.ecolind.2022.109141 |
Appears in Collections: | Department of Electrical Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.