| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Li, Dong-Lin | en_US |
| dc.contributor.author | Lee, Shih-Kai | en_US |
| dc.contributor.author | Tsai, Yu-Chieh | en_US |
| dc.date.accessioned | 2026-03-12T03:20:52Z | - |
| dc.date.available | 2026-03-12T03:20:52Z | - |
| dc.date.issued | 2026-01-01 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/26301 | - |
| dc.description.abstract | Modern life is generally stressful, leading many individuals to seek emotional support through pet companionship. Dogs remain one of the most popular choices due to their social nature. However, while dogs express their internal states through various behavioral and morphological cues, humans often struggle to accurately and objectively interpret these species-specific signals. Therefore, developing effective tools to decode canine affective states can significantly enhance the bond between humans and their pets. This paper proposes an automated system based on deep learning for the recognition of dog muzzle expressions. The system aims to provide an objective assessment of canine emotional states, thereby fostering a deeper understanding and strengthening the connection between owners and their dogs. The study utilizes a dataset of Shetland Sheepdog images categorized into five affective states. The recognition pipeline first employs the YOLOv8 architecture to detect key anatomical regions, specifically the ears, eyes, and muzzle. These localized features are then processed to classify associated emotional cues. A confidence-weighted strategy is implemented to integrate the scores from multiple detected regions, resulting in a final decision for each target image. To enhance the model's robustness, the experiment incorporates data augmentation and transfer learning techniques. The proposed method achieves a high classification accuracy of 90% on the test set. By focusing on localized feature fusion rather than global image analysis, the system demonstrates significant potential for more granular and reliable emotion recognition in domestic dogs. | en_US |
| dc.language.iso | English | en_US |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | en_US |
| dc.relation.ispartof | IEEE ACCESS | en_US |
| dc.subject | Dogs | en_US |
| dc.subject | Accuracy | en_US |
| dc.subject | Face recognition | en_US |
| dc.subject | Emotion recognition | en_US |
| dc.subject | Ear | en_US |
| dc.subject | Deep learning | en_US |
| dc.subject | Data augmentation | en_US |
| dc.subject | Anxiety disorders | en_US |
| dc.subject | Training | en_US |
| dc.subject | Nose | en_US |
| dc.subject | Animal emotion recognition | en_US |
| dc.subject | canine expression | en_US |
| dc.subject | confidence weighting | en_US |
| dc.subject | deep learning | en_US |
| dc.subject | dog face recognition | en_US |
| dc.subject | objec | en_US |
| dc.title | Deep Learning-Based Dog Expression Recognition | en_US |
| dc.type | journal article | en_US |
| dc.identifier.doi | 10.1109/ACCESS.2025.3650550 | - |
| dc.identifier.isi | WOS:001663376800048 | - |
| dc.relation.journalvolume | 14 | en_US |
| dc.relation.pages | 9 | en_US |
| item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
| item.cerifentitytype | Publications | - |
| item.grantfulltext | none | - |
| item.fulltext | no fulltext | - |
| item.languageiso639-1 | English | - |
| item.openairetype | journal article | - |
| crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
| crisitem.author.dept | Department of Electrical Engineering | - |
| crisitem.author.dept | National Taiwan Ocean University,NTOU | - |
| crisitem.author.orcid | 0000-0003-2618-7718 | - |
| crisitem.author.parentorg | National Taiwan Ocean University,NTOU | - |
| crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
| Appears in Collections: | Department of Electrical Engineering | |
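
The abstract above describes a confidence-weighted strategy for integrating scores from multiple detected regions. Below is a minimal sketch of that idea, assuming a detector (such as YOLOv8) supplies a box confidence for each region (ears, eyes, muzzle) and a per-region classifier outputs probabilities over five affective states; the region names, emotion labels, and weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of confidence-weighted fusion of per-region emotion scores.
# Labels and regions below are illustrative assumptions only.
import numpy as np

EMOTIONS = ["relaxed", "happy", "alert", "fearful", "aggressive"]  # assumed labels

def fuse_region_scores(regions):
    """Combine per-region class probabilities into one image-level decision.

    regions: list of (confidence, probs) pairs, where `confidence` is the
    detector's box confidence for that region and `probs` is a length-5
    vector of emotion probabilities from the region classifier.
    """
    if not regions:
        raise ValueError("no regions detected")
    total = np.zeros(len(EMOTIONS))
    weight_sum = 0.0
    for confidence, probs in regions:
        total += confidence * np.asarray(probs, dtype=float)
        weight_sum += confidence
    fused = total / weight_sum  # confidence-weighted average of class scores
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: a clearly detected muzzle dominates the final decision.
regions = [
    (0.92, [0.05, 0.70, 0.10, 0.10, 0.05]),  # muzzle
    (0.60, [0.20, 0.40, 0.20, 0.10, 0.10]),  # left ear
    (0.45, [0.25, 0.30, 0.25, 0.10, 0.10]),  # right eye
]
label, scores = fuse_region_scores(regions)
print(label, scores.round(3))
```

Weighting by detector confidence lets a poorly localized region contribute less to the final decision than a clearly detected one, which matches the localized feature fusion rationale described in the abstract.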