National Taiwan Ocean University Research Hub › College of Electrical Engineering and Computer Science › Department of Electrical Engineering
Please use this Handle URI to cite this item: http://scholars.ntou.edu.tw/handle/123456789/23101
DC Field | Value | Language
dc.contributor.author | Yen, Chih-Ta | en_US
dc.contributor.author | Li, Kang-Hua | en_US
dc.date.accessioned | 2022-11-15T00:41:10Z | -
dc.date.available | 2022-11-15T00:41:10Z | -
dc.date.issued | 2022-01-01 | -
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/23101 | -
dc.description.abstract | In recent years, facial emotion recognition (FER) has been a popular topic in affective computing. However, automatic FER still faces many challenges, including quality control of sample data, extraction of effective features, model design, and multi-feature fusion; these problems have not been thoroughly researched and remain active topics in computer vision. Given the mature development of deep learning, deep learning methods are increasingly being used in FER. However, because deep learning requires a large amount of data for effective training, many studies have employed transfer learning to compensate for this drawback. Nevertheless, there is no universal approach to transfer learning in FER. Accordingly, this study used five classic FER models (ResNet-50, Xception, EfficientNet-B0, Inception, and DenseNet-121) to conduct a series of experiments on data preprocessing, training type, and the applicability of multi-stage pretraining. According to the results, class weighting was the optimal technique for data balancing. In addition, the freeze + fine-tuning training type produced higher accuracy regardless of dataset size, and multi-stage training was also effective. Compared with model accuracy in previous studies, the accuracy achieved with the proposed transfer learning method was superior for both large and small datasets. Specifically, on AffectNet, the accuracy of the ResNet-50, Xception, EfficientNet-B0, Inception, and DenseNet-121 models increased by 8.37%, 10.45%, 10.45%, 8.55%, and 5.47%, respectively; on FER2013, the accuracy of these models increased by 5.72%, 2%, 10.45%, 5%, and 9%, respectively. These results demonstrate the validity and advantages of the proposed approach. | en_US
dc.language.iso | English | en_US
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | en_US
dc.relation.ispartof | IEEE ACCESS | en_US
dc.subject | Transfer learning | en_US
dc.subject | convolutional neural network | en_US
dc.subject | facial emotion recognition (FER) | en_US
dc.subject | affective computing | en_US
dc.subject | pretrained models | en_US
dc.subject | fine-tuning | en_US
dc.subject | FER2013 | en_US
dc.subject | AffectNet | en_US
dc.title | Discussions of Different Deep Transfer Learning Models for Emotion Recognitions | en_US
dc.type | journal article | en_US
dc.identifier.doi | 10.1109/ACCESS.2022.3209813 | -
dc.identifier.isi | WOS:000864340300001 | -
dc.relation.journalvolume | 10 | en_US
dc.relation.pages | 102860-102875 | en_US
item.languageiso639-1 | English | -
item.openairetype | journal article | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.grantfulltext | none | -
item.cerifentitytype | Publications | -
item.fulltext | no fulltext | -
crisitem.author.dept | National Taiwan Ocean University, NTOU | -
crisitem.author.dept | Department of Electrical Engineering | -
crisitem.author.dept | College of Electrical Engineering and Computer Science | -
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | -
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | -
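The abstract reports that class weighting was the best-performing data-balancing technique for the FER datasets. A minimal sketch of the standard "balanced" inverse-frequency weighting (w_c = n_samples / (n_classes × count_c), as implemented in e.g. scikit-learn's `compute_class_weight`) is shown below; the emotion label distribution is hypothetical, not taken from the paper:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency ('balanced') class weights:
    w_c = n_samples / (n_classes * count_c), so rarer
    classes receive proportionally larger weights."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * counts[c]) for c in counts}

# Hypothetical FER-style imbalance: "happy" over-represented, "fear" rare.
labels = ["happy"] * 60 + ["sad"] * 30 + ["fear"] * 10
weights = balanced_class_weights(labels)
# The rare class gets the largest weight: fear > sad > happy.
```

Such a weight dictionary is typically passed to the training loss (e.g. the `class_weight` argument of Keras `Model.fit`) so that minority emotion classes contribute more to the gradient, which is the balancing strategy the study found optimal.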
Appears in Collections: Department of Electrical Engineering

All items in the IR are protected by copyright, with all rights reserved, unless otherwise indicated.
