Please use this identifier to cite or link to this item: http://scholars.ntou.edu.tw/handle/123456789/25237
Title: Evaluating Feature Fusion Techniques with Deep Learning Models for Coronavirus Disease 2019 Chest X-ray Sensor Image Identification
Authors: Yen, Chih-Ta; Liao, Jia-Xian; Huang, Yi-Kai
Keywords: COVID-19;convolutional neural network;deep learning;chest X-ray (CXR);contrast-limited adaptive histogram equalization (CLAHE);feature fusion
Issue Date: 2024
Publisher: MYU, SCIENTIFIC PUBLISHING DIVISION
Journal Volume: 36
Journal Issue: 2
Start page/Pages: 683-699
Source: SENSORS AND MATERIALS
Abstract: 
Current diagnostic methods for coronavirus disease 2019 (COVID-19) mainly rely on reverse transcription polymerase chain reaction (RT-PCR). However, RT-PCR is costly and time-consuming, so an accurate, rapid, and inexpensive screening method is needed for COVID-19 diagnosis. In this study, we combined image processing technologies with deep learning algorithms to improve the accuracy of COVID-19 identification from chest X-ray (CXR) sensor images. Contrast-limited adaptive histogram equalization (CLAHE) was used to improve the visibility of unclear images. We then examined whether our feature fusion technique effectively improves the performance of seven deep learning models (MobileNetV2, ResNet50, ResNet152V2, Inception-ResNet-v2, DenseNet121, DenseNet201, and Xception). The proposed technique merges the features of an original image with those of its CLAHE-enhanced counterpart and uses the merged features to retrain, test, and validate the deep learning models for identifying COVID-19 in CXR images. To avoid generating images that do not match reality and to ensure high model stability, no data augmentation was performed. The results indicate that the proposed feature fusion technique improves the classification evaluation indicators, especially sensitivity, of the deep learning models in both two-class and three-class classification. Sensitivity refers to a model's ability to detect an infection correctly. The highest accuracy was achieved by combining Xception with the proposed feature fusion technique: 99.74% in three-class classification (99.19% average accuracy under fivefold cross-validation) and 99.74% in two-class classification (99.50% average accuracy under fivefold cross-validation). These results show that the proposed combination of image processing technologies and deep learning algorithms generalizes exceptionally well.
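
The abstract describes a concrete pipeline: CLAHE enhancement, feature extraction from the original and the enhanced image, concatenation of the two feature vectors, and classification. The Python code below is a minimal, hypothetical sketch of that pipeline, assuming OpenCV for CLAHE and a Keras/TensorFlow Xception backbone; the CLAHE parameters, the shared backbone, the classifier head, and the class labels are illustrative assumptions, not details taken from the paper.

import cv2
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input

def clahe_enhance(gray_u8: np.ndarray) -> np.ndarray:
    # Contrast-limited adaptive histogram equalization on an 8-bit grayscale CXR.
    # The clip limit and tile size below are assumptions, not values from the paper.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_u8)

def prepare(gray_u8: np.ndarray, size=(299, 299)) -> np.ndarray:
    # Resize and replicate the grayscale image to the 3-channel input Xception expects.
    rgb = np.stack([cv2.resize(gray_u8, size)] * 3, axis=-1).astype("float32")
    return preprocess_input(rgb)

def build_fusion_model(num_classes: int = 3) -> Model:
    # One shared Xception backbone extracts pooled features from both inputs;
    # the two feature vectors are concatenated (the feature-fusion step) and classified.
    backbone = Xception(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(299, 299, 3))
    in_orig = layers.Input(shape=(299, 299, 3), name="original_image")
    in_clahe = layers.Input(shape=(299, 299, 3), name="clahe_image")
    fused = layers.Concatenate(name="feature_fusion")(
        [backbone(in_orig), backbone(in_clahe)])
    out = layers.Dense(num_classes, activation="softmax", name="classifier")(fused)
    return Model(inputs=[in_orig, in_clahe], outputs=out)

# Three-class setting; the class labels (e.g., COVID-19 / other pneumonia / normal) are assumed.
model = build_fusion_model(num_classes=3)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Training would pass paired inputs, i.e. prepare(x) and prepare(clahe_enhance(x)) for each CXR x,
# evaluated with fivefold cross-validation as in the abstract.

Here a single shared backbone serves both inputs; the published method may instead fine-tune separate branches or fuse features at a different depth.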
URI: http://scholars.ntou.edu.tw/handle/123456789/25237
ISSN: 0914-4935
DOI: 10.18494/SAM4685
Appears in Collections: Department of Electrical Engineering (電機工程學系)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
