DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chung, Yao-Liang | en_US |
dc.contributor.author | Lin, Chuan-Kai | en_US |
dc.date.accessioned | 2022-10-04T06:12:38Z | - |
dc.date.available | 2022-10-04T06:12:38Z | - |
dc.date.issued | 2020-11-01 | - |
dc.identifier.uri | http://scholars.ntou.edu.tw/handle/123456789/22394 | - |
dc.description.abstract | This study proposed a model for highway accident detection that combines the You Only Look Once v3 (YOLOv3) object detection algorithm and the Canny edge detection algorithm. It not only detects whether an accident has occurred in front of a vehicle but also performs a preliminary classification of the accident to determine its severity. First, this study established a dataset of approximately 4500 images, taken mainly from the viewpoint of dashcams, collected from an open-source online platform. The dataset, named the Highway Dashcam Car Accident for Classification System (HDCA-CS), was developed to match the setting of this study. The HDCA-CS covers not only adverse weather conditions (rain, fog, nighttime, and other low-visibility settings) but also various types of accidents, increasing the diversity of the dataset. In addition, we defined two accident types (accidents involving damaged cars and accidents involving overturned cars) and developed and compared three design methods for the damaged-car accident type. Single high-resolution accident images processed with the Canny edge detection algorithm were also added to compensate for the low volume of accident data, thereby mitigating the class imbalance in the training data. The results showed that the proposed model achieved a mean average precision (mAP) of 62.60% on the HDCA-CS testing dataset. To compare the proposed model with a benchmark model, the two abovementioned accident types were combined so that the proposed model produced binary classification outputs (i.e., non-occurrence and occurrence of an accident); the HDCA-CS was then applied to both models, and testing was conducted using single high-resolution images. The proposed model achieved an mAP of 76.42%, outperforming the benchmark model's 75.18%, and when tested only on scenarios in which an accident had occurred, its advantage over the benchmark was even larger. These findings demonstrate that the proposed model outperforms existing models. | en_US |
dc.language.iso | English | en_US |
dc.publisher | MDPI | en_US |
dc.relation.ispartof | SYMMETRY-BASEL | en_US |
dc.subject | YOLOv3 | en_US |
dc.subject | Canny | en_US |
dc.subject | object detection | en_US |
dc.subject | accident detection | en_US |
dc.subject | artificial intelligence | en_US |
dc.title | Application of a Model that Combines the YOLOv3 Object Detection Algorithm and Canny Edge Detection Algorithm to Detect Highway Accidents | en_US |
dc.type | journal article | en_US |
dc.identifier.doi | 10.3390/sym12111875 | - |
dc.identifier.isi | WOS:000594305600001 | - |
dc.relation.journalvolume | 12 | en_US |
dc.relation.journalissue | 11 | en_US |
dc.identifier.eissn | 2073-8994 | - |
item.cerifentitytype | Publications | - |
item.openairetype | journal article | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.fulltext | no fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | English | - |
crisitem.author.dept | College of Electrical Engineering and Computer Science | - |
crisitem.author.dept | Department of Communications, Navigation and Control Engineering | - |
crisitem.author.dept | National Taiwan Ocean University, NTOU | - |
crisitem.author.orcid | 0000-0001-6512-1127 | - |
crisitem.author.parentorg | National Taiwan Ocean University, NTOU | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
Appears in Collections: | Department of Communications, Navigation and Control Engineering
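The abstract above notes that single high-resolution accident images processed with the Canny edge detection algorithm were added to offset the shortage of accident data. The paper's code is not published with this record; the following is a minimal sketch of that augmentation step, assuming OpenCV, with the directory names and thresholds chosen purely for illustration rather than taken from the paper:

```python
# Minimal sketch of the Canny-based augmentation step described in the
# abstract, assuming OpenCV. Paths and thresholds are illustrative
# assumptions, not values from the paper.
import os

import cv2


def augment_with_canny(src_dir: str, dst_dir: str,
                       low_thresh: int = 100, high_thresh: int = 200) -> None:
    """Write a Canny edge map alongside each accident image to enlarge
    the under-represented accident class before detector training."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        # Canny operates on single-channel images, so load as grayscale.
        img = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
        if img is None:  # skip files that are not readable images
            continue
        edges = cv2.Canny(img, low_thresh, high_thresh)
        cv2.imwrite(os.path.join(dst_dir, f"canny_{name}"), edges)


# Hypothetical usage on an accident subset of a dataset such as HDCA-CS:
# augment_with_canny("hdca_cs/accidents", "hdca_cs/accidents_canny")
```

The edge maps produced this way would be added to the training pool alongside the original photographs; how the paper labels or mixes them with the raw images is not specified in this record.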