

dc.contributor.author: Gürkan, Mustafa Kağan
dc.contributor.author: Arıca, Nafiz
dc.contributor.author: Yarman Vural, Fatoş T.
dc.date.accessioned: 2025-03-18T09:02:48Z
dc.date.available: 2025-03-18T09:02:48Z
dc.date.issued: 2025
dc.identifier.citation: Gurkan, M. K., Arica, N., & Yarman Vural, F. T. (2025). A concept-aware explainability method for convolutional neural networks. Machine Vision and Applications, 36(2), 1-17.
dc.identifier.issn: 0932-8092
dc.identifier.uri: https://hdl.handle.net/20.500.12960/1724
dc.description.abstract: Although Convolutional Neural Networks (CNN) outperform classical models in a wide range of Machine Vision applications, their restricted interpretability and lack of comprehensible reasoning raise problems of security, reliability, and safety. Consequently, there is a growing need for research to improve explainability and address these limitations. In this paper, we propose a concept-based method, called Concept-Aware Explainability (CAE), to provide a verbal explanation for the predictions of pre-trained CNN models. A new measure, called the detection score mean, is introduced to quantify the relationship between the filters of the model and a set of pre-defined concepts. Based on the detection score mean values, we define sorted lists of Concept-Aware Filters (CAF) and Filter-Activating Concepts (FAC). These lists are used to generate explainability reports, with which we can explain, analyze, and compare models in terms of the concepts embedded in the image. The proposed explainability method is compared to state-of-the-art methods in explaining ResNet18 and VGG16 models pre-trained on the ImageNet and Places365-Standard datasets. Two popular metrics, namely the number of unique detectors and the number of detecting filters, are used for a quantitative comparison. Superior performance is observed for the suggested CAE when compared to Network Dissection (NetDis) (Bau et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017), Net2Vec (Fong and Vedaldi, in: Paper presented at IEEE conference on computer vision and pattern recognition (CVPR), 2018), and CLIP-Dissect (CLIP-Dis) (Oikarinen and Weng, in: The 11th international conference on learning representations (ICLR), 2023).
dc.language.iso: eng
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.relation.ispartof: Machine Vision and Applications
dc.relation.isversionof: 10.1007/s00138-024-01653-w
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Concept-based explanation
dc.subject: Convolutional neural networks
dc.subject: Filter-concept association
dc.subject: Model comparison via explanations
dc.title: A concept-aware explainability method for convolutional neural networks
dc.type: article
dc.authorid: 0000-0002-3810-5866
dc.department: Mühendislik Fakültesi, Bilişim Sistemleri Mühendisliği
dc.contributor.institutionauthor: Arıca, Nafiz
dc.identifier.volume: 36
dc.identifier.issue: 2
dc.identifier.startpage: 1
dc.identifier.endpage: 17
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
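
Note on the method described in the abstract: the record does not define the detection score mean or how the CAF and FAC lists are derived; those details are in the cited article. Purely as an illustration of the general idea, the Python sketch below scores each (filter, concept) pair with an IoU between thresholded filter activation maps and binary concept masks (a thresholding scheme in the spirit of Network Dissection, one of the baselines named above), averages that score over an image set, and ranks filters per concept (a stand-in for CAF) and concepts per filter (a stand-in for FAC). All names (detection_scores, concept_aware_filters, filter_activating_concepts, threshold_quantile) and the exact scoring formula are assumptions for illustration, not the authors' implementation.

import numpy as np

def detection_scores(activations, concept_masks, threshold_quantile=0.99):
    """Illustrative (filter, concept) scoring.

    activations:   (n_images, n_filters, H, W) filter activation maps,
                   upsampled to the concept-mask resolution.
    concept_masks: (n_images, n_concepts, H, W) binary concept masks.
    Returns an (n_filters, n_concepts) matrix of IoU-style scores averaged
    over the image set (a stand-in for the paper's detection score mean).
    """
    n_filters = activations.shape[1]
    n_concepts = concept_masks.shape[1]
    scores = np.zeros((n_filters, n_concepts))

    for f in range(n_filters):
        maps = activations[:, f]                      # (n_images, H, W)
        # Binarize this filter's maps at a high per-filter quantile threshold.
        thr = np.quantile(maps, threshold_quantile)
        binary = maps > thr
        for c in range(n_concepts):
            masks = concept_masks[:, c].astype(bool)  # (n_images, H, W)
            inter = np.logical_and(binary, masks).sum(axis=(1, 2))
            union = np.logical_or(binary, masks).sum(axis=(1, 2))
            iou = np.where(union > 0, inter / np.maximum(union, 1), 0.0)
            scores[f, c] = iou.mean()                 # mean over the image set
    return scores

def concept_aware_filters(scores, concept_idx, top_k=5):
    """Filters ranked by score for one concept (CAF-like list)."""
    return np.argsort(scores[:, concept_idx])[::-1][:top_k]

def filter_activating_concepts(scores, filter_idx, top_k=5):
    """Concepts ranked by score for one filter (FAC-like list)."""
    return np.argsort(scores[filter_idx, :])[::-1][:top_k]

For example, concept_aware_filters(scores, concept_idx=3, top_k=5) would return the five filters with the highest mean score for concept 3, the kind of sorted list from which an explainability report could be assembled.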

