Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

dc.authorid: 0000-0002-6214-6262
dc.authorid: 0000-0002-2434-9966
dc.authorid: 0000-0003-0298-0690
dc.contributor.author: Tuna, Ömer Faruk
dc.contributor.author: Çatak, Ferhat Özgür
dc.contributor.author: Eskil, Mustafa Taner
dc.date.accessioned: 2025-10-06T07:37:55Z
dc.date.available: 2025-10-06T07:37:55Z
dc.date.issued: 2021-02-13
dc.department: Işık Üniversitesi, Mühendislik ve Doğa Bilimleri Fakültesi, Bilgisayar Mühendisliği Bölümü
dc.department: Işık University, Faculty of Engineering and Natural Sciences, Department of Computer Engineering
dc.description.abstract: Deep neural network architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, almost all research so far has concentrated on using the model's loss function to craft adversarial examples or to build robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attacks, perturbing the input toward regions of the input space that the model has not seen before. We propose new attack ideas based on the epistemic uncertainty of the model. Our results show that the proposed hybrid attack approach increases attack success rates from 82.59% to 85.40%, from 82.86% to 89.92%, and from 88.06% to 90.03% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
dc.description.version: Preprint's Version
dc.identifier.citation: Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2021). Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. arXiv, 1-18. doi: https://doi.org/10.48550/arXiv.2102.04150
dc.identifier.endpage: 18
dc.identifier.startpage: 1
dc.identifier.uri: https://hdl.handle.net/11729/6741
dc.identifier.uri: https://doi.org/10.48550/arXiv.2102.04150
dc.identifier.wos: PPRN:10686724
dc.identifier.wosquality: N/A
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Preprint Citation Index
dc.institutionauthor: Tuna, Ömer Faruk
dc.institutionauthor: Eskil, Mustafa Taner
dc.institutionauthorid: 0000-0002-6214-6262
dc.institutionauthorid: 0000-0003-0298-0690
dc.language.iso: en
dc.publisher: Cornell Univ
dc.relation.ispartof: arXiv
dc.relation.publicationcategory: Preprint - International - Institutional Academic Staff
dc.rights: info:eu-repo/semantics/openAccess
dc.title: Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
dc.type: Preprint
dspace.entity.type: Publication
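The abstract describes quantifying epistemic uncertainty via Monte-Carlo Dropout sampling and perturbing inputs toward high-uncertainty regions. Below is a minimal NumPy sketch of that idea, not the paper's implementation: the toy one-layer model, the dropout rate, and the gradient-free finite-difference ascent step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, p_drop=0.5, T=50):
    """Run T stochastic forward passes with dropout kept active at
    inference time (Monte-Carlo Dropout). Returns the predictive mean
    and variance; the variance serves as an epistemic-uncertainty proxy."""
    outs = []
    for _ in range(T):
        # Bernoulli dropout mask over the hidden units, with inverted scaling.
        mask = (rng.random(W.shape[1]) > p_drop).astype(float)
        h = np.maximum(x @ W, 0.0) * mask / (1.0 - p_drop)  # ReLU + dropout
        outs.append(h.sum(axis=1))  # toy scalar score per input sample
    outs = np.stack(outs)           # shape (T, batch)
    return outs.mean(axis=0), outs.var(axis=0)

def uncertainty_ascent_step(x, W, eps=0.1, delta=1e-3):
    """One hypothetical attack step: estimate the sign of the variance
    gradient by finite differences and nudge each input coordinate toward
    higher epistemic uncertainty (sign ascent, step size eps). The MC
    estimates are noisy, so real implementations would average more passes."""
    step = np.zeros_like(x)
    _, base_var = mc_dropout_predict(x, W)
    for j in range(x.shape[1]):
        x_pert = x.copy()
        x_pert[:, j] += delta
        _, var_j = mc_dropout_predict(x_pert, W)
        step[:, j] = np.sign(var_j - base_var)
    return x + eps * step

# Usage on random toy data.
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
mean, var = mc_dropout_predict(x, W)   # var: per-sample uncertainty proxy
x_adv = uncertainty_ascent_step(x, W)  # perturbed toward higher uncertainty
```

The hybrid attack the abstract reports would combine such an uncertainty signal with a conventional loss-based perturbation; this sketch only shows the uncertainty half.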

Files

Original bundle
Listing 1 - 1 of 1
Name: Exploiting_epistemic_uncertainty_of_the_deep_learning_models_to_generate_adversarial_samples.pdf
Size: 1.85 MB
Format: Adobe Portable Document Format

License bundle
Listing 1 - 1 of 1
Name: license.txt
Size: 1.17 KB
Description: Item-specific license agreed upon to submission