Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
| dc.authorid | 0000-0002-6214-6262 | |
| dc.authorid | 0000-0002-2434-9966 | |
| dc.authorid | 0000-0003-0298-0690 | |
| dc.contributor.author | Tuna, Ömer Faruk | en_US |
| dc.contributor.author | Çatak, Ferhat Özgür | en_US |
| dc.contributor.author | Eskil, Mustafa Taner | en_US |
| dc.date.accessioned | 2025-10-06T07:37:55Z | |
| dc.date.available | 2025-10-06T07:37:55Z | |
| dc.date.issued | 2021-02-13 | |
| dc.department | Işık University, Faculty of Engineering and Natural Sciences, Department of Computer Engineering | en_US |
| dc.description.abstract | Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, almost all research so far has concentrated on utilising the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes, whereby we perturb the input toward regions the model has not seen before. We propose new attack ideas based on the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.40%, from 82.86% to 89.92% and from 88.06% to 90.03% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively. | en_US |
| dc.description.version | Preprint's Version | en_US |
| dc.identifier.citation | Tuna, Ö. F., Çatak, F. Ö. & Eskil, M. T. (2021). Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. arXiv, 1-18. doi: https://doi.org/10.48550/arXiv.2102.04150 | en_US |
| dc.identifier.endpage | 18 | |
| dc.identifier.startpage | 1 | |
| dc.identifier.uri | https://hdl.handle.net/11729/6741 | |
| dc.identifier.uri | https://doi.org/10.48550/arXiv.2102.04150 | |
| dc.identifier.wos | PPRN:10686724 | |
| dc.identifier.wosquality | N/A | |
| dc.indekslendigikaynak | Web of Science | en_US |
| dc.indekslendigikaynak | Preprint Citation Index | en_US |
| dc.institutionauthor | Tuna, Ömer Faruk | en_US |
| dc.institutionauthor | Eskil, Mustafa Taner | en_US |
| dc.institutionauthorid | 0000-0002-6214-6262 | |
| dc.institutionauthorid | 0000-0003-0298-0690 | |
| dc.language.iso | en | en_US |
| dc.publisher | Cornell Univ | en_US |
| dc.relation.ispartof | Arxiv | en_US |
| dc.relation.publicationcategory | Preprint - International - Institutional Faculty Member | en_US |
| dc.rights | info:eu-repo/semantics/openAccess | en_US |
| dc.title | Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples | en_US |
| dc.type | Preprint | en_US |
| dspace.entity.type | Publication | en_US |
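
The abstract above outlines the core mechanism: run multiple stochastic forward passes with dropout left active at inference time (Monte-Carlo Dropout), treat the spread of the resulting predictions as epistemic uncertainty, and perturb the input in the direction that increases that uncertainty. Below is a minimal PyTorch sketch of this idea, assuming a classifier with dropout layers and inputs scaled to [0, 1]; the helper names (`enable_mc_dropout`, `uncertainty_ascent_step`) and the variance-of-softmax uncertainty score are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def enable_mc_dropout(model: torch.nn.Module) -> None:
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def epistemic_uncertainty(model: torch.nn.Module, x: torch.Tensor,
                          n_samples: int = 50) -> torch.Tensor:
    """Scalar uncertainty score: variance of the softmax outputs over
    n_samples stochastic forward passes, summed over classes."""
    preds = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return preds.var(dim=0).sum()

def uncertainty_ascent_step(model: torch.nn.Module, x: torch.Tensor,
                            eps: float = 0.3, n_samples: int = 50) -> torch.Tensor:
    """One FGSM-style step that pushes x toward higher epistemic
    uncertainty instead of higher loss (illustrative only)."""
    enable_mc_dropout(model)
    x_adv = x.clone().detach().requires_grad_(True)
    epistemic_uncertainty(model, x_adv, n_samples).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

The hybrid attack whose success rates are reported in the abstract presumably combines a step of this kind with a conventional loss-gradient step; the exact uncertainty metric and the way the two signals are combined are described in the paper itself.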
Files
Original bundle
- Name:
- Exploiting_epistemic_uncertainty_of_the_deep_learning_models_to_generate_adversarial_samples.pdf
- Size:
- 1.85 MB
- Format:
- Adobe Portable Document Format
License bundle
- Name:
- license.txt
- Size:
- 1.17 KB
- Format:
- Item-specific license agreed upon at submission
- Description: