Using uncertainty metrics in adversarial machine learning as an attack and defense tool
dc.authorid | 0000-0002-6214-6262 | |
dc.contributor.advisor | Eskil, Mustafa Taner | en_US |
dc.contributor.author | Tuna, Ömer Faruk | en_US |
dc.contributor.other | Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Doktora Programı | en_US |
dc.date.accessioned | 2023-05-12T13:10:43Z | |
dc.date.available | 2023-05-12T13:10:43Z | |
dc.date.issued | 2022-12-19 | |
dc.department | Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Doktora Programı | en_US |
dc.description | Text in English ; Abstract: English and Turkish | en_US |
dc.description | Includes bibliographical references (leaves 94-102) | en_US |
dc.description | xv, 102 leaves | en_US |
dc.description.abstract | Deep Neural Network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, known as adversarial samples. Adversarial attacks can substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures that resist adversarial threats. In this thesis study, we leverage various uncertainty metrics obtained from MC-Dropout estimates of the model to develop new attack and defense ideas. On the defense side, we propose a new adversarial detection mechanism and an uncertainty-based defense method to increase the robustness of DNN models against adversarial evasion attacks. On the attack side, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate effective adversarial samples. We experimentally evaluated and verified the efficacy of our proposed approaches on standard computer vision datasets. | en_US |
dc.description.abstract | Derin Sinir Ağları modelleri, yaygın olarak rastgele bozulmalara karşı dirençleri ile bilinir. Bununla birlikte, araştırmacılar, bu modellerin, karşıt (hasmane) örnekler olarak adlandırılan girdinin kasıtlı olarak hazırlanmış ve görünüşte algılanamaz bozulmalarına karşı gerçekten son derece savunmasız olduğunu keşfettiler. Bu gibi hasmane saldırılar, Derin Sinir Ağları tabanlı yapay zeka sistemlerinin güvenliğini önemli ölçüde tehlikeye atma potansiyeline sahiptir ve özellikle güvenliğin öncelikli olduğu alanlarda yüksek riskler oluşturur. Bu saldırılara karşı savunma yapmak ve hasmane tehditlere karşı daha dayanıklı mimariler geliştirmek için son yıllarda çok sayıda çalışma yapılmıştır. Bu tez çalışmasında, yeni saldırı ve savunma fikirleri geliştirmek için modelin Monte-Carlo Bırakma Örneklemesinden elde edilen çeşitli belirsizlik metriklerinin kullanımından yararlanıyoruz. Savunma tarafında, hasmane saldırılara karşı yapay sinir ağı modellerinin sağlamlığını artırmak için yeni bir tespit mekanizması ve belirsizliğe dayalı savunma yöntemi öneriyoruz. Saldırı tarafında, etkili hasmane örnekler oluşturmak için modelin kendi kayıp fonksiyonu ile birlikte modelin nihai olasılık çıktılarından elde edilen nicelleştirilmiş epistemik belirsizliği kullanıyoruz. Standart bilgisayarlı görü veri kümeleri üzerinde önerilen yaklaşımlarımızın etkinliğini deneysel olarak değerlendirdik ve doğruladık. | en_US |
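The abstract describes two ingredients: quantifying epistemic uncertainty from MC-Dropout samples of the model's final probability outputs, and combining that quantity with the model's own loss function to craft adversarial samples. The following is a minimal illustrative sketch in PyTorch, not the thesis's exact algorithm: it assumes a classifier with dropout layers, uses the summed predictive variance across MC samples as the epistemic proxy (the thesis may use a different decomposition of the predictive distribution), and the weight alpha, the step size eps, and the [0, 1] pixel range are placeholder assumptions.

import torch
import torch.nn.functional as F

def mc_dropout_probs(model, x, n_samples=20):
    # Sample softmax outputs with dropout left active (MC-Dropout).
    # Note: model.train() also switches BatchNorm to batch statistics;
    # a careful implementation would re-enable only the dropout layers.
    model.train()
    return torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])

def epistemic_uncertainty(probs):
    # Variance of the sampled class probabilities, summed over classes:
    # one common proxy for epistemic uncertainty (an assumption here).
    return probs.var(dim=0).sum(dim=-1)          # shape: (batch,)

def uncertainty_fgsm(model, x, y, eps=0.03, alpha=1.0, n_samples=20):
    # One FGSM-style ascent step on a hybrid objective: the model's own
    # cross-entropy loss plus a weighted epistemic-uncertainty term.
    x_adv = x.clone().detach().requires_grad_(True)
    probs = mc_dropout_probs(model, x_adv, n_samples)
    objective = (F.cross_entropy(model(x_adv), y)
                 + alpha * epistemic_uncertainty(probs).mean())
    objective.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

On the detection side, the same epistemic_uncertainty score, computed under torch.no_grad(), could be thresholded against a value calibrated on clean data, since adversarial samples tend to land in higher-uncertainty regions; this only loosely mirrors the detection mechanism the abstract mentions, whose details (including the closeness metric listed in the contents) are in the thesis itself.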
dc.description.tableofcontents | Vulnerabilities of AI-driven systems | en_US |
dc.description.tableofcontents | Importance of Uncertainty for AI-driven systems | en_US |
dc.description.tableofcontents | Motivation for Using Uncertainty Information | en_US |
dc.description.tableofcontents | Main Contributions of the Thesis Dissertation | en_US |
dc.description.tableofcontents | Organization of the Thesis Dissertation | en_US |
dc.description.tableofcontents | ADVERSARIAL MACHINE LEARNING | en_US |
dc.description.tableofcontents | Adversarial Attacks | en_US |
dc.description.tableofcontents | Formal Definition of Adversarial Sample | en_US |
dc.description.tableofcontents | Distance Metrics | en_US |
dc.description.tableofcontents | Attacker Objective | en_US |
dc.description.tableofcontents | Capability of the Attacker | en_US |
dc.description.tableofcontents | Adversarial Attack Types | en_US |
dc.description.tableofcontents | Fast Gradient Sign Method | en_US |
dc.description.tableofcontents | Iterative Gradient Sign Method | en_US |
dc.description.tableofcontents | Projected Gradient Descent | en_US |
dc.description.tableofcontents | Jacobian-based Saliency Map Attack (JSMA) | en_US |
dc.description.tableofcontents | Carlini & Wagner Attack | en_US |
dc.description.tableofcontents | DeepFool Attack | en_US |
dc.description.tableofcontents | HopSkipJump Attack | en_US |
dc.description.tableofcontents | Universal Adversarial Attack | en_US |
dc.description.tableofcontents | Adversarial Defense | en_US |
dc.description.tableofcontents | Defensive Distillation | en_US |
dc.description.tableofcontents | Adversarial Training | en_US |
dc.description.tableofcontents | MagNet | en_US |
dc.description.tableofcontents | Detection of Adversarial Samples | en_US |
dc.description.tableofcontents | UNCERTAINTY IN MACHINE LEARNING | en_US |
dc.description.tableofcontents | Types of Uncertainty in Machine Learning | en_US |
dc.description.tableofcontents | Epistemic Uncertainty | en_US |
dc.description.tableofcontents | Aleatoric Uncertainty | en_US |
dc.description.tableofcontents | Scibilic Uncertainty | en_US |
dc.description.tableofcontents | Quantifying Uncertainty in Deep Neural Networks | en_US |
dc.description.tableofcontents | Quantification of Epistemic Uncertainty via MC-Dropout Sampling | en_US |
dc.description.tableofcontents | Quantification of Aleatoric Uncertainty via MC-Dropout Sampling | en_US |
dc.description.tableofcontents | Quantification of Epistemic and Aleatoric Uncertainty via MC-Dropout Sampling | en_US |
dc.description.tableofcontents | Moment-Based Predictive Uncertainty Quantification | en_US |
dc.description.tableofcontents | ADVERSARIAL SAMPLE DETECTION | en_US |
dc.description.tableofcontents | Uncertainty Quantification | en_US |
dc.description.tableofcontents | Explanatory Research on Uncertainty Quantification Methods | en_US |
dc.description.tableofcontents | Proposed Closeness Metric | en_US |
dc.description.tableofcontents | Explanatory Research on our Closeness Metric | en_US |
dc.description.tableofcontents | Proposed Epistemic Uncertainty Based Attacks | en_US |
dc.description.tableofcontents | Fast Gradient Sign Method (Uncertainty-Based) | en_US |
dc.description.tableofcontents | Basic Iterative Attack (BIM-A Uncertainty-Based) | en_US |
dc.description.tableofcontents | Basic Iterative Attack (BIM-A Hybrid Approach) | en_US |
dc.description.tableofcontents | Basic Iterative Attack (BIM-B Hybrid Approach) | en_US |
dc.description.tableofcontents | Visualizing Gradient Path for Uncertainty-Based Attacks | en_US |
dc.description.tableofcontents | Visualizing Uncertainty Under Different Attack Variants | en_US |
dc.description.tableofcontents | Search For a More Efficient Attack Algorithm | en_US |
dc.description.tableofcontents | Rectified Basic Iterative Attack | en_US |
dc.description.tableofcontents | Attacker’s Capability | en_US |
dc.description.tableofcontents | Intuition Behind Using Uncertainty-based Reversal Process | en_US |
dc.description.tableofcontents | Uncertainty-Based Reversal Operation (see the sketch after this contents list) | en_US |
dc.description.tableofcontents | Enhanced Uncertainty-Based Reversal Operation | en_US |
dc.description.tableofcontents | The Usage of Uncertainty-based Reversal | en_US |
dc.description.tableofcontents | The Effect of Uncertainty-based Reversal | en_US |
dc.description.tableofcontents | Variants of the Enhanced Uncertainty-Based Reversal Operation | en_US |
dc.description.tableofcontents | Hybrid Deployment Options | en_US |
dc.description.tableofcontents | Via Adversarial Training | en_US |
dc.description.tableofcontents | Via Defensive Distillation | en_US |
dc.description.tableofcontents | The Effect on Clean Data Performance | en_US |
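The reversal chapters listed above describe restoring an input before classification by exploiting its quantified uncertainty. Below is a minimal sketch of one plausible reading, reusing the MC-Dropout epistemic proxy from the sketch after the abstract; the step count, step size, and the choice of signed gradient descent are illustrative assumptions, not the thesis's exact Uncertainty-Based Reversal Operation.

import torch
import torch.nn.functional as F

def uncertainty_reversal(model, x, steps=10, step_size=0.01, n_samples=20):
    # Iteratively nudge the input down the gradient of its quantified
    # epistemic uncertainty, so a perturbed sample drifts back toward a
    # low-uncertainty region before it is classified.
    model.train()                                  # keep dropout stochastic
    x_rev = x.clone().detach()
    for _ in range(steps):
        x_rev.requires_grad_(True)
        probs = torch.stack([F.softmax(model(x_rev), dim=-1)
                             for _ in range(n_samples)])
        u = probs.var(dim=0).sum(dim=-1).mean()    # epistemic proxy, as before
        grad, = torch.autograd.grad(u, x_rev)
        x_rev = (x_rev - step_size * grad.sign()).clamp(0.0, 1.0).detach()
    return x_rev

In a hybrid deployment of the kind the contents list suggests, such a reversal pass could precede an adversarially trained or distilled classifier; how the thesis combines these pieces, and the effect on clean-data performance, is covered in the chapters themselves.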
dc.identifier.citation | Tuna, Ö. F. (2022). Using uncertainty metrics in adversarial machine learning as an attack and defense tool. İstanbul: Işık Üniversitesi Lisansüstü Eğitim Enstitüsü. | en_US |
dc.identifier.uri | https://hdl.handle.net/11729/5538 | |
dc.institutionauthor | Tuna, Ömer Faruk | en_US |
dc.institutionauthorid | 0000-0002-6214-6262 | |
dc.language.iso | en | en_US |
dc.publisher | Işık Üniversitesi | en_US |
dc.relation.publicationcategory | Tez | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Deep neural networks | en_US |
dc.subject | Adversarial machine learning | en_US |
dc.subject | Uncertainty quantification | en_US |
dc.subject | Monte-Carlo dropout sampling | en_US |
dc.subject | Epistemic uncertainty | en_US |
dc.subject | Aleatoric uncertainty | en_US |
dc.subject | Scibilic uncertainty | en_US |
dc.subject | Derin sinir ağları | en_US |
dc.subject | Karşıt makine öğrenmesi | en_US |
dc.subject | Monte-Carlo bırakma örneklemesi | en_US |
dc.subject | Model belirsizliği | en_US |
dc.subject | Epistemik belirsizlik | en_US |
dc.subject | Rassal belirsizlik | en_US |
dc.subject | Bilinebilir belirsizlik | en_US |
dc.subject.lcc | QC793 .T86 U85 2022 | |
dc.subject.lcsh | Deep neural networks. | en_US |
dc.subject.lcsh | Adversarial machine learning. | en_US |
dc.subject.lcsh | Uncertainty quantification. | en_US |
dc.subject.lcsh | Monte-Carlo dropout sampling. | en_US |
dc.subject.lcsh | Epistemic uncertainty. | en_US |
dc.subject.lcsh | Aleatoric uncertainty. | en_US |
dc.subject.lcsh | Scibilic uncertainty. | en_US |
dc.title | Using uncertainty metrics in adversarial machine learning as an attack and defense tool | en_US |
dc.title.alternative | Belirsizlik metriklerinin hasmane makine öğrenmesinde saldırı ve savunma amaçlı kullanılması | en_US |
dc.type | Doctoral Thesis | en_US |
dspace.entity.type | Publication |
Files
Original bundle
- Name: Using_uncertainty_metrics_in_adversarial_machine_learning_as_an_attack_and_defense_tool.pdf
- Size: 15.37 MB
- Format: Adobe Portable Document Format
- Description: DoctoralThesis
License bundle
- Name: license.txt
- Size: 1.44 KB
- Description: Item-specific license agreed upon to submission