Search Results

Showing 1 - 3 of 3
  • Publication
    Using uncertainty metrics in adversarial machine learning as an attack and defense tool
    (Işık Üniversitesi, 2022-12-19) Tuna, Ömer Faruk; Eskil, Mustafa Taner; Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Doktora Programı
    Deep Neural Network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are extremely vulnerable to deliberately crafted, seemingly imperceptible perturbations of the input, known as adversarial samples. Adversarial attacks can substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures resistant to adversarial threats. In this thesis study, we leverage various uncertainty metrics obtained from MC-Dropout estimates of the model to develop new attack and defense ideas. On the defense side, we propose a new adversarial detection mechanism and an uncertainty-based defense method to increase the robustness of DNN models against adversarial evasion attacks. On the attack side, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate effective adversarial samples. We experimentally evaluated and verified the efficacy of our proposed approaches on standard computer vision datasets. (A minimal MC-Dropout sketch appears after this list.)
  • Publication
    Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty
    (Springer, 2022-04-02) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Although state-of-the-art deep neural network models are known to be robust to random perturbations, it has been verified that these architectures are quite vulnerable to deliberately crafted, albeit quasi-imperceptible, perturbations of the input. These vulnerabilities make it challenging to deploy deep neural network models in areas where security is a critical concern. In recent years, many research studies have been conducted to develop new attack methods and new defense techniques that enable more robust and reliable models. In this study, we use the quantified epistemic uncertainty obtained from the model's final probability outputs, along with the model's own loss function, to generate more effective adversarial samples. We also propose a novel defense approach against attacks such as DeepFool, which produce adversarial samples located near the model's decision boundary. We verified the effectiveness of our attack method on the MNIST (Digit), MNIST (Fashion) and CIFAR-10 datasets. In our experiments, our proposed uncertainty-based reversal method achieved a worst-case success rate of around 95% without compromising clean accuracy. (A minimal reversal sketch appears after this list.)
  • Publication
    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
    (Cornell Univ, 2021-02-13) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, almost all of the research so far has concentrated on utilising the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty obtained from Monte-Carlo Dropout Sampling for adversarial attack purposes, perturbing the input toward regions the model has not seen before. We propose new attack ideas based on the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.40%, from 82.86% to 89.92% and from 88.06% to 90.03% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively. (A minimal hybrid-step sketch appears after this list.)
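
The sketches below illustrate, in simplified form, the techniques described in the three abstracts above. They are minimal PyTorch sketches written for this listing, not the authors' code: every function name and hyperparameter (MC sample count, step sizes, weighting factors) is an assumption, as is the choice of predictive entropy as the uncertainty metric. First, the MC-Dropout uncertainty estimation underlying the thesis: dropout is kept active at inference time and the spread of several stochastic forward passes is summarized as an uncertainty score.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def mc_dropout_probs(model: nn.Module, x: torch.Tensor,
                         n_samples: int = 50) -> torch.Tensor:
        """Collect n_samples stochastic softmax outputs with dropout kept active."""
        model.eval()
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()  # re-enable dropout only; batch-norm etc. stay in eval mode
        with torch.no_grad():
            return torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])

    def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
        """Entropy of the MC-mean distribution: one simple uncertainty metric."""
        mean_p = probs.mean(dim=0)  # (batch, n_classes)
        return -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=-1)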
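
Second, a sketch in the spirit of the reversal defense from the Springer paper: a suspected adversarial input, which DeepFool-style attacks leave close to the decision boundary, is nudged in the direction that lowers predictive entropy, ideally back across the boundary. The step size and iteration count are assumptions; imports are as in the first sketch.

    def uncertainty_reversal(model: nn.Module, x: torch.Tensor,
                             step: float = 0.01, n_steps: int = 10) -> torch.Tensor:
        """Move the input against the entropy gradient to reduce model uncertainty."""
        x_rev = x.clone().detach()
        for _ in range(n_steps):
            x_rev.requires_grad_(True)
            p = F.softmax(model(x_rev), dim=-1)
            entropy = -(p * torch.log(p + 1e-12)).sum(dim=-1).mean()
            grad = torch.autograd.grad(entropy, x_rev)[0]
            # Descend the entropy; clamp to keep a valid image range.
            x_rev = (x_rev - step * grad.sign()).clamp(0.0, 1.0).detach()
        return x_rev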
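
Finally, a sketch of a hybrid attack step in the spirit of the third paper: one signed-gradient step on an objective mixing the classification loss with MC-Dropout predictive entropy, pushing the input both toward misclassification and toward regions the model is uncertain about. The mixing weight alpha, eps and n_mc are assumptions, and the loss here is an NLL over the MC-mean distribution rather than the authors' exact formulation.

    def hybrid_attack_step(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                           eps: float = 0.1, alpha: float = 0.5,
                           n_mc: int = 10) -> torch.Tensor:
        """One FGSM-style step on alpha * loss + (1 - alpha) * predictive entropy."""
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()  # keep the forward passes stochastic
        x_adv = x.clone().detach().requires_grad_(True)
        mean_p = torch.stack(
            [F.softmax(model(x_adv), dim=-1) for _ in range(n_mc)]).mean(dim=0)
        entropy = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=-1).mean()
        loss = F.nll_loss(torch.log(mean_p + 1e-12), y)  # NLL on the MC-mean
        grad = torch.autograd.grad(alpha * loss + (1 - alpha) * entropy, x_adv)[0]
        return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()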