Search Results

Showing 1 - 5 of 5
  • Publication
    Convolutional neural network (CNN) algorithm based facial emotion recognition (FER) system for FER-2013 dataset
    (IEEE, 2022-11-18) Ezerceli, Özay; Eskil, Mustafa Taner
    Facial expression recognition (FER) is key to understanding human emotions and feelings. It is an active area of research, since human emotional states can be collected, processed, and used in customer satisfaction, politics, and medical domains. Automated FER systems have been developed to recognize human emotions, but FER remains a challenging problem in machine learning due to high intra-class variation. Early models used well-known methods such as Support Vector Machines (SVM), the Bayes classifier, fuzzy techniques, feature selection, and Artificial Neural Networks (ANN), but several limitations still critically affect accuracy, such as subjectivity, occlusion, pose, low resolution, scale, and illumination variation. The representational power of CNNs boosts FER accuracy, and deep learning algorithms have emerged in recent years as the most effective way to obtain strong FER results. Various datasets are used to train, test, and validate the models; FER2013, CK+, JAFFE, and FERG are among the most popular. To improve the accuracy of FER models, either a single dataset or a mix of datasets has been employed, but every dataset has limitations and issues that affect the model trained on it. To address this problem, we implemented a state-of-the-art model based on deep learning architectures, particularly convolutional neural networks (CNN), together with supporting techniques. The proposed model achieved 93.7% accuracy on FER2013 when trained on the combination of the FER2013 and CK+ datasets.
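    The abstract names the architecture family but not its layers. As a minimal sketch only: a small PyTorch CNN for FER-2013-style input (48x48 grayscale faces, 7 emotion classes). The layer sizes are illustrative assumptions, not the authors' published model.

      # Minimal CNN sketch for FER-2013-style input (48x48 grayscale, 7 emotion
      # classes). Layer sizes are illustrative assumptions, not the paper's model.
      import torch
      import torch.nn as nn

      class FerCnn(nn.Module):
          def __init__(self, num_classes: int = 7):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
                  nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Dropout(0.5),
                  nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
                  nn.Linear(256, num_classes),  # raw logits; softmax lives in the loss
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              return self.classifier(self.features(x))

      logits = FerCnn()(torch.randn(8, 1, 48, 48))  # batch of 8 faces -> (8, 7)
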
  • Publication
    Unreasonable effectiveness of last hidden layer activations for adversarial robustness
    (Institute of Electrical and Electronics Engineers Inc., 2022) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    In standard Deep Neural Network (DNN) based classifiers, the general convention is to omit the activation function in the last (output) layer and apply the softmax function directly to the logits to obtain the probability score of each class. In this type of architecture, the loss value of the classifier for any output class is directly proportional to the difference between the final probability score and the label value of the associated class. Standard white-box adversarial evasion attacks, whether targeted or untargeted, mainly try to exploit the gradient of the model's loss function to craft adversarial samples and fool the model. In this study, we show both mathematically and experimentally that using some widely known activation functions in the output layer of the model, with high temperature values, has the effect of zeroing out the gradients for both targeted and untargeted attack cases, preventing attackers from exploiting the model's loss function to craft adversarial samples. We experimentally verified the efficacy of our approach on the MNIST (Digit) and CIFAR10 datasets. Detailed experiments confirmed that our approach substantially improves robustness against gradient-based targeted and untargeted attack threats. We also showed that the increased non-linearity at the output layer has additional benefits against other attack methods such as the DeepFool attack.
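    A minimal sketch of the mechanism described above, assuming tanh as the output activation and an arbitrary temperature value (both are illustrative choices, not necessarily the paper's exact formulation): a bounded activation scaled by a high temperature saturates the softmax, so a correctly classified sample yields near-zero loss and near-zero input gradients, starving gradient-following attacks such as FGSM or PGD of signal.

      # Saturating the output layer: a bounded activation (tanh, assumed here)
      # times a high temperature pushes the softmax to a numerically one-hot
      # output, so the loss gradient an attacker needs vanishes.
      import torch

      T = 1000.0                                    # assumed high temperature
      logits = torch.randn(1, 10, requires_grad=True)
      label = logits.argmax(dim=1)                  # a correctly classified input

      loss = torch.nn.functional.cross_entropy(torch.tanh(logits) * T, label)
      loss.backward()
      print(loss.item(), logits.grad.abs().max().item())  # both ~ 0
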
  • Publication
    Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
    (Elsevier Ltd, 2022-07) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted perturbations, despite those perturbations being quasi-imperceptible. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many research studies have been conducted to develop defense techniques that enable more robust models. In this paper, we aim to detect adversarial samples by differentiating them from their clean equivalents. We investigate various metrics for detecting adversarial samples. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout Sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verified the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and that combined usage of all metrics yields a 99% ROC-AUC adversarial detection score for well-known attack algorithms.
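    A minimal sketch of a moment-based MC Dropout uncertainty score of the kind the abstract mentions, assuming a PyTorch model that contains dropout layers; the score definition and the thresholding step are illustrative, not the paper's exact metrics.

      # Moment-based uncertainty score via MC Dropout: run several stochastic
      # forward passes and measure how much the class probabilities vary.
      # The exact score and the model are illustrative assumptions.
      import torch

      def mc_dropout_score(model: torch.nn.Module, x: torch.Tensor,
                           n_samples: int = 50) -> torch.Tensor:
          model.train()  # keep dropout layers active at inference time
          with torch.no_grad():
              probs = torch.stack([torch.softmax(model(x), dim=1)
                                   for _ in range(n_samples)])   # (T, B, C)
          return probs.var(dim=0).sum(dim=1)  # one scalar score per input

      # inputs whose score exceeds a validation-tuned threshold are flagged
      # as likely adversarial, e.g.: flags = mc_dropout_score(net, batch) > tau
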
  • Publication
    TENET: a new hybrid network architecture for adversarial defense
    (Springer Science and Business Media Deutschland GmbH, 2023-08) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network (DNN) models are widely renowned for their resistance to random perturbations. However, researchers have found that these models are extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, referred to as adversarial examples. Adversarial attacks can substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures resistant to adversarial threats. In this study, we propose a new architecture and enhance a recently proposed technique by which we can restore adversarial samples to their original class manifold. We leverage several uncertainty metrics obtained from Monte Carlo dropout (MC Dropout) estimates of the model, together with the model's own loss function, and combine them with the defensive distillation technique to defend against these attacks. We experimentally evaluated and verified the efficacy of our approach on the MNIST (Digit), MNIST (Fashion), and CIFAR10 datasets. In our experiments, we showed that our proposed method reduces the attack success rate to below 5% without compromising clean accuracy.
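    As a rough sketch of the restoration idea, under heavy assumptions: a single uncertainty metric, a fixed step size, and pixel values in [0, 1]. The paper combines several metrics with the model's loss and defensive distillation, which this sketch does not reproduce.

      # Restoration sketch: nudge a suspect input downhill on an MC Dropout
      # uncertainty score to move it back toward the clean class manifold.
      # Step size, iteration count, the single-metric objective, and the
      # [0, 1] pixel range are assumptions, not the paper's exact procedure.
      import torch

      def restore(model, x, steps: int = 10, eps: float = 0.01, n_mc: int = 20):
          model.train()                        # dropout active for MC sampling
          x = x.clone().detach()
          for _ in range(steps):
              x.requires_grad_(True)
              probs = torch.stack([torch.softmax(model(x), dim=1)
                                   for _ in range(n_mc)])
              uncertainty = probs.var(dim=0).sum()      # moment-based score
              grad, = torch.autograd.grad(uncertainty, x)
              x = (x - eps * grad.sign()).clamp(0, 1).detach()
          return x
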
  • Publication
    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
    (Springer, 2022-03) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout Sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the epistemic uncertainty of the model. Our results show that our proposed hybrid attack approach increases the attack success rates from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
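    A minimal sketch of the hybrid attack idea, assuming an FGSM-style single step and a simple weighted sum of the loss and uncertainty terms; both choices are illustrative, not the paper's exact procedure.

      # Hybrid attack step: ascend on the model loss plus an epistemic
      # uncertainty term from MC Dropout. The weighting, the single FGSM-style
      # step, and the [0, 1] pixel range are illustrative assumptions.
      import torch
      import torch.nn.functional as F

      def hybrid_step(model, x, y, eps: float = 0.03, lam: float = 1.0,
                      n_mc: int = 20):
          model.train()                          # dropout active for sampling
          x = x.clone().detach().requires_grad_(True)
          probs = torch.stack([torch.softmax(model(x), dim=1)
                               for _ in range(n_mc)])
          uncertainty = probs.var(dim=0).sum()   # epistemic term to maximize
          loss = F.cross_entropy(model(x), y)    # standard loss term
          (loss + lam * uncertainty).backward()
          return (x + eps * x.grad.sign()).clamp(0, 1).detach()
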