Search Results

Listing 1 - 6 of 6
  • Publication
    Developing an efficient deep neural network for automatic detection of COVID-19 using chest X-ray images
    (Elsevier B.V., 2021-06) Sheykhivand, Sobhan; Mousavi, Zohreh; Mojtahedi, Sina; Yousefi Rezaii, Tohid; Farzamnia, Ali; Meshgini, Saeed; Saad, Ismail
    The novel coronavirus (COVID-19) can be described as the greatest human challenge of the 21st century. The spread and transmission of the disease have increased mortality in all countries, so rapid diagnosis of COVID-19 is necessary to treat and control it. In this paper, a new method for the automatic identification of pneumonia (including COVID-19) is presented using a proposed deep neural network. The method uses chest X-ray images to separate between 2 and 4 classes in 7 practical scenarios drawn from the healthy, viral, bacterial, and COVID-19 categories. In the proposed architecture, Generative Adversarial Networks (GANs) are used together with a fusion of deep transfer learning and LSTM networks, with no separate feature extraction/selection step for pneumonia classification. We achieved more than 90% accuracy in all scenarios except one, and 99% accuracy in separating COVID-19 from the healthy group. We also compared the proposed network with other deep transfer learning networks (including Inception-ResNet V2, Inception V4, VGG16, and MobileNet) that have recently been widely used in pneumonia detection studies. The results of the proposed network were very promising in terms of accuracy, precision, sensitivity, and specificity compared to the other deep transfer learning approaches. Given its high performance, the proposed method can be used to support the treatment of patients.
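The abstract above describes fusing pretrained-CNN features with an LSTM classification head. As a rough illustration of that fusion idea only (not the authors' network), the sketch below uses NumPy, a frozen random projection as a stand-in for the pretrained extractor, and toy dimensions; all weights, names, and sizes here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(seq, Wx, Wh, b):
    """Run a minimal LSTM over seq of shape (T, d_in); return the final hidden state."""
    hdim = Wh.shape[0]
    h = np.zeros(hdim)
    c = np.zeros(hdim)
    for x in seq:
        z = x @ Wx + h @ Wh + b                  # all four gates in one affine map
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Frozen "feature extractor" weights stand in for the pretrained CNN.
d_in, hdim, n_classes = 32, 16, 4
rng_proj = rng.normal(size=(28 * 28 // 8, d_in)) * 0.1
Wx = rng.normal(size=(d_in, 4 * hdim)) * 0.1
Wh = rng.normal(size=(hdim, 4 * hdim)) * 0.1
b = np.zeros(4 * hdim)
Wout = rng.normal(size=(hdim, n_classes)) * 0.1

def classify_xray(image):
    """Split the image into 8 row bands, embed each band with the frozen
    extractor, run the LSTM over the band sequence, and apply softmax
    over the 4 classes (healthy / viral / bacterial / COVID-19)."""
    feats = image.reshape(8, -1)                 # 8 "timesteps" of flattened pixels
    seq = np.tanh(feats @ rng_proj)              # (8, d_in) feature sequence
    logits = lstm_forward(seq, Wx, Wh, b) @ Wout
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = classify_xray(rng.normal(size=(28, 28)))
```

In the paper the extractor would be a trained transfer-learning backbone and the GAN would augment the training images; this sketch only shows how a feature sequence can feed an LSTM head without a hand-crafted feature-selection stage.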
  • Publication
    Adaptive convolution kernel for artificial neural networks
    (Academic Press Inc., 2021-02) Tek, Faik Boray; Çam, İlker; Karlı, Deniz
    Many deep neural networks are built by using stacked convolutional layers of fixed and single size (often 3 × 3) kernels. This paper describes a method for learning the size of convolutional kernels to provide varying size kernels in a single layer. The method utilizes a differentiable, and therefore backpropagation-trainable Gaussian envelope which can grow or shrink in a base grid. Our experiments compared the proposed adaptive layers to ordinary convolution layers in a simple two-layer network, a deeper residual network, and a U-Net architecture. The results in the popular image classification datasets such as MNIST, MNIST-CLUTTERED, CIFAR-10, Fashion, and ‘‘Faces in the Wild’’ showed that the adaptive kernels can provide statistically significant improvements on ordinary convolution kernels. A segmentation experiment in the Oxford-Pets dataset demonstrated that replacing ordinary convolution layers in a U-shaped network with 7 × 7 adaptive layers can improve its learning performance and ability to generalize.
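The envelope mechanism described above can be sketched directly: a Gaussian window over the kernel's base grid, elementwise-multiplied with the trainable weights, so that a small width behaves like a small kernel and a large width recovers the full grid. This is a hedged NumPy illustration; function names and sizes are my own, not from the paper.

```python
import numpy as np

def gaussian_envelope(size, sigma):
    """Gaussian envelope over a size x size base grid.

    The envelope is 1 at the kernel centre and decays with distance,
    and it is differentiable in sigma, so sigma can be trained by
    backpropagation: a small sigma effectively shrinks the kernel,
    a large sigma lets it use the full base grid.
    """
    half = (size - 1) / 2.0
    u = np.arange(size) - half                   # grid coordinates centred at 0
    uu, vv = np.meshgrid(u, u)
    return np.exp(-(uu ** 2 + vv ** 2) / (2.0 * sigma ** 2))

def adaptive_kernel(base_weights, sigma):
    """Elementwise product of the trainable weights and the envelope."""
    return base_weights * gaussian_envelope(base_weights.shape[0], sigma)

# A 7 x 7 base kernel whose effective size is controlled by sigma.
rng = np.random.default_rng(0)
w = rng.normal(size=(7, 7))
small = adaptive_kernel(w, sigma=0.8)            # behaves like a small kernel
large = adaptive_kernel(w, sigma=5.0)            # close to the full 7 x 7 kernel
```

In an actual layer, `sigma` would be a per-kernel learnable parameter updated alongside the weights; the envelope keeps the convolution itself unchanged.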
  • Publication
    Closeness and uncertainty aware adversarial examples detection in adversarial machine learning
    (Elsevier Ltd, 2022-07) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    While deep learning models are thought to be resistant to random perturbations, it has been demonstrated that these architectures are vulnerable to deliberately crafted perturbations, albeit quasi-imperceptible ones. These vulnerabilities make it challenging to deploy Deep Neural Network (DNN) models in security-critical areas. Recently, many studies have been conducted to develop defense techniques that enable more robust models. In this paper, we target detecting adversarial samples by differentiating them from their clean equivalents, and we investigate various metrics for doing so. We first leverage moment-based predictive uncertainty estimates of DNN classifiers derived through Monte-Carlo (MC) Dropout sampling. We also introduce a new method that operates in the subspace of deep features obtained by the model. We verified the effectiveness of our approach on different datasets. Our experiments show that these approaches complement each other, and the combined use of all metrics yields a 99% ROC-AUC adversarial detection score for well-known attack algorithms.
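The moment-based MC Dropout estimate mentioned above amounts to keeping dropout active at test time, running several stochastic forward passes, and reading off the mean and variance of the class probabilities. A minimal sketch, assuming a toy two-layer softmax classifier with random weights (all names and dimensions are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p_drop=0.5, T=100):
    """T stochastic forward passes with dropout kept active at test time.

    Returns the predictive mean and the per-class variance; a high
    variance flags inputs (e.g. adversarial samples) on which the
    model is uncertain.
    """
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop     # fresh dropout mask per pass
        h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        preds.append(e / e.sum())
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

W1 = rng.normal(size=(10, 32)) * 0.3
W2 = rng.normal(size=(32, 3)) * 0.3
mean, var = mc_dropout_predict(rng.normal(size=10), W1, W2)
```

A detector would then threshold a summary of `var` (or combine it with the paper's deep-feature closeness metric) to separate adversarial inputs from clean ones.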
  • Publication
    Recovery of impenetrable rough surface profiles via CNN-based deep learning architecture
    (Taylor and Francis Ltd., 2022-08-18) Aydın, İzde; Budak, Güven; Sefer, Ahmet; Yapar, Ali
    In this paper, a convolutional neural network (CNN)-based deep learning (DL) architecture for the solution of an electromagnetic inverse problem, the imaging of the shape of perfectly electric conducting (PEC) rough surfaces, is presented. The rough surface is illuminated by a plane wave, and the scattered field data are obtained synthetically through the numerical solution of surface integral equations. An effective CNN-DL architecture is implemented by modelling the rough-surface variation in terms of convenient spline-type basis functions. The algorithm is numerically tested in various scenarios, including amplitude-only data, and is shown to be very effective and useful.
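The spline-type basis idea can be illustrated with a small sketch: a surface profile reconstructed as a weighted sum of shifted cubic B-spline bumps, so a network only has to predict a short coefficient vector rather than a dense height map. The specific basis, grid, and sizes below are assumptions for illustration, not the paper's parameterization.

```python
import numpy as np

def bspline_bump(t):
    """Classic cubic B-spline basis bump, supported on [-2, 2]."""
    a = np.abs(t)
    out = np.zeros_like(a)
    inner = a < 1
    outer = (a >= 1) & (a < 2)
    out[inner] = (4 - 6 * a[inner] ** 2 + 3 * a[inner] ** 3) / 6.0
    out[outer] = (2 - a[outer]) ** 3 / 6.0
    return out

def surface_from_coeffs(coeffs, n_points=200):
    """Reconstruct a rough-surface profile as a weighted sum of shifted
    B-spline bumps placed on an integer knot grid."""
    x = np.linspace(0, len(coeffs) - 1, n_points)
    profile = np.zeros(n_points)
    for k, c in enumerate(coeffs):
        profile += c * bspline_bump(x - k)
    return profile

# A hypothetical coefficient vector, e.g. the output of the CNN.
coeffs = np.array([0.0, 0.3, -0.2, 0.5, 0.1, -0.4, 0.0])
profile = surface_from_coeffs(coeffs)
```

The inverse problem then reduces to learning the map from (possibly amplitude-only) scattered-field data to `coeffs`, with the spline expansion guaranteeing a smooth reconstructed profile.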
  • Publication
    TENET: a new hybrid network architecture for adversarial defense
    (Springer Science and Business Media Deutschland GmbH, 2023-08) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network (DNN) models are widely regarded as resistant to random perturbations. However, researchers have found that these models are in fact extremely vulnerable to deliberately crafted and seemingly imperceptible perturbations of the input, referred to as adversarial examples. Adversarial attacks can substantially compromise the security of DNN-powered systems and pose high risks, especially in areas where security is a top priority. Numerous studies have been conducted in recent years to defend against these attacks and to develop more robust architectures. In this study, we propose a new architecture and enhance a recently proposed technique by which adversarial samples can be restored to their original class manifold. We leverage several uncertainty metrics obtained from Monte Carlo dropout (MC Dropout) estimates of the model, together with the model's own loss function, and combine them with the defensive distillation technique to defend against these attacks. We experimentally evaluated and verified the efficacy of our approach on the MNIST (Digit), MNIST (Fashion), and CIFAR10 datasets, showing that the proposed method reduces the attack success rate to below 5% without compromising clean accuracy.
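The defensive distillation component mentioned above rests on a temperature-scaled softmax: a teacher is trained at high temperature so its outputs are smoothed "soft labels" for a student network, which flattens the gradients an attacker can exploit. A minimal sketch with toy logits and my own function names (not the authors' implementation):

```python
import numpy as np

def softmax_t(logits, T=1.0):
    """Temperature-scaled softmax used in defensive distillation:
    dividing the logits by T > 1 smooths the output distribution."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    return -np.sum(p * np.log(p + 1e-12))

logits = np.array([6.0, 2.0, 1.0, 0.5])
hard = softmax_t(logits, T=1.0)      # near one-hot: steep, attackable gradients
soft = softmax_t(logits, T=20.0)     # smoothed soft labels the student distils from
```

In the full scheme the student is trained on `soft`-style targets at the same high temperature and then deployed at T = 1, which is what blunts gradient-based attacks.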
  • Publication
    Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
    (Springer, 2022-03) Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner
    Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research has concentrated on utilizing the model loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attacks in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the difficulty the target model has in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the epistemic uncertainty of the model. Our results show that the proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
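An uncertainty-driven perturbation of this kind can be sketched with a finite-difference estimate: fix the dropout masks so the MC-variance estimate is deterministic, then take an FGSM-style step along the gradient of that variance proxy instead of the loss. Everything below (toy model, fixed-mask trick, step sizes) is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 32)) * 0.3
W2 = rng.normal(size=(32, 3)) * 0.3
MASKS = rng.random((50, 32)) >= 0.5          # fixed dropout masks: the
                                             # uncertainty estimate is deterministic

def predictive_variance(x):
    """Total variance of the class probabilities over the fixed MC masks,
    used as a proxy for the model's epistemic uncertainty at x."""
    preds = []
    for m in MASKS:
        h = np.maximum(x @ W1, 0.0) * m / 0.5
        logits = h @ W2
        e = np.exp(logits - logits.max())
        preds.append(e / e.sum())
    return np.stack(preds).var(axis=0).sum()

def uncertainty_attack(x, eps=0.1, delta=1e-3):
    """FGSM-style step along a central-difference estimate of the gradient
    of the uncertainty proxy, pushing x toward high-uncertainty regions."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = delta
        grad[i] = (predictive_variance(x + d)
                   - predictive_variance(x - d)) / (2 * delta)
    return x + eps * np.sign(grad)

x = rng.normal(size=8)
x_adv = uncertainty_attack(x)
```

The paper's hybrid attack combines an uncertainty term of this flavour with the usual loss-based direction; with autodiff frameworks the finite differences would of course be replaced by exact gradients.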