Search Results
Listing 1 - 5 of 5
Publication: On the inverse point-source problem of the Poisson equation (Istanbul University, 2005) Yılmaz, Melek; Şengül, Metin; Geçkinli, Melih

In this work, a basic inverse heat conduction problem of a simple 2-D model with a steady-state heat source is considered. The physical problem is a square region with uniform thermophysical properties and a point heat source of unit magnitude. To obtain boundary data, temperature probes are placed at the midpoints of the sides of the square domain. The objective of the inverse problem is to estimate the coordinates of the point source with the least amount of data. Initially, the inverse problem is analyzed to determine the main causes that render it ill-conditioned. As for the solution, among the methods that have been tried so far, the best results are obtained from a backpropagating ANN with four-probe data. When white Gaussian noise is added to the measurements, no catastrophic failure has been observed.

Publication: Adaptive convolution kernel for artificial neural networks (Academic Press Inc., 2021-02) Tek, Faik Boray; Çam, İlker; Karlı, Deniz

Many deep neural networks are built from stacked convolutional layers of fixed, single-size (often 3 × 3) kernels. This paper describes a method for learning the size of convolutional kernels, providing varying-size kernels within a single layer. The method utilizes a differentiable, and therefore backpropagation-trainable, Gaussian envelope which can grow or shrink in a base grid. Our experiments compared the proposed adaptive layers to ordinary convolution layers in a simple two-layer network, a deeper residual network, and a U-Net architecture. The results on popular image classification datasets such as MNIST, MNIST-CLUTTERED, CIFAR-10, Fashion, and "Faces in the Wild" showed that the adaptive kernels can provide statistically significant improvements over ordinary convolution kernels. A segmentation experiment on the Oxford-Pets dataset demonstrated that replacing ordinary convolution layers in a U-shaped network with 7 × 7 adaptive layers can improve its learning performance and ability to generalize.

Publication: A novel similarity based unsupervised technique for training convolutional filters (IEEE, 2023-05-17) Erkoç, Tuğba; Eskil, Mustafa Taner

Achieving satisfactory results with Convolutional Neural Networks (CNNs) depends on how effectively the filters are trained. Conventionally, an appropriate number of filters is carefully selected, the filters are initialized with a proper initialization method, and they are trained with backpropagation over several epochs. This training scheme requires a large labeled dataset, which is costly and time-consuming to obtain. In this study, we propose an unsupervised approach that extracts convolutional filters from a given dataset in a self-organized manner by processing the training set only once, without using backpropagation training. The proposed method allows for the extraction of filters from a given dataset in the absence of labels. In contrast to previous studies, we no longer need to select the best number of filters or a suitable filter weight initialization scheme. Applying this method to the MNIST, EMNIST-Digits, Kuzushiji-MNIST, and Fashion-MNIST datasets yields high test performances of 99.19%, 99.39%, 95.03%, and 90.11%, respectively, without applying backpropagation training or using any preprocessed or augmented data.

Publication: Odaklanan nöron [Focusing neuron] (IEEE, 2017-06-27) Çam, İlker; Tek, Faik Boray

In a conventional artificial neural network, the topology is not flexible enough to change during training. Every neuron in the network and its independent connection weights are part of the solution function. The focusable neuron proposed in this paper employs a focusing function from which the mutually dependent weights are drawn. By changing its focus position and aperture, the neuron can change the set of neurons from which it collects activation. Thanks to this property, it can form a flexible and dynamic network topology and can be trained with the standard backpropagation algorithm. In our experiments, a network built with focusable neurons was observed to achieve higher accuracy than a fully connected artificial neural network.

Publication: Adaptive locally connected recurrent unit (ALCRU) (Springer Science and Business Media Deutschland GmbH, 2025-07-03) Özçelik, Şuayb Talha; Tek, Faik Boray

Research has shown that adaptive locally connected neurons outperform their fully connected (dense) counterparts, motivating this study on the development of the Adaptive Locally Connected Recurrent Unit (ALCRU). ALCRU modifies the Simple Recurrent Neuron Model (SimpleRNN) by incorporating spatial coordinate spaces for the input and hidden state vectors, facilitating the learning of parametric local receptive fields. These modifications add four trainable parameters per neuron, resulting in a minor increase in computational complexity. ALCRU is implemented using standard frameworks and trained with back-propagation-based optimizers. We evaluate the performance of ALCRU on diverse benchmark datasets, including IMDb for sentiment analysis, AdditionRNN for sequence modelling, and the Weather dataset for time-series forecasting. Results show that ALCRU achieves accuracy and loss metrics comparable to GRU and LSTM while consistently outperforming SimpleRNN. In particular, experiments with longer sequence lengths on AdditionRNN and increased input dimensions on IMDb highlight ALCRU's superior scalability and efficiency in processing complex data sequences. In terms of computational efficiency, ALCRU demonstrates a considerable speed advantage over gated models such as LSTM and GRU, though it is slower than SimpleRNN. These findings suggest that adaptive local connectivity enhances both the accuracy and efficiency of recurrent neural networks, offering a promising alternative to standard architectures.
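The adaptive convolution kernel entry above describes a differentiable Gaussian envelope that grows or shrinks over a base kernel grid. A minimal NumPy sketch of that idea (function names are hypothetical, and this is not the authors' implementation):

```python
import numpy as np

def gaussian_envelope(size, sigma):
    """2-D Gaussian envelope over a size x size base grid, centered on the grid.

    sigma plays the role of a trainable aperture: a large sigma leaves the
    whole base grid active, while a small sigma shrinks the effective kernel
    toward its center.
    """
    coords = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(coords, coords)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def adaptive_kernel(weights, sigma):
    """Effective kernel = raw weights modulated by the envelope.

    Because the envelope is differentiable in sigma, gradients can flow into
    sigma during backpropagation, so the effective kernel size is learned.
    """
    return weights * gaussian_envelope(weights.shape[0], sigma)
```

With a 7 × 7 weight grid and a small sigma, the effective kernel behaves like a much smaller one; as sigma grows, the full 7 × 7 support is recovered.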

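The similarity-based unsupervised entry describes extracting filters in a single pass over the training set, without choosing the number of filters in advance. One way such a scheme can work is to keep a patch as a new filter only when it is dissimilar from all filters collected so far; the sketch below illustrates that general idea (threshold and names are assumptions, not the authors' exact algorithm):

```python
import numpy as np

def extract_filters(patches, threshold=0.9):
    """Single-pass, similarity-based filter extraction (illustrative sketch).

    Each patch is L2-normalized; it becomes a new filter only if its cosine
    similarity to every filter collected so far is below `threshold`. The
    number of filters therefore emerges from the data instead of being chosen
    in advance, and no labels or backpropagation are needed.
    """
    filters = []
    for p in patches:
        v = p.ravel().astype(float)
        n = np.linalg.norm(v)
        if n == 0:
            continue  # skip empty patches
        v /= n
        if all(np.dot(v, f) < threshold for f in filters):
            filters.append(v)
    return np.array(filters)
```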

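The focusing-neuron abstract describes a neuron whose focus position and aperture determine which inputs it collects activation from, with both parameters remaining trainable by backpropagation. A minimal one-dimensional sketch of that mechanism (names and the [0, 1] coordinate layout are assumptions):

```python
import numpy as np

def focused_weights(w, mu, sigma):
    """Raw weights w over n inputs, modulated by a Gaussian focus window.

    mu (in [0, 1]) is the focus position and sigma the aperture; shifting mu
    or widening sigma changes which inputs the neuron effectively attends to,
    and both stay differentiable, so standard backpropagation applies.
    """
    n = w.shape[0]
    idx = np.linspace(0.0, 1.0, n)
    focus = np.exp(-((idx - mu) ** 2) / (2.0 * sigma ** 2))
    return w * focus

def focused_neuron(x, w, mu, sigma, b=0.0):
    """Pre-activation of a single focusable neuron."""
    return float(np.dot(x, focused_weights(w, mu, sigma)) + b)
```

With a narrow aperture, moving mu from 0 to 1 slides the neuron's effective receptive field from the first inputs to the last ones, which is the dynamic-topology behavior the abstract describes.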

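The ALCRU abstract describes a SimpleRNN-style unit whose input and hidden connections pass through parametric local receptive fields, adding four trainable parameters per neuron. A sketch of one such recurrent step, using Gaussian receptive-field masks over coordinates laid out on [0, 1] (an assumed layout; this is not the published implementation):

```python
import numpy as np

def local_mask(n, mu, sigma):
    """Gaussian receptive-field mask over n coordinates laid out on [0, 1]."""
    coords = np.linspace(0.0, 1.0, n)
    return np.exp(-((coords - mu) ** 2) / (2.0 * sigma ** 2))

def alcru_step(x, h, Wx, Wh, b, mu_x, sig_x, mu_h, sig_h):
    """One recurrent step of an ALCRU-style unit (illustrative sketch).

    As in SimpleRNN, h' = tanh(Wx x + Wh h + b), except neuron i sees the
    input and previous hidden state through Gaussian receptive fields given
    by (mu_x[i], sig_x[i], mu_h[i], sig_h[i]): the four extra trainable
    parameters per neuron mentioned in the abstract.
    """
    n_h, n_x = Wx.shape
    pre = np.empty(n_h)
    for i in range(n_h):
        wx = Wx[i] * local_mask(n_x, mu_x[i], sig_x[i])
        wh = Wh[i] * local_mask(n_h, mu_h[i], sig_h[i])
        pre[i] = wx @ x + wh @ h + b[i]
    return np.tanh(pre)
```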