Search Results
Listing 1 - 3 of 3
Publication
Adaptive convolution kernel for artificial neural networks (Academic Press Inc., 2021-02)
Tek, Faik Boray; Çam, İlker; Karlı, Deniz
Many deep neural networks are built by stacking convolutional layers of fixed, single-size (often 3 × 3) kernels. This paper describes a method for learning the size of convolutional kernels, so that a single layer can contain kernels of varying sizes. The method uses a differentiable, and therefore backpropagation-trainable, Gaussian envelope that can grow or shrink within a base grid. Our experiments compared the proposed adaptive layers to ordinary convolution layers in a simple two-layer network, a deeper residual network, and a U-Net architecture. Results on popular image classification datasets such as MNIST, MNIST-CLUTTERED, CIFAR-10, Fashion, and "Faces in the Wild" showed that adaptive kernels can provide statistically significant improvements over ordinary convolution kernels. A segmentation experiment on the Oxford-Pets dataset demonstrated that replacing the ordinary convolution layers of a U-shaped network with 7 × 7 adaptive layers can improve its learning performance and ability to generalize.

Publication
Odaklanan nöron (Focusing neuron) (IEEE, 2017-06-27)
Çam, İlker; Tek, Faik Boray
In a conventional artificial neural network, the topology is not flexible enough to change during training: every neuron and every independent connection weight in the network is part of the solution function. The focusing neuron proposed in this paper makes use of a focusing function from which mutually dependent coefficients are drawn. By changing its focus position and aperture, the neuron can change the set of neurons from which it gathers activation. Thanks to this property, it can form a flexible and dynamic network topology, and it can be trained with the standard backpropagation algorithm. In our experiments, a network built from focusing neurons achieved higher accuracy than a fully connected artificial neural network.

Publication
Adaptive locally connected recurrent unit (ALCRU) (Springer Science and Business Media Deutschland GmbH, 2025-07-03)
Özçelik, Şuayb Talha; Tek, Faik Boray
Research has shown that adaptive locally connected neurons outperform their fully connected (dense) counterparts, motivating the development of the Adaptive Locally Connected Recurrent Unit (ALCRU). ALCRU modifies the Simple Recurrent Neuron Model (SimpleRNN) by incorporating spatial coordinate spaces for the input and hidden state vectors, facilitating the learning of parametric local receptive fields. These modifications add four trainable parameters per neuron, resulting in a minor increase in computational complexity. ALCRU is implemented in standard frameworks and trained with backpropagation-based optimizers. We evaluate ALCRU on diverse benchmark datasets, including IMDb for sentiment analysis, AdditionRNN for sequence modelling, and the Weather dataset for time-series forecasting. Results show that ALCRU achieves accuracy and loss metrics comparable to GRU and LSTM while consistently outperforming SimpleRNN. In particular, experiments with longer sequence lengths on AdditionRNN and larger input dimensions on IMDb highlight ALCRU's superior scalability and efficiency on complex data sequences. In terms of computational efficiency, ALCRU demonstrates a considerable speed advantage over gated models such as LSTM and GRU, though it is slower than SimpleRNN. These findings suggest that adaptive local connectivity enhances both the accuracy and efficiency of recurrent neural networks, offering a promising alternative to standard architectures.
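
As a rough illustration of the Gaussian-envelope idea described in the first result, the sketch below masks a fixed-size kernel grid with a per-output-channel Gaussian whose width is trained by backpropagation, so each kernel can effectively grow toward, or shrink inside, the base grid. The class name AdaptiveConv2d, the softplus parameterization of sigma, and the initialization scheme are assumptions made for illustration; they are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Hypothetical conv layer whose kernels are masked by a trainable
    Gaussian envelope, loosely following the paper's description."""
    def __init__(self, in_ch, out_ch, base_size=7):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, base_size, base_size) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # One trainable envelope width per output channel (kept positive via softplus).
        self.log_sigma = nn.Parameter(torch.zeros(out_ch))
        # Fixed grid coordinates centred on the kernel, e.g. -3..3 for base_size=7.
        r = torch.arange(base_size) - (base_size - 1) / 2
        yy, xx = torch.meshgrid(r, r, indexing="ij")
        self.register_buffer("dist2", xx**2 + yy**2)
        self.padding = base_size // 2

    def forward(self, x):
        sigma = F.softplus(self.log_sigma) + 1e-3                       # (out_ch,)
        env = torch.exp(-self.dist2 / (2 * sigma[:, None, None] ** 2))  # (out_ch, k, k)
        masked = self.weight * env[:, None, :, :]                       # broadcast over in_ch
        return F.conv2d(x, masked, self.bias, padding=self.padding)

layer = AdaptiveConv2d(3, 16)
out = layer(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```

Because the envelope is differentiable in sigma, the gradient of the loss flows into the kernel-size parameter just like any other weight, which is what lets the size be learned rather than fixed by hand.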

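The focusing neuron of the second result can be pictured along the same lines: a dense layer in which each neuron's incoming weights are gated by a soft window with a trainable focus position and aperture, so the neuron chooses which inputs it gathers activation from. The sketch below is a minimal interpretation under that assumption; FocusingLayer and the [0, 1] coordinate convention are hypothetical choices, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocusingLayer(nn.Module):
    """Hypothetical dense layer of 'focusing' neurons: each neuron attends
    to a soft window over the input, with trainable centre and aperture."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Focus position (mu) and aperture (sigma) per neuron, on a [0, 1] axis.
        self.mu = nn.Parameter(torch.linspace(0.0, 1.0, out_features))
        self.log_sigma = nn.Parameter(torch.full((out_features,), -1.0))
        self.register_buffer("pos", torch.linspace(0.0, 1.0, in_features))

    def forward(self, x):
        sigma = F.softplus(self.log_sigma) + 1e-3
        # Gaussian focus window, shape (out_features, in_features).
        win = torch.exp(-(self.pos[None, :] - self.mu[:, None]) ** 2
                        / (2 * sigma[:, None] ** 2))
        return x @ (self.weight * win).t() + self.bias
```

Since mu and sigma are ordinary parameters, backpropagation can move a neuron's window and widen or narrow it during training, which is the flexible, dynamic topology the abstract describes.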

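For the third result, one plausible reading of the abstract is a SimpleRNN cell whose input and recurrent weight matrices are each masked by a per-neuron Gaussian receptive field over a 1-D coordinate space, which would account for the four extra trainable parameters per neuron (a centre and a width for each of the two fields). The sketch below encodes that assumption; ALCRUCell and all parameterization details are illustrative guesses, not the published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ALCRUCell(nn.Module):
    """Hypothetical adaptive locally connected recurrent cell: a SimpleRNN
    cell whose input and recurrent weights are masked by per-neuron Gaussian
    receptive fields over 1-D coordinate spaces."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.w_x = nn.Parameter(torch.randn(hidden_size, input_size) * 0.05)
        self.w_h = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.05)
        self.bias = nn.Parameter(torch.zeros(hidden_size))
        # Four parameters per neuron: (mu, sigma) over inputs and over hidden units.
        self.mu_x = nn.Parameter(torch.linspace(0.0, 1.0, hidden_size))
        self.mu_h = nn.Parameter(torch.linspace(0.0, 1.0, hidden_size))
        self.ls_x = nn.Parameter(torch.full((hidden_size,), -1.0))
        self.ls_h = nn.Parameter(torch.full((hidden_size,), -1.0))
        self.register_buffer("px", torch.linspace(0.0, 1.0, input_size))
        self.register_buffer("ph", torch.linspace(0.0, 1.0, hidden_size))

    @staticmethod
    def _window(pos, mu, log_sigma):
        sigma = F.softplus(log_sigma) + 1e-3
        return torch.exp(-(pos[None, :] - mu[:, None]) ** 2
                         / (2 * sigma[:, None] ** 2))

    def forward(self, x, h):
        wx = self.w_x * self._window(self.px, self.mu_x, self.ls_x)
        wh = self.w_h * self._window(self.ph, self.mu_h, self.ls_h)
        return torch.tanh(x @ wx.t() + h @ wh.t() + self.bias)

cell = ALCRUCell(8, 16)
h = torch.zeros(4, 16)
for t in range(10):                      # unroll over a toy sequence
    h = cell(torch.randn(4, 8), h)
```

The masked matrix multiplications keep the cell's cost close to a SimpleRNN's single gemm per step, which is consistent with the abstract's claim of a speed advantage over gated cells such as LSTM and GRU.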