Search Results
Listing 1 - 3 of 3
Publication
Subset selection for tuning of hyper-parameters in artificial neural networks (IEEE, 2017)
Aki, K. K. Emre; Erkoç, Tuğba; Eskil, Mustafa Taner
Hyper-parameters of a machine learning architecture define its design. Tuning them is costly and, for large data sets, outright impractical, whether performed manually or algorithmically. In this study we propose a Neocognitron-based method for reducing the training set to a fraction of its size while preserving the dynamics and complexity of the domain. Our approach does not require processing the entire training set, making it feasible for larger data sets. In our experiments we successfully reduced the MNIST training set to less than 2.5% (1,489 images) while processing less than 10% of the 60K images. We showed that the reduced data set can be used to tune the number of hidden neurons in a multi-layer perceptron.

Publication
TurkEmbed: Turkish embedding model on natural language inference & sentence text similarity tasks (Institute of Electrical and Electronics Engineers Inc., 2025)
Ezerceli, Özay; Gümüşçekiçci, Gizem; Erkoç, Tuğba; Özenç, Berke
This paper introduces TurkEmbed, a novel Turkish language embedding model designed to outperform existing models, particularly in Natural Language Inference (NLI) and Semantic Textual Similarity (STS) tasks. Current Turkish embedding models often rely on machine-translated datasets, potentially limiting their accuracy and semantic understanding. TurkEmbed combines diverse datasets with advanced training techniques, including matryoshka representation learning, to achieve more robust and accurate embeddings. This approach lets the model adapt to various resource-constrained environments, offering faster encoding. Our evaluation on the Turkish STS-b-TR dataset, using Pearson and Spearman correlation metrics, demonstrates significant improvements in semantic similarity tasks. Furthermore, TurkEmbed surpasses the current state-of-the-art model, Emrecan, on the All-NLI-TR and STS-b-TR benchmarks by 1-4%. TurkEmbed promises to enhance the Turkish NLP ecosystem by providing a more nuanced understanding of language and facilitating advancements in downstream applications.

Publication
TurkEmbed4Retrieval: a retrieval-specific embedding model for Turkish (Institute of Electrical and Electronics Engineers Inc., 2025-08-15)
Ezerceli, Özay; Gümüşçekiçci, Gizem; Erkoç, Tuğba; Özenç, Berke
In this study we introduce TurkEmbed4Retrieval, which adapts TurkEmbed, a model originally developed for Natural Language Inference (NLI) and Semantic Textual Similarity (STS) tasks, to retrieval by fine-tuning it on the MS-Marco-TR dataset. The model is optimized with advanced training techniques such as Matryoshka representation learning and a specially designed negative-pair ranking loss function. Comprehensive experiments show that TurkEmbed4Retrieval outperforms the TurkishcolBERT model by 19-26% on retrieval metrics on the Scifact-TR dataset. Our model thus sets a new benchmark for Turkish information retrieval systems.
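Both TurkEmbed papers cite matryoshka representation learning, which trains embeddings so that leading prefixes remain usable as lower-dimensional representations. A minimal sketch of how such embeddings are typically consumed at inference time, assuming the usual truncate-and-renormalize recipe (the function name and toy vector are illustrative, not from the papers):

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Keep the first `dim` coordinates and re-normalize to unit length.

    A matryoshka-trained model packs the most important information into
    the leading coordinates, so this prefix is still a usable embedding
    on resource-constrained devices."""
    sub = np.asarray(vec, dtype=float)[:dim]
    norm = np.linalg.norm(sub)
    return sub / norm if norm > 0 else sub

# Toy 8-dimensional vector standing in for a model output (illustrative).
full = truncate_embedding([0.9, 0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.01], 8)
small = truncate_embedding(full, 4)  # 4-dim version for faster search
print(len(small), round(float(np.linalg.norm(small)), 6))  # 4 1.0
```

Because similarity is computed on the renormalized prefix, the same stored vectors can serve several speed/accuracy trade-offs without re-encoding.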

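The TurkEmbed4Retrieval abstract mentions a specially designed negative-pair ranking loss without detailing it. A common baseline for retrieval fine-tuning is the in-batch multiple-negatives ranking loss; the sketch below implements that baseline under the assumption it is representative, in plain NumPy with no training loop:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch multiple-negatives ranking loss.

    For anchor i, positives[i] is the correct match and every other
    positive in the batch acts as a negative. Cross-entropy over the
    scaled cosine-similarity matrix pulls true pairs together and pushes
    in-batch negatives apart."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = scale * (a @ p.T)  # (batch, batch) similarity logits
    m = sims.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(sims - m).sum(axis=1))
    log_probs = np.diag(sims) - logsumexp  # log-softmax at target i == i
    return float(-log_probs.mean())

# Identity "embeddings" as a toy batch (illustrative only).
aligned = mnr_loss(np.eye(4), np.eye(4))
mismatched = mnr_loss(np.eye(4), np.roll(np.eye(4), 1, axis=0))
```

With perfectly aligned pairs the loss is near zero, while shuffled positives are heavily penalized, which is the gradient signal that fine-tuning on query-passage pairs such as MS-Marco-TR exploits.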