Search Results
Listing 1-4 of 4
Publication
Effective semi-supervised learning strategies for automatic sentence segmentation (Elsevier Science BV, 2018-04-01)
Dalva, Doğan; Güz, Ümit; Gürkan, Hakan
The primary objective of the sentence segmentation process is to determine the sentence boundaries in the stream of words output by an automatic speech recognizer. Statistical methods developed for sentence segmentation require a significant amount of labeled data, which is time-consuming, labor-intensive, and expensive to obtain. In this work, we propose new multi-view semi-supervised learning strategies for the sentence boundary classification problem using lexical, prosodic, and morphological information. The aim is to find effective semi-supervised machine learning strategies when only small sets of sentence-boundary-labeled data are available. We primarily investigate two semi-supervised learning approaches, called self-training and co-training. Different example selection strategies were also used for co-training, namely agreement, disagreement, and self-combined. Furthermore, we propose three-view and committee-based algorithms incorporating the agreement, disagreement, and self-combined strategies using three disjoint feature sets. We present comparative results of the different learning strategies on the sentence segmentation task. The experimental results show that sentence segmentation performance can be highly improved using the proposed multi-view learning strategies, since the data sets can be represented by three redundantly sufficient and disjoint feature sets. We show that the proposed strategies substantially improve the average baseline F-measure from 67.66% to 75.15% for spoken Turkish and from 64.84% to 66.32% for spoken English when only a small set of manually labeled data is available. (A toy co-training sketch appears after the listing.)

Publication
Adaptive convolution kernel for artificial neural networks (Academic Press Inc., 2021-02)
Tek, Faik Boray; Çam, İlker; Karlı, Deniz
Many deep neural networks are built by stacking convolutional layers of fixed, single-size (often 3 × 3) kernels. This paper describes a method for learning the size of convolutional kernels, providing varying-size kernels within a single layer. The method utilizes a differentiable, and therefore backpropagation-trainable, Gaussian envelope which can grow or shrink in a base grid. Our experiments compared the proposed adaptive layers to ordinary convolution layers in a simple two-layer network, a deeper residual network, and a U-Net architecture. The results on popular image classification datasets such as MNIST, MNIST-CLUTTERED, CIFAR-10, Fashion, and "Faces in the Wild" showed that the adaptive kernels can provide statistically significant improvements over ordinary convolution kernels. A segmentation experiment on the Oxford-Pets dataset demonstrated that replacing ordinary convolution layers in a U-shaped network with 7 × 7 adaptive layers can improve its learning performance and ability to generalize. (A minimal Gaussian-envelope convolution sketch appears after the listing.)

Publication
Quadratic programming for class ordering in rule induction (Elsevier Science BV, 2015-03-01)
Yıldız, Olcay Taner
Separate-and-conquer rule induction algorithms such as Ripper solve a K > 2 class problem by converting it into a sequence of K - 1 two-class problems. As a usual heuristic, the classes are fed into the algorithm in order of increasing prior probability. Although the heuristic works well in practice, there is much room for improvement. In this paper, we propose a novel approach to improve this heuristic. The approach transforms the ordering search problem into a quadratic optimization problem and uses the solution of the optimization problem to extract the optimal ordering. We compared the new Ripper (guided by the ordering found with our approach) with the original Ripper (guided by the heuristic ordering) on 27 datasets. Simulation results show that our approach produces rulesets that are significantly better than those produced by the original Ripper. (A toy sketch of the sequential two-class decomposition appears after the listing.)

Publication
An adaptive locally connected neuron model: Focusing neuron (Elsevier B.V., 2021-01-02)
Tek, Faik Boray
This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrated that focusing neurons can move their receptive fields towards more informative inputs. In the simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. The tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets. (A minimal focusing-layer sketch appears after the listing.)
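
The first abstract describes self-training and co-training with agreement, disagreement, and self-combined example selection over disjoint feature views. Below is a minimal, illustrative sketch of a co-training loop with the agreement strategy; the two synthetic feature views, the scikit-learn LogisticRegression classifiers, and the 0.9 confidence threshold are assumptions made for illustration, not the paper's actual setup.

```python
# Toy co-training with the "agreement" selection strategy (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic, disjoint feature views standing in for e.g. lexical vs. prosodic features.
n = 1000
X1 = rng.normal(size=(n, 20))
X2 = rng.normal(size=(n, 10))
y_true = (X1[:, 0] + X2[:, 0] > 0).astype(int)      # boundary (1) vs. non-boundary (0)

labeled = np.arange(50)                              # small labeled seed set
unlabeled = np.arange(50, n)
labels = np.full(n, -1)
labels[labeled] = y_true[labeled]

view1 = LogisticRegression(max_iter=1000)
view2 = LogisticRegression(max_iter=1000)

for _ in range(5):                                   # co-training iterations
    view1.fit(X1[labeled], labels[labeled])
    view2.fit(X2[labeled], labels[labeled])

    p1 = view1.predict_proba(X1[unlabeled])
    p2 = view2.predict_proba(X2[unlabeled])
    pred1, pred2 = p1.argmax(axis=1), p2.argmax(axis=1)
    conf = np.minimum(p1.max(axis=1), p2.max(axis=1))

    # Agreement strategy: promote unlabeled examples on which both views agree confidently.
    pick = np.where((pred1 == pred2) & (conf > 0.9))[0]
    if pick.size == 0:
        break
    labels[unlabeled[pick]] = view1.classes_[pred1[pick]]   # assign pseudo-labels
    labeled = np.concatenate([labeled, unlabeled[pick]])
    unlabeled = np.delete(unlabeled, pick)

print("labeled pool grew to", labeled.size, "examples")
```

The disagreement and self-combined strategies described in the abstract would differ only in the `pick` rule; everything else in the loop stays the same.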
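The second abstract describes a convolution kernel whose effective size is controlled by a differentiable Gaussian envelope over a fixed base grid. The PyTorch module below sketches that idea under stated assumptions (a 7 × 7 base grid, one trainable width per output filter, "same" padding); it is not the authors' implementation.

```python
# Sketch of a convolution whose effective kernel size is learned via a Gaussian envelope.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEnvelopeConv2d(nn.Module):
    """Raw weights on a fixed base grid are multiplied by a Gaussian envelope whose
    width is a trainable parameter, so backprop can grow or shrink the kernel."""

    def __init__(self, in_ch, out_ch, base_size=7, init_sigma=2.0):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(out_ch, in_ch, base_size, base_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # One trainable (log-)width per output filter.
        self.log_sigma = nn.Parameter(torch.full((out_ch,), math.log(init_sigma)))
        coords = torch.arange(base_size, dtype=torch.float32) - (base_size - 1) / 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("r2", xx ** 2 + yy ** 2)   # squared distance from kernel centre

    def forward(self, x):
        sigma = self.log_sigma.exp().view(-1, 1, 1, 1)
        envelope = torch.exp(-self.r2 / (2 * sigma ** 2))        # differentiable in sigma
        return F.conv2d(x, self.weight * envelope, self.bias, padding="same")

layer = GaussianEnvelopeConv2d(3, 16)
out = layer(torch.randn(2, 3, 32, 32))                # -> torch.Size([2, 16, 32, 32])
```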
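The third abstract concerns the order in which classes are fed to a separate-and-conquer learner such as Ripper, which reduces a K-class problem to K - 1 two-class problems. The sketch below shows only this decomposition together with the baseline increasing-prior-probability heuristic; the paper's quadratic-programming formulation for finding a better ordering is not reproduced here, and the function names are illustrative.

```python
# Sequential two-class decomposition used by separate-and-conquer learners (toy version).
import numpy as np
from collections import Counter

def order_by_increasing_prior(y):
    """Baseline heuristic from the abstract: rarest class first."""
    counts = Counter(y)
    return sorted(counts, key=counts.get)

def sequential_two_class_problems(X, y, ordering):
    """Yield (features, binary target) for each of the K - 1 stages; the last class
    in the ordering is covered by the default rule and gets no stage of its own."""
    remaining = np.ones(len(y), dtype=bool)
    for cls in ordering[:-1]:
        target = (y[remaining] == cls).astype(int)
        yield X[remaining], target
        remaining &= (y != cls)                 # examples of the separated class are removed

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = rng.choice(np.array(["a", "b", "c"]), size=300, p=[0.6, 0.3, 0.1])

for stage, (Xs, ts) in enumerate(sequential_two_class_problems(X, y, order_by_increasing_prior(y))):
    print(f"stage {stage}: {len(ts)} examples, {ts.sum()} positives")
```

In the paper's approach, the ordering returned by `order_by_increasing_prior` would instead be extracted from the solution of a quadratic optimization problem.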
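The fourth abstract describes a neuron that learns where its receptive field sits in the topological domain of its inputs. The sketch below implements one plausible reading of that idea: a dense-like layer whose per-unit weights are modulated by a Gaussian window with a trainable centre and width over flattened input positions. The window function and parameterization are assumptions, not the paper's exact focusing-neuron definition.

```python
# Sketch of a "focusing" layer: each unit learns where to look in the flattened input.
import math
import torch
import torch.nn as nn

class FocusingLinear(nn.Module):
    """Dense-like layer whose per-unit weights are modulated by a Gaussian window with a
    trainable centre (mu) and width (sigma) over normalized input positions in [0, 1]."""

    def __init__(self, in_features, out_features, init_sigma=0.25):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.mu = nn.Parameter(torch.rand(out_features, 1))        # focus centre per unit
        self.log_sigma = nn.Parameter(torch.full((out_features, 1), math.log(init_sigma)))
        self.register_buffer("pos", torch.linspace(0.0, 1.0, in_features).unsqueeze(0))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        window = torch.exp(-((self.pos - self.mu) ** 2) / (2 * sigma ** 2))   # (out, in)
        return x @ (self.weight * window).t() + self.bias

layer = FocusingLinear(784, 64)                     # e.g. flattened 28x28 images
out = layer(torch.randn(8, 784))                    # -> torch.Size([8, 64])
```

Because the window, not the weight matrix, decides which inputs matter, such a unit can shift its receptive field towards more informative inputs during training, which is the behaviour the abstract reports.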