Search Results

Listing 1 - 3 of 3
  • Publication
    Implicit theories and self-efficacy in an introductory programming course
    (Institute of Electrical and Electronics Engineers Inc, 2018-08) Tek, Faik Boray; Benli, Kristin Surpuhi; Deveci, Ezgi
    Contribution: This paper examined student effort and performance in an introductory programming course with respect to student-held implicit theories and self-efficacy. Background: Implicit theories and self-efficacy help in understanding academic success and must be considered when developing effective learning strategies for programming. Research Questions: Are implicit theories of intelligence and programming, and programming efficacy, related to each other and to student success in programming? Is it possible to predict student performance in a course using these constructs? Methodology: Two consecutive surveys (N = 100 and N = 81) were administered to non-CS engineering students at Işık University, Turkey. Findings: Implicit theories of programming aptitude and programming efficacy are interrelated and positively correlated with effort, performance, and previous failures in the course. Although it was not possible to predict student course grades, the data confirm that students who believe in an improvable programming aptitude have significantly higher programming efficacy, report more effort, and earn higher course grades. In addition, failed students tend to attribute their failure to a fixed programming aptitude; repeating students favor the fixed-aptitude theory and have lower programming efficacy, which increases the likelihood of further failure.
  • Publication
    Adaptive locally connected recurrent unit (ALCRU)
    (Springer Science and Business Media Deutschland GmbH, 2025-07-03) Özçelik, Şuayb Talha; Tek, Faik Boray
    Research has shown that adaptive locally connected neurons outperform their fully connected (dense) counterparts, motivating this study on the development of the Adaptive Locally Connected Recurrent Unit (ALCRU). ALCRU modifies the Simple Recurrent Neuron Model (SimpleRNN) by incorporating spatial coordinate spaces for input and hidden state vectors, facilitating the learning of parametric local receptive fields. These modifications add four trainable parameters per neuron, resulting in a minor increase in computational complexity. ALCRU is implemented using standard frameworks and trained with back-propagation-based optimizers. We evaluate the performance of ALCRU using diverse benchmark datasets, including IMDb for sentiment analysis, AdditionRNN for sequence modelling, and the Weather dataset for time-series forecasting. Results show that ALCRU achieves accuracy and loss metrics comparable to GRU and LSTM while consistently outperforming SimpleRNN. In particular, experiments with longer sequence lengths on AdditionRNN and increased input dimensions on IMDb highlight ALCRU’s superior scalability and efficiency in processing complex data sequences. In terms of computational efficiency, ALCRU demonstrates a considerable speed advantage over gated models like LSTM and GRU, though it is slower than SimpleRNN. These findings suggest that adaptive local connectivity enhances both the accuracy and efficiency of recurrent neural networks, offering a promising alternative to standard architectures.
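    The abstract above describes modifying a SimpleRNN cell with per-neuron coordinate parameters that define local receptive fields. As a minimal illustrative sketch (not the authors' implementation), one plausible reading is that each neuron's four trainable parameters are a centre and width over the input coordinate axis and a centre and width over the hidden-state coordinate axis, used as Gaussian masks on the dense weight matrices; all names below are hypothetical:

    ```python
    import numpy as np

    class ALCRUSketch:
        """Hypothetical sketch of an adaptive locally connected recurrent cell.

        Assumption: the "four trainable parameters per neuron" are a centre
        and width for the input axis plus a centre and width for the hidden
        axis, shaping Gaussian receptive-field masks over dense weights.
        """

        def __init__(self, input_dim, hidden_dim, rng=None):
            rng = rng or np.random.default_rng(0)
            self.W_x = rng.standard_normal((hidden_dim, input_dim)) * 0.1
            self.W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
            self.b = np.zeros(hidden_dim)
            # Fixed normalised coordinates for input and hidden positions.
            self.x_coords = np.linspace(0.0, 1.0, input_dim)
            self.h_coords = np.linspace(0.0, 1.0, hidden_dim)
            # Four trainable scalars per neuron: two centres, two widths.
            self.mu_x = rng.uniform(0.0, 1.0, hidden_dim)
            self.sigma_x = np.full(hidden_dim, 0.5)
            self.mu_h = rng.uniform(0.0, 1.0, hidden_dim)
            self.sigma_h = np.full(hidden_dim, 0.5)

        def _mask(self, mu, sigma, coords):
            # Gaussian receptive field: neuron i attends to positions near mu[i].
            return np.exp(-((coords[None, :] - mu[:, None]) ** 2)
                          / (2.0 * sigma[:, None] ** 2))

        def step(self, x, h):
            # SimpleRNN update with locally masked weight matrices.
            Wx = self.W_x * self._mask(self.mu_x, self.sigma_x, self.x_coords)
            Wh = self.W_h * self._mask(self.mu_h, self.sigma_h, self.h_coords)
            return np.tanh(Wx @ x + Wh @ h + self.b)
    ```

    In a real implementation the mask parameters would be registered as trainable variables in a framework such as TensorFlow or PyTorch so back-propagation can adapt each neuron's receptive field; the sketch only shows why the overhead is four scalars per neuron rather than per weight.
    
    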
  • Publication
    Text-to-SQL: a methodical review of challenges and models
    (TÜBİTAK, 2024-05-20) Kanburoğlu, Ali Buğra; Tek, Faik Boray
    This survey focuses on Text-to-SQL, automated translation of natural language queries into SQL queries. Initially, we describe the problem and its main challenges. Then, by following the PRISMA systematic review methodology, we survey the existing Text-to-SQL review papers in the literature. We apply the same method to extract proposed Text-to-SQL models and classify them with respect to used evaluation metrics and benchmarks. We highlight the accuracies achieved by various models on Text-to-SQL datasets and discuss execution-guided evaluation strategies. We present insights into model training times and implementations of different models. We also explore the availability of Text-to-SQL datasets in non-English languages. Additionally, we focus on large language model (LLM) based approaches for the Text-to-SQL task, where we examine LLM-based studies in the literature and subsequently evaluate the LLMs on the cross-domain Spider dataset. Finally, we conclude with a discussion of future directions for Text-to-SQL research, identifying potential areas of improvement and advancements in this field.