Article Collection | Department of Computer Engineering
Recent Submissions
Publication: A multi-criteria evaluation of cybersecurity incident management frameworks: integrating AHP, CMMI and SWOT (Karyay Karadeniz Yayımcılık Ve Organizasyon Ticaret Limited Şirketi, 2026-01-15)
Ağar, Hasan Çağlar; Çeliktaş, Barış

With the growing complexity and frequency of cybersecurity incidents, the selection of an appropriate incident management framework has emerged as a strategic imperative and a nontrivial decision-making problem for organizations operating across diverse sectors. This study presents a multi-dimensional evaluation of four globally recognized frameworks and standards (ISO 27035, NIST 800-61, ITIL v4, and PCI DSS) to determine their effectiveness across 10 rigorously selected key performance parameters. The initial stage of the study involved the identification of 20 preliminary parameters through expert input and literature synthesis. These were then evaluated by 70 cybersecurity professionals using a hybrid decision-making model combining Likert-scale scoring, standard-deviation filtering, CV scores, Z-score normalization, and the Analytic Hierarchy Process (AHP) for pairwise comparisons. The top 10 key parameters were derived from the calculated priority weights. To assess each framework, we applied the Capability Maturity Model Integration (CMMI) and visualized the results via radar charts and heatmaps, offering comparative insights into operational maturity. Additionally, a SWOT analysis was conducted to examine strategic positioning and identify opportunities for improvement. The outcomes not only provide a practical benchmarking guide for practitioners but also introduce a replicable, evidence-based methodology for academic and industry adoption.
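The AHP prioritization step mentioned above can be sketched generically. This is a textbook AHP computation (geometric-mean prioritization with Saaty's consistency check), not the study's actual 70-expert data; the 3x3 comparison matrix below is purely hypothetical.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights via the geometric-mean (logarithmic least squares) method."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; CR < 0.10 is conventionally acceptable."""
    n = pairwise.shape[0]
    w = ahp_weights(pairwise)
    lam_max = np.mean((pairwise @ w) / w)     # principal-eigenvalue estimate
    ci = (lam_max - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return ci / ri

# Hypothetical pairwise comparisons of three parameters (illustrative values only)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)   # priority weights, summing to 1
```

In an actual study the matrix would be filled from expert judgments and checked for consistency before the weights are used to rank parameters.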
This work offers a novel and structured lens for evaluating incident management maturity, addressing the pressing need for strategic alignment, automation integration, and adaptive resilience in cybersecurity operations.

Publication: Hierarchical secure key assignment scheme (Public Library of Science, 2026-02-18)
Çeliktaş, Barış; Çelikbilek, İbrahim; Güzey, Süeda; Özdemir, Enver

This work presents a novel hierarchical key assignment mechanism for access control, designed to be computationally lightweight and optimized for digital environments with structured access policies. By leveraging orthogonal projection and distributing a basis to each group, it enables flexible and efficient left-to-right and top-down access structures. The scheme ensures that parent groups can derive the secret keys of their child groups while preventing unauthorized reverse access. It is resilient against collusion attacks and privilege escalation, offering robust key recovery and indistinguishability properties. Moreover, it guarantees strong key indistinguishability under adversarial models and facilitates a secure rekeying process without reliance on a trusted third party. To demonstrate practical efficiency, we provide a full analytical complexity evaluation showing that key derivation requires at most O(n_i^2) operations, where n_i is the dimension of the assigned subspace. For typical deployment parameters used in the experiments, the total key material per user remains compact (approximately 3,072 bits), significantly smaller than well-known post-quantum schemes such as Dilithium-5 (38,912 bits). The storage requirement scales linearly with the number of groups (ck + 1 bases for c groups with at most k members), ensuring that even large hierarchies remain lightweight. Our evaluation further shows that selective rekeying affects only the descendants of the modified group, resulting in a communication overhead of O(m′λ) bits, where m′ is the number of affected users and λ is the key length.
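The primitive the scheme builds on, orthogonal projection onto a subspace, can be sketched with textbook linear algebra. This is not the paper's key-derivation construction; it only illustrates projecting onto span(B) for an assumed basis B, with arbitrary dimensions.

```python
import numpy as np

def projector(B: np.ndarray) -> np.ndarray:
    """Orthogonal projector P = B (B^T B)^{-1} B^T onto the column space of B."""
    return B @ np.linalg.solve(B.T @ B, B.T)

rng = np.random.default_rng(42)
B = rng.standard_normal((6, 2))   # basis of a 2-dimensional subspace of R^6
P = projector(B)

v = rng.standard_normal(6)
pv = P @ v                        # component of v lying in span(B)
```

Forming P costs O(n^2 k) and applying it O(n^2), consistent with the quadratic per-derivation cost the abstract reports for a subspace of dimension n_i.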
These results collectively highlight the scheme's scalability, low storage footprint, and suitability for large access hierarchies.

Publication: Edge detection using artificial bee colony algorithm (ABC) (IACSIT, 2013-11-21)
Yiğitbaşı, Elif Deniz; Akhan Baykan, Nurdan

Edge detection is an important application area in image processing. Image processing is now exploited in many fields; for this reason, its methods are improved continually, and computer vision systems are developed to reduce errors. Optimization algorithms have been used to obtain better results in many studies. In this paper, the Artificial Bee Colony (ABC) optimization algorithm is used for edge detection on gray-scale images. First, the ABC algorithm is explained. Next, edge detection and edge detection with the ABC algorithm are described. Finally, the results are presented; they show that the proposed method can be applied to edge detection operations.

Publication: Evaluating the efficiency of latent spaces via the coupling-matrix (Cornell Univ, 2025-09-08)
Yavuz, Mehmet Can; Yanıkoğlu, Berrin

A central challenge in representation learning is constructing latent embeddings that are both expressive and efficient. In practice, deep networks often produce redundant latent spaces where multiple coordinates encode overlapping information, reducing effective capacity and hindering generalization. Standard metrics such as accuracy or reconstruction loss provide only indirect evidence of such redundancy and cannot isolate it as a failure mode. We introduce a redundancy index, denoted ρ(C), that directly quantifies inter-dimensional dependencies by analyzing coupling matrices derived from latent representations and comparing their off-diagonal statistics against a normal distribution via the energy distance. The result is a compact, interpretable, and statistically grounded measure of representational quality.
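A minimal reading of such a redundancy index can be sketched as follows. The paper's exact coupling-matrix construction and normalization may differ; here, as an assumption, sample correlations serve as the coupling matrix, off-diagonal entries are Fisher-transformed (approximately standard normal under independence), and their energy distance to a standard-normal reference sample is reported.

```python
import numpy as np

def energy_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Szekely's energy distance between two 1-D samples."""
    d_uv = np.abs(u[:, None] - v[None, :]).mean()
    d_uu = np.abs(u[:, None] - u[None, :]).mean()
    d_vv = np.abs(v[:, None] - v[None, :]).mean()
    return float(np.sqrt(max(2.0 * d_uv - d_uu - d_vv, 0.0)))

def redundancy_index(Z: np.ndarray, seed: int = 0) -> float:
    """Higher values indicate stronger inter-dimensional coupling in latent codes Z (n x d)."""
    n = Z.shape[0]
    C = np.corrcoef(Z, rowvar=False)                    # coupling (correlation) matrix
    off = C[~np.eye(C.shape[0], dtype=bool)]            # off-diagonal entries
    z = np.arctanh(np.clip(off, -0.999, 0.999)) * np.sqrt(n - 3)  # ~N(0,1) if independent
    ref = np.random.default_rng(seed).standard_normal(off.size)
    return energy_distance(z, ref)

rng = np.random.default_rng(1)
Z_indep = rng.standard_normal((2000, 8))                # independent coordinates
Z_redund = np.hstack([Z_indep[:, :4], Z_indep[:, :4]])  # duplicated coordinates -> redundant
```

Under this reading, a latent space with duplicated coordinates scores far higher than one with independent coordinates, matching the intended behavior of the index.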
We validate ρ(C) across discriminative and generative settings on MNIST variants, Fashion-MNIST, CIFAR-10, and CIFAR-100, spanning multiple architectures and hyperparameter optimization strategies. Empirically, low ρ(C) reliably predicts high classification accuracy or low reconstruction error, while elevated redundancy is associated with performance collapse. Estimator reliability grows with latent dimension, yielding natural lower bounds for reliable analysis. We further show that Tree-structured Parzen Estimators (TPE) preferentially explore low-ρ regions, suggesting that ρ(C) can guide neural architecture search and serve as a redundancy-aware regularization target. By exposing redundancy as a universal bottleneck across models and tasks, ρ(C) offers both a theoretical lens and a practical tool for evaluating and improving the efficiency of learned representations.

Publication: Geopolitical parallax: beyond Walter Lippmann just after large language models (Cornell Univ, 2025-08-27)
Yavuz, Mehmet Can; Kabir, Humza Gohar; Özkan, Aylin

Objectivity in journalism has long been contested, oscillating between ideals of neutral, fact-based reporting and the inevitability of subjective framing. With the advent of large language models (LLMs), these tensions are now mediated by algorithmic systems whose training data and design choices may themselves embed cultural or ideological biases. This study investigates geopolitical parallax, a systematic divergence in news quality and subjectivity assessments, by comparing article-level embeddings from Chinese-origin (Qwen, BGE, Jina) and Western-origin (Snowflake, Granite) model families. We evaluate both on a human-annotated news quality benchmark spanning fifteen stylistic, informational, and affective dimensions, and on parallel corpora covering politically sensitive topics, including Palestine and reciprocal China–United States coverage.
Using logistic regression probes and matched-topic evaluation, we quantify per-metric differences in predicted positive-class probabilities between model families. Our findings reveal consistent, nonrandom divergences aligned with model origin. In Palestine-related coverage, Western models assign higher subjectivity and positive-emotion scores, while Chinese models emphasize novelty and descriptiveness. Cross-topic analysis shows asymmetries in structural quality metrics (Chinese-on-US coverage scoring notably lower in fluency, conciseness, technicality, and overall quality), contrasted by higher negative-emotion scores. These patterns align with media bias theory and our distinction between semantic, emotional, and relational subjectivity, and extend the LLM bias literature by showing that geopolitical framing effects persist in downstream quality assessment tasks. We conclude that LLM-based media evaluation pipelines require cultural calibration to avoid conflating content differences with model-induced bias.

Publication: Evaluation of password hashing competition finalists: performance, security, compliance mapping, and post-quantum readiness (Karyay Karadeniz Yayımcılık Ve Organizasyon Ticaret Limited Şirketi, 2025-11-15)
Ulutaş, Erdem; Çeliktaş, Barış

Password hashes and key derivation functions (KDFs) are central to authentication and cryptographic security schemes crafted to defend user credentials from brute-force attacks and unauthorized access. Password hashing algorithms such as PBKDF2, bcrypt, and scrypt remain very popular today, but they fall short in the face of modern hardware acceleration, parallel processing, and advanced cryptanalytic attacks. To address these shortcomings, the Password Hashing Competition (PHC) was launched in 2013 and attracted 22 candidate password hashing functions. After thorough evaluation, 9 finalists were selected based on their security, speed, memory-friendliness, flexibility, and efficiency.
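For context, two of the pre-PHC baselines named above are available in Python's standard library, which makes the iteration-hard vs. memory-hard contrast easy to see. Parameters here are illustrative only, and Argon2 (the PHC winner) is omitted because it requires a third-party package.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # per-user random salt

# PBKDF2: iteration-hard only; cheap for GPU/ASIC attackers to parallelize.
dk_pbkdf2 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

# scrypt: memory-hard; n=2**14, r=8 costs about 16 MiB of RAM per guess,
# which is the property that raises the attacker's hardware cost.
dk_scrypt = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

The memory-hardness dimension is precisely what distinguishes finalists such as Argon2, Lyra2, and yescrypt from the older iteration-hard designs.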
This study evaluates the nine PHC finalists (Argon2, battcrypt, Catena, Lyra2, MAKWA, Parallel, POMELO, Pufferfish, and yescrypt) through survey findings and performance benchmarks. We evaluate these functions from an architectural standpoint and study their security features, memory hardness, performance trade-offs, and practical usage. We also compare the finalists with traditional password hashing functions to highlight their advantages and limitations, and we investigate the post-quantum question for password hashing: the effectiveness of these functions against quantum attacks, their place in a post-quantum cryptographic portfolio, and the role of peppering as an additional security measure. In addition, we perform a comprehensive compliance mapping of the PHC finalists against major global standards and regulations such as NIST SP 800-63B, OWASP ASVS, PCI DSS, GDPR, KVKK, and ISO/IEC 27001, highlighting their practical suitability for secure deployment in regulated environments. Finally, we provide usage recommendations for these functions for web authentication, KDFs, and embedded platforms. This paper serves as a reference for researchers, developers, and security engineers, while also introducing a compliance-aware, post-quantum-ready framework that bridges cryptographic design with regulatory and deployment needs.

Publication: An analysis of enterprise-level cloud transition barriers within the Technology-Organization-Environment (TOE) framework and strategic solution proposals (Gazi Üniversitesi, 2025-10-31)
Çeliktaş, Barış; Birgin, Berat; Tok, Mevlüt Serkan

Enterprise-level transitions to cloud service providers are frequently delayed or disrupted due to the multilayered nature of technical, organizational, and legal barriers. This study classifies these obstacles within the Technology-Organization-Environment (TOE) theoretical framework and provides a comprehensive analysis.
Methodologically, a triangulated data-source approach was adopted, combining a systematic literature review, the 2025 Flexera Cloud Report, and Cloud Adoption Framework (CAF) documentation from major providers such as AWS, Azure, and Google Cloud. Findings indicate that technological barriers, particularly cryptographic complexity, cost unpredictability, and weak system integration, are the most dominant. These barriers were visually modeled, and the structural interdependencies among five core cryptographic components (key management, secure computation, algorithm selection, access control, and regulatory compliance) were illustrated through a flow diagram. By aligning FinOps and compliance-oriented solution strategies with the TOE framework, the study offers a strategic roadmap for decision-makers and cloud architects planning cloud adoption. It links conceptual models to applied practices, providing structured support for organizations seeking to mature their cloud strategy.

Publication: Unsupervised textile defect detection using convolutional neural networks (Cornell Univ, 2023-11-30)
Koulali, Imane; Eskil, Mustafa Taner

In this study, we propose a novel motif-based approach for unsupervised textile anomaly detection that combines the benefits of traditional convolutional neural networks with those of an unsupervised learning paradigm. It consists of five main steps: preprocessing, automatic pattern period extraction, patch extraction, feature selection, and anomaly detection. The approach uses a new dynamic, heuristic method for feature selection that avoids the drawbacks of initializing the number of filters (neurons) and their weights, as well as those of the backpropagation mechanism, such as vanishing gradients, which are common in state-of-the-art methods. The design and training of the network are performed in a dynamic, input-domain-based manner, so no ad-hoc configuration is required.
Before building the model, only the number of layers and the stride are defined. We do not initialize the weights randomly, nor do we define the filter size or number of filters as conventionally done in CNN-based approaches. This reduces the effort and time spent on hyperparameter initialization and fine-tuning. Only one defect-free sample is required for training, and no further labeled data are needed. The trained network is then used to detect anomalies on defective fabric samples. We demonstrate the effectiveness of our approach on the Patterned Fabrics benchmark dataset. Our algorithm yields reliable and competitive results (on recall, precision, accuracy, and F1-measure) compared to state-of-the-art unsupervised approaches, in less time, with efficient training in a single epoch and a lower computational cost.

Publication: Variational self-supervised learning (Cornell Univ, 2025-04-06)
Yavuz, Mehmet Can; Yanıkoğlu, Berrin

We present Variational Self-Supervised Learning (VSSL), a novel framework that combines variational inference with self-supervised learning to enable efficient, decoder-free representation learning. Unlike traditional VAEs that rely on input reconstruction via a decoder, VSSL symmetrically couples two encoders with Gaussian outputs. A momentum-updated teacher network defines a dynamic, data-dependent prior, while the student encoder produces an approximate posterior from augmented views. The reconstruction term in the ELBO is replaced with a cross-view denoising objective, preserving the analytical tractability of the Gaussian KL divergence. We further introduce cosine-based formulations of the KL and log-likelihood terms to enhance semantic alignment in high-dimensional latent spaces. Experiments on CIFAR-10, CIFAR-100, and ImageNet-100 show that VSSL achieves competitive or superior performance to leading self-supervised methods, including BYOL and MoCo V3.
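The analytically tractable Gaussian KL term that such an ELBO retains has a standard closed form for diagonal Gaussians; a minimal sketch follows (the cosine-based variants the abstract mentions are not reproduced here).

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag e^{logvar_q}) || N(mu_p, diag e^{logvar_p}) ),
    summed over latent dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Toy 4-d example: student posterior vs. a teacher-defined prior.
mu_q, lv_q = np.zeros(4), np.zeros(4)
mu_p, lv_p = np.full(4, 0.5), np.zeros(4)
kl = kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p)   # 0.5 * 4 * 0.5^2 = 0.5
```

Because the teacher's prior is data-dependent rather than a fixed N(0, I), both mean and log-variance arguments vary per sample during training.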
VSSL offers a scalable, probabilistically grounded approach to learning transferable representations without generative reconstruction, bridging the gap between variational modeling and modern self-supervised techniques.

Publication: Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples (Cornell Univ, 2021-02-13)
Tuna, Ömer Faruk; Çatak, Ferhat Özgür; Eskil, Mustafa Taner

Deep neural network architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, almost all research so far has concentrated on utilising the model loss function to craft adversarial examples or create robust models. This study explores the use of quantified epistemic uncertainty obtained from Monte-Carlo Dropout sampling for adversarial attack purposes, perturbing the input toward regions the model has not seen before. We propose new attack ideas based on the epistemic uncertainty of the model. Our results show that our hybrid attack approach increases the attack success rate from 82.59% to 85.40%, from 82.86% to 89.92%, and from 88.06% to 90.03% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.

Publication: Sarcasm detection on news headlines using transformers (Springer, 2025-09-07)
Gümüşçekiçci, Gizem; Dehkharghani, Rahim

Sarcasm poses a linguistic challenge due to its figurative nature, where the intended meaning contradicts the literal interpretation. Sarcasm is prevalent in human communication, affecting interactions in literature, social media, news, e-commerce, and beyond.
Identifying the true intent behind sarcasm is challenging but essential for applications in sentiment analysis. Detecting sarcasm in written text is a challenging task that has attracted many researchers in recent years. This paper attempts to detect sarcasm in news headlines; journalists favor sarcastic headlines because they seem much more interesting to readers. In the proposed methodology, we experimented with Transformers, namely the BERT model, and several machine and deep learning models with different word and sentence embedding methods. The proposed approach inherently requires high-performance resources due to the use of large-scale pre-trained language models such as BERT. We also extended an existing news headlines dataset for sarcasm detection using augmentation techniques and annotated it with hand-crafted features. The proposed methodology outperforms almost all existing sarcasm detection approaches, with a 98.86% F1-score when applied to the extended news headlines dataset, which we have made publicly available on GitHub.

Publication: Enhancing real estate listings through image classification and enhancement: a comparative study (Multidisciplinary Digital Publishing Institute (MDPI), 2025-05-22)
Küp, Eyüp Tolunay; Sözdinler, Melih; Işık, Ali Hakan; Doksanbir, Yalçın; Akpınar, Gökhan

We extended real estate property listings on an online prop-tech platform, where images were classified into specified classes according to quality criteria; the necessary interventions were made by measuring the platform's appropriateness level and increasing the advertisements' visual appeal. A dataset of 3000 labeled images was used to compare different image classification models, including convolutional neural networks (CNNs), VGG16, residual networks (ResNets), and the LLaVA large language model (LLM). Each model's performance and benchmark results were measured to identify the most effective method.
In addition, the classification pipeline was expanded using image enhancement with contrastive unsupervised representation learning (CURL). This method assessed the impact of improved image quality on classification accuracy and the overall attractiveness of property listings. For each classification model, the performance was evaluated in binary conditions, with and without the application of CURL. The results showed that applying image enhancement with CURL enhances image quality and improves classification performance, particularly in models such as CNN and ResNet. The study results enable a better visual representation of real estate properties, resulting in higher-quality and engaging user listings. They also underscore the importance of combining advanced image processing techniques with classification models to optimize image presentation and categorization in the real estate industry. The extended platform offers information on the role of machine learning models and image enhancement methods in technology for the real estate industry. Also, an alternative solution that can be integrated into intelligent listing systems is proposed in this study to improve user experience and information accuracy. The platform proves that artificial intelligence and machine learning can be integrated for cloud-distributed services, paving the way for future innovations in the real estate sector and intelligent marketplace platforms.Yayın ANN activation function estimators for homomorphic encrypted inference(Institute of Electrical and Electronics Engineers Inc., 2025-06-13) Harb, Mhd Raja Abou; Çeliktaş, BarışHomomorphic Encryption (HE) enables secure computations on encrypted data, facilitating machine learning inference in sensitive environments such as healthcare and finance. 
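One way to picture a lightweight learned approximator of the kind this entry's title describes is a fixed-knot ReLU layer whose linear readout is fit by least squares on plaintext samples of Tanh. This is an illustrative stand-in, not the paper's trained estimator, and the HE integration itself is omitted.

```python
import numpy as np

# Fixed ReLU 'knots' act as a small hidden layer; only the linear readout is
# fit (ordinary least squares), which keeps the estimator lightweight.
knots = np.linspace(-4.0, 4.0, 64)

def features(x: np.ndarray) -> np.ndarray:
    # Basis: [1, x, relu(x - c_1), ..., relu(x - c_64)]
    return np.column_stack([np.ones_like(x), x, np.maximum(x[:, None] - knots, 0.0)])

x_train = np.linspace(-4.0, 4.0, 2001)
w, *_ = np.linalg.lstsq(features(x_train), np.tanh(x_train), rcond=None)

def tanh_estimator(x: np.ndarray) -> np.ndarray:
    """Piecewise-linear approximation of tanh on [-4, 4]."""
    return features(x) @ w

x_test = np.linspace(-4.0, 4.0, 503)
max_err = np.max(np.abs(tanh_estimator(x_test) - np.tanh(x_test)))
```

Since the fitted estimator is built only from additions, comparisons (ReLU), and a linear layer, it is far friendlier to encrypted evaluation than a high-degree polynomial.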
However, efficiently handling non-linear activation functions, specifically Sigmoid and Tanh, remains a significant computational challenge for encrypted inference with Artificial Neural Networks (ANNs). This study introduces a lightweight, ANN-based estimator designed to accurately approximate activation functions under homomorphic encryption. Unlike traditional polynomial and piecewise-linear approximations, the proposed ANN estimators achieve superior accuracy without the computational overhead associated with bootstrapping or high-degree polynomial techniques. The estimators are trained on plaintext data and seamlessly integrated into encrypted inference pipelines, significantly outperforming conventional methods. Experimental evaluations demonstrate notable improvements: the ANN estimators enhance accuracy by approximately 2% for Sigmoid and up to 73% for Tanh, improve F1-scores by approximately 2% for Sigmoid and up to 88% for Tanh, and markedly reduce the Mean Square Error (MSE), by up to 96%, compared to polynomial approximations. The ANN estimator achieves an accuracy of 97.70% and an AUC of 0.9997 when integrated into a CNN architecture on the MNIST dataset, and an accuracy of 85.25% with an AUC of 0.9459 on the UCI Heart Disease dataset during ciphertext inference. These results underscore the estimator's practical effectiveness and computational feasibility, making it suitable for secure and efficient ANN inference in encrypted environments.

Publication: Relationships among organizational-level maturities in artificial intelligence, cybersecurity, and digital transformation: a survey-based analysis (Institute of Electrical and Electronics Engineers Inc., 2025-05-19)
Kubilay, Burak; Çeliktaş, Barış

The rapid development of digital technology across industries has highlighted the growing need for enhanced competencies in Artificial Intelligence (AI), Cybersecurity (CS), and Digital Transformation (DT).
While there is extensive research on each of these domains in isolation, few studies have investigated their relationship and joint impact on organizational maturity. This study addresses this gap by analyzing the relationships among the maturity levels of AI, CS, and DT at the organizational level using Structural Equation Modeling (SEM) and descriptive statistical methods. A mixed-methods design combines quantitative survey data with synthetic modeling techniques to assess organizational preparedness. The findings demonstrate significant bidirectional correlations among AI, CS, and DT, with the technology and finance sectors being more advanced than government and education. The research highlights the necessity of an integrated AI-CS strategy and provides actionable recommendations for increasing investment in these domains. In contrast to preceding fragmented evaluations, this research establishes a comprehensive, empirically grounded framework that acts as a strategic reference point for digital resilience. Follow-up studies will collect real-world industry data to support empirical validation and predictive ability in measuring AI and CS maturity. This research adds to the existing literature by bridging the gaps among fragmented digital maturity models and providing a consistent empirical base for organizations to thrive in an evolving technological environment.

Publication: Turkish sentiment analysis: a comprehensive review (Yildiz Technical University, 2024-08)
Altınel Girgin, Ayşe Berna; Gümüşçekiçci, Gizem; Birdemir, Nuri Can

Sentiment analysis (SA) is a very popular research topic in the text mining field. SA is the process of mining text to detect and extract its meaning; one of its key aspects is analyzing the body of a text to determine its polarity and understand the opinions it expresses. Substantial amounts of data are produced by online resources such as social media sites, blogs, and news sites.
For this reason, it is impossible to process all of this data without automated systems, which has contributed to the rise in popularity of SA in recent years. SA is considered extremely essential, mostly due to its ability to analyze opinions at scale. SA, like Natural Language Processing (NLP) more broadly, has become an overwhelmingly popular topic as social media usage has increased. Data collected from social media has fueled numerous SA studies because it is versatile and accessible to the masses. This survey presents a comprehensive study categorizing past and present work by employed methodology and level of sentiment. Turkish SA studies are categorized into three groups: dictionary-based, machine-learning-based, and hybrid. Researchers can discover, compare, and analyze the properties of the different Turkish SA studies reviewed in this survey, as well as obtain information on the public datasets and dictionaries used in them. The main purpose of this study is to bring together Turkish SA approaches and methods while briefly explaining their concepts. This survey uniquely categorizes a large number of related articles and visualizes their properties. To the best of our knowledge, there is no comparably comprehensive and up-to-date survey that strictly covers Turkish SA with a focus on sentiment levels; this survey therefore contributes to the literature as the first of its kind.

Publication: Automated diagnosis of Alzheimer's Disease using OCT and OCTA: a systematic review (Institute of Electrical and Electronics Engineers Inc., 2024-08-06)
Turkan, Yasemin; Tek, Faik Boray; Arpacı, Fatih; Arslan, Ozan; Toslak, Devrim; Bulut, Mehmet; Yaman, Aylin

Retinal optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) have emerged as promising, non-invasive, and cost-effective modalities for the early diagnosis of Alzheimer's disease (AD).
However, a comprehensive review of automated deep learning techniques for diagnosing AD or mild cognitive impairment (MCI) using OCT/OCTA data has been lacking. We addressed this gap by conducting a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We systematically searched databases including Scopus, PubMed, and Web of Science, and identified 16 important studies from an initial set of 4006 references. We then analyzed these studies through a structured framework, focusing on the key aspects of deep learning workflows for AD/MCI diagnosis using OCT/OCTA, including dataset curation, model training, and validation methodologies. Our findings indicate a shift toward employing end-to-end deep learning models that directly analyze OCT/OCTA images for diagnosing AD/MCI, moving away from traditional machine learning approaches. However, we identified inconsistencies in data collection methods across studies, leading to varied outcomes. We emphasize the need for longitudinal studies on early AD and MCI diagnosis, along with further research on interpretability tools, to enhance model accuracy and reliability for clinical translation.

Publication: Mixed-reality photogrammetry in focus (Reed Business-Geo, 2024)
Kemper, Gerhard; Torkut, Çağın; Akça, Devrim; Grün, Armin

The use of mixed reality in photogrammetry software has resulted in a real-time 3D inspection system that allows users to remotely access, visualize, and measure the stereoscopic model simultaneously. An on-site operator flies a drone equipped with a stereo camera and, thanks to virtual reality headsets, experts can observe the object of interest without leaving the office.
This improves both cost-effectiveness and safety when inspecting large, critical structures.

Publication: Mental disorder and suicidal ideation detection from social media using deep neural networks (Springer, 2024-12)
Ezerceli, Özay; Dehkharghani, Rahim

Depression and suicidal ideation are global causes of life-threatening injury and death. Mental disorders have increased, especially among young people, in recent years, and early detection of such cases can prevent suicide attempts. Social media platforms provide users with an anonymous space to interact with others, making them a secure environment for discussing mental disorders. This paper proposes a solution for detecting depression and suicidal ideation using natural language processing and deep learning techniques. We used Transformers and a unique model to train the proposed system and applied it to three different datasets: SuicideDetection, CEASEv2.0, and SWMH. The proposed model is evaluated using accuracy, precision, recall, and the ROC curve. It outperforms the state-of-the-art on the SuicideDetection and CEASEv2.0 datasets, achieving F1 scores of 0.97 and 0.75, respectively. However, on the SWMH dataset, the proposed model is 4 percentage points behind the state-of-the-art, providing an F1 score of 0.68. In the real world, this project could help psychologists detect depression and suicidal ideation early, enabling more efficient treatment. Since the proposed model achieves state-of-the-art performance on two of the three datasets, it could be used to develop a screening tool for mental health professionals, or for individuals to assess their own risk of suicide. This could lead to early intervention and treatment, which could save lives.

Publication: Text-to-SQL: a methodical review of challenges and models (TÜBİTAK, 2024-05-20)
Kanburoğlu, Ali Buğra; Tek, Faik Boray

This survey focuses on Text-to-SQL, the automated translation of natural language queries into SQL queries.
Initially, we describe the problem and its main challenges. Then, following the PRISMA systematic review methodology, we survey the existing Text-to-SQL review papers in the literature. We apply the same method to extract proposed Text-to-SQL models and classify them with respect to the evaluation metrics and benchmarks used. We highlight the accuracies achieved by various models on Text-to-SQL datasets and discuss execution-guided evaluation strategies. We present insights into model training times and the implementations of different models. We also explore the availability of Text-to-SQL datasets in non-English languages. Additionally, we focus on large language model (LLM)-based approaches to the Text-to-SQL task: we examine LLM-based studies in the literature and subsequently evaluate the LLMs on the cross-domain Spider dataset. Finally, we conclude with a discussion of future directions for Text-to-SQL research, identifying potential areas of improvement and advancement in this field.

Publication: Analyst-aware incident assignment in security operations centers: a multi-factor prioritization and optimization framework (Uğur Şen, 2025-07-15)
Kılınçdemir, Eyüp Can; Çeliktaş, Barış

In this paper, we propose a comprehensive and scalable framework for incident assignment and prioritization in Security Operations Centers (SOCs). The proposed model aims to optimize SOC workflows by addressing key operational challenges such as analyst fatigue, alert overload, and inconsistent incident handling. Our framework evaluates each incident using a multi-factor scoring model that incorporates incident severity, service-level agreement (SLA) urgency, incident type, asset criticality, threat intelligence indicators, frequency of repetition, and a correlation score derived from historical incident data. We formalize this evaluation through a set of mathematical functions that compute a dynamic incident score and derive incident complexity.
In parallel, analyst profiles are quantified using the Analyst Load Factor (ALF) and the Experience Match Factor (EMF), two novel metrics that account for both workload distribution and expertise alignment. The incident-analyst matching process is expressed as a constrained optimization problem in which the final assignment score is computed by balancing incident priority with analyst suitability. This formulation enables automated, real-time assignment of incidents to the most appropriate analysts while ensuring both operational fairness and triage precision. The model is validated using algorithmic pseudocode, scoring tables, and a simplified case study, which illustrates the real-world applicability and decision logic of the framework in large-scale SOC environments. To validate the framework under real-world conditions, an empirical case study was conducted using 10 attack scenarios from the CICIDS2017 benchmark dataset. Overall, our contributions lie in the formalization of a dual-factor analyst scoring scheme and the integration of contextual incident features into an adaptive, rule-based assignment framework. To further strengthen operational value, future work will explore adaptive weighting mechanisms and integration with real-time SIEM pipelines. Additionally, feedback loops and supervised learning models will be incorporated to continuously refine analyst-incident matching and prioritization.
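The scoring-and-matching idea above can be sketched compactly. The factor names follow the abstract (severity, SLA urgency, asset criticality, threat intelligence, repetition; ALF and EMF for analysts), but the weights and the combining formula here are hypothetical placeholders, not the paper's calibrated model.

```python
from dataclasses import dataclass

# Hypothetical weights over normalized [0, 1] factor scores.
WEIGHTS = {"severity": 0.30, "sla_urgency": 0.25, "asset_criticality": 0.20,
           "threat_intel": 0.15, "repetition": 0.10}

def incident_score(factors: dict) -> float:
    """Weighted sum of normalized incident factors (dynamic incident score)."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

@dataclass
class Analyst:
    name: str
    load: float        # open work / capacity  -> Analyst Load Factor (ALF)
    expertise: float   # match with incident type in [0, 1] -> Experience Match Factor (EMF)

def assignment_score(incident: dict, analyst: Analyst) -> float:
    # Balance incident priority against analyst suitability (hypothetical form):
    # reward expertise, penalize load.
    return incident_score(incident) * analyst.expertise / (1.0 + analyst.load)

inc = {"severity": 0.9, "sla_urgency": 0.8, "asset_criticality": 0.7,
       "threat_intel": 0.5, "repetition": 0.2}
team = [Analyst("a1", load=0.9, expertise=0.9),
        Analyst("a2", load=0.2, expertise=0.7)]
best = max(team, key=lambda a: assignment_score(inc, a))
```

In this toy run the less-loaded analyst wins the assignment despite lower expertise, which illustrates how a load penalty counteracts pure expertise matching; the paper's constrained optimization generalizes this to fairness and SLA constraints across the whole queue.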