11 results
Search Results
Showing 1 - 10 of 11
Publication: Electroencephalography signatures associated with developmental dyslexia identified using principal component analysis (Multidisciplinary Digital Publishing Institute (MDPI), 2025-08-27). Eroğlu, Günet; Harb, Mhd Raja Abou.

Background/Objectives: Developmental dyslexia is characterised by neuropsychological processing deficits and marked hemispheric functional asymmetries. To uncover latent neurophysiological features linked to reading impairment, we applied dimensionality reduction and clustering techniques to high-density electroencephalographic (EEG) recordings. We further examined the functional relevance of these features to reading performance under standardised test conditions. Methods: EEG data were collected from 200 children (100 with dyslexia and 100 age- and IQ-matched typically developing controls). Principal Component Analysis (PCA) was applied to high-dimensional EEG spectral power datasets to extract latent neurophysiological components. Twelve principal components, collectively accounting for 84.2% of the variance, were retained. K-means clustering was performed on the PCA-derived components to classify participants. Group differences in spectral power were evaluated, and correlations between principal component scores and reading fluency, measured by the TILLS Reading Fluency Subtest, were computed. Results: K-means clustering trained on PCA-derived features achieved a classification accuracy of 89.5% (silhouette coefficient = 0.67). Dyslexic participants exhibited significantly higher right parietal–occipital alpha (P8) power compared to controls (mean = 3.77 ± 0.61 vs. 2.74 ± 0.56; p < 0.001). Within the dyslexic group, PC1 scores were strongly negatively correlated with reading fluency (r = −0.61, p < 0.001), underscoring the functional relevance of EEG-derived components to behavioural reading performance.
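As a rough illustration of the PCA-plus-k-means pipeline described in the Methods above, the sketch below retains components up to a cumulative-variance threshold and then clusters the component scores. The synthetic two-group data, the deterministic centroid initialization, and the dimensions are illustrative assumptions, not the study's data or implementation:

```python
import numpy as np

def pca_retain(X, var_threshold=0.842):
    """Project centered data onto the fewest principal components whose
    cumulative explained variance reaches var_threshold."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = (S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(np.cumsum(var_ratio), var_threshold)) + 1
    return Xc @ Vt[:k].T, k

def kmeans(X, n_clusters=2, n_iter=50):
    """Plain Lloyd's algorithm with a deterministic spread-out init:
    assign each point to its nearest centroid, then update centroids."""
    centroids = X[np.linspace(0, len(X) - 1, n_clusters, dtype=int)].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two synthetic "groups" standing in for dyslexic vs. control feature vectors
group_a = rng.normal(0.0, 1.0, (100, 20))
group_b = rng.normal(2.5, 1.0, (100, 20))
X = np.vstack([group_a, group_b])
scores, k = pca_retain(X)
labels = kmeans(scores, n_clusters=2)
```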
Conclusions: PCA-derived EEG patterns can distinguish between dyslexic and typically developing children with high accuracy, revealing spectral power differences consistent with atypical hemispheric specialisation. These results suggest that EEG-derived neurophysiological features hold promise for early dyslexia screening. However, before EEG can be firmly established as a reliable molecular biomarker, further multimodal research integrating EEG with immunological, neurochemical, and genetic measures is warranted.

Publication: Extracting meaningful information from student surveys with NLP (Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, 2025-01-29). Pourjalil, Kajal; Ekin, Emine. Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Yüksek Lisans Programı; Işık University, School of Graduate Studies, Master’s Program in Computer Engineering.

This thesis applied NLP techniques to analyze and summarize bilingual student feedback collected via end-of-semester surveys. The dataset, which contained open-ended responses in both English and Turkish, required a model adept at preserving linguistic nuances across languages. The Llama 2-7b-hf model, which had been trained explicitly for text generation, was selected for its capability to produce coherent and contextually relevant summaries. Data preprocessing involved organizing metadata such as department, semester, course name, and section number, segregating comments by word count, and removing personal identifiers to ensure privacy. Shorter comments (fewer than ten words) were grouped and summarized using a pipeline from the Transformers library, while the model was fine-tuned with metadata-specific prompts to produce detailed summaries of longer comments. To further enhance analysis, sentiment classification was performed using the “cardiffnlp/twitter-roberta-base-sentiment” model, categorizing feedback into negative, neutral, and positive sentiments.
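The preprocessing step described above (redacting personal identifiers, then splitting comments by word count) can be sketched as follows. The ten-word threshold follows the thesis; the regex patterns for identifiers are illustrative assumptions:

```python
import re

WORD_LIMIT = 10  # comments below this length are grouped and summarized together

def redact_identifiers(comment: str) -> str:
    """Strip simple personal identifiers (emails, long ID numbers) before analysis."""
    comment = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", comment)
    comment = re.sub(r"\b\d{6,}\b", "[ID]", comment)  # long digit runs treated as IDs
    return comment

def bucket_comments(comments):
    """Split feedback into short (< WORD_LIMIT words) and long comments."""
    short, long_ = [], []
    for c in comments:
        c = redact_identifiers(c)
        (short if len(c.split()) < WORD_LIMIT else long_).append(c)
    return short, long_

short, long_ = bucket_comments([
    "Great course!",
    "Contact me at student@example.edu, ID 20231234, the lectures on "
    "recursion were far too fast for most of the class to follow.",
])
```

Short comments would then go to a generic summarization pipeline, while long ones feed the metadata-prompted model.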
Evaluation metrics included expert reviews, contextual relevance, and logical consistency with the dataset’s sentiment distribution. Compared to previous models, the Llama 2 model demonstrated superior performance in generating complete, coherent summaries while preserving the overall intent and tone of the comments. Ultimately, this research highlighted the effectiveness of LLMs in processing multilingual educational data and their potential to provide actionable insights for improving course content and student experiences.

Publication: Image super resolution using deep learning techniques (Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, 2024-09-02). El Ballouti, Salah Eddine; Eskil, Mustafa Taner. Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Yüksek Lisans Programı; Işık University, School of Graduate Studies, Master’s Program in Computer Engineering.

Image super resolution (SR) using deep learning techniques has become a critical area of research, with significant progress in improving image quality and detail. This thesis examines and contrasts eight advanced deep learning-based SR methods: CARN, EDSR, ESPCN, RCAN, RDN, SRCNN, SRGAN, and VDSR, using the DIV2K dataset. The evaluation covers multiple aspects to offer a thorough understanding of each method's effectiveness, efficiency, and structure. Performance measurements such as PSNR and SSIM are utilized for evaluating the fidelity of super-resolved images. Computational efficiency is evaluated based on inference time and memory requirements. Training time is analyzed, taking into account the speed of convergence for training on the DIV2K dataset. Model complexity is examined, exploring architectural details such as network depth and the integration of specialized elements like residual blocks and attention mechanisms.
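Of the fidelity metrics named above, PSNR has a simple closed form, 10·log10(MAX²/MSE); a minimal sketch (the 8×8 test images are illustrative):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110  # a single pixel off by 10 -> MSE = 100/64
value = psnr(ref, noisy)
```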
Additionally, the thesis explains in a clear and detailed manner the trade-offs between performance and complexity, discussing whether more complex architectures deliver significantly better results compared to simpler models and whether the computational cost justifies the improvements. Finally, a qualitative comparison is conducted to emphasize the strengths and weaknesses of each technique. Through this comprehensive analysis, this thesis offers insights into the field of deep learning-based image SR, assisting researchers and practitioners in choosing the most appropriate method for various applications.

Publication: Federated hybrid privacy-preserving movie recommendation system for internet-of-vehicles (Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, 2024-08-02). Şimşek, Musa; Erman Tüysüz, Ayşegül. Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Yüksek Lisans Programı; Işık University, School of Graduate Studies, Master’s Program in Computer Engineering.

In this research, we introduced a pioneering strategy to address the pressing privacy concerns associated with vehicular movie recommendation systems. As the demand for personalized entertainment options in vehicles increases, so does the need to protect user data. To tackle this challenge, we utilized the PyTorch framework to build a robust recommendation model from scratch. A key component of our approach was the addition of Laplace noise during the training process, which ensured differential privacy. This technique effectively safeguarded user data while simultaneously optimizing model performance, allowing us to maintain high levels of recommendation accuracy. Furthermore, we employed the Optuna hyperparameter optimization framework, which played a crucial role in enhancing the model's performance. By fine-tuning various parameters, we were able to elevate the overall efficiency of the system beyond the capabilities of the base model.
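A minimal sketch of the Laplace-noise idea: clipping bounds the update's L1 sensitivity, and the noise scale sensitivity/ε is the standard Laplace mechanism. The clipping rule and parameter values here are illustrative, not the thesis implementation:

```python
import numpy as np

def laplace_noisy_gradient(grad, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace noise calibrated to sensitivity/epsilon (epsilon-DP for
    one release). Gradients are clipped so the sensitivity bound holds."""
    rng = rng or np.random.default_rng()
    norm = np.abs(grad).sum()
    if norm > sensitivity:            # clip to enforce the L1 bound
        grad = grad * (sensitivity / norm)
    scale = sensitivity / epsilon     # Laplace scale b; smaller epsilon -> more noise
    return grad + rng.laplace(0.0, scale, size=grad.shape)

rng = np.random.default_rng(0)
grad = np.array([0.3, -0.2, 0.1])
noisy = laplace_noisy_gradient(grad, rng=rng)
```

The noise is zero-mean, so averaged over many updates the signal survives while any single update stays privatized.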
Our extensive experimentation utilized the MovieLens-1M benchmark movie dataset, which provided a solid basis for evaluating our approach. The results demonstrated a significant improvement over baseline models, validating the effectiveness of our privacy-preserving vehicular movie recommendation system. In addition to our centralised model, we conducted a comprehensive comparison with practical federated frameworks, including FedAvg, FedProx, and FedMedian. Our findings revealed that all federated models outperformed the centralised models by at least 2%, while also exhibiting shorter runtimes.

Publication: ANN activation function estimators for homomorphic encrypted inference (Institute of Electrical and Electronics Engineers Inc., 2025-06-13). Harb, Mhd Raja Abou; Çeliktaş, Barış.

Homomorphic Encryption (HE) enables secure computations on encrypted data, facilitating machine learning inference in sensitive environments such as healthcare and finance. However, efficiently handling non-linear activation functions, specifically Sigmoid and Tanh, remains a significant computational challenge for encrypted inference using Artificial Neural Networks (ANNs). This study introduces a lightweight, ANN-based estimator designed to accurately approximate activation functions under homomorphic encryption. Unlike traditional polynomial and piecewise linear approximations, the proposed ANN estimators achieve superior accuracy without the computational overhead associated with bootstrapping or high-degree polynomial techniques. These estimators are trained on plaintext data and seamlessly integrated into encrypted inference pipelines, significantly outperforming conventional methods.
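To see why fixed low-degree polynomial approximations of Tanh degrade, which is the weakness the learned estimators target, compare a degree-3 Taylor polynomial against the true function on a narrow versus a wide input range (this generic polynomial is for illustration; it is not one of the paper's baselines):

```python
import math

def tanh_taylor3(x: float) -> float:
    """Degree-3 Taylor expansion of tanh around 0: x - x^3/3."""
    return x - x ** 3 / 3

def approx_mse(lo: float, hi: float, n: int = 200) -> float:
    """Mean squared error of the polynomial against math.tanh on [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return sum((math.tanh(x) - tanh_taylor3(x)) ** 2 for x in xs) / n

narrow = approx_mse(-0.5, 0.5)   # near zero: the polynomial tracks tanh closely
wide = approx_mse(-3.0, 3.0)     # wider range: the approximation blows up
```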
Experimental evaluations demonstrate notable improvements, with ANN estimators enhancing accuracy by approximately 2% for Sigmoid and up to 73% for Tanh functions, improving F1-scores by approximately 2% for Sigmoid and up to 88% for Tanh, and markedly reducing Mean Square Error (MSE) by up to 96% compared to polynomial approximations. The ANN estimator achieves an accuracy of 97.70% and an AUC of 0.9997 when integrated into a CNN architecture on the MNIST dataset, and an accuracy of 85.25% with an AUC of 0.9459 on the UCI Heart Disease dataset during ciphertext inference. These results underscore the estimator’s practical effectiveness and computational feasibility, making it suitable for secure and efficient ANN inference in encrypted environments.

Publication: Intelligent health monitoring in 6G networks: machine learning-enhanced VLC-based medical body sensor networks (Multidisciplinary Digital Publishing Institute (MDPI), 2025-05-23). Antaki, Bilal; Dalloul, Ahmed Hany; Miramirkhani, Farshad.

Recent advances in Artificial Intelligence (AI)-driven wireless communication are driving the adoption of Sixth Generation (6G) technologies in crucial environments such as hospitals. Visible Light Communication (VLC) leverages existing lighting infrastructure to deliver high data rates while mitigating electromagnetic interference (EMI); however, patient movement induces fluctuating signal strength and dynamic channel conditions. In this paper, we present a novel integration of site-specific ray tracing and machine learning (ML) for channel modeling in VLC-enabled Medical Body Sensor Networks (MBSNs) across distinct hospital settings. First, we introduce a Q-learning-based adaptive modulation scheme that meets target symbol error rates (SERs) in real time without prior environmental information. Second, we develop a Long Short-Term Memory (LSTM)-based estimator for path loss and Root Mean Square (RMS) delay spread under dynamic hospital conditions.
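The Q-learning idea above, learning which modulation order to use in each channel state so the target SER is met without prior environmental knowledge, can be sketched as a single-step (bandit-style) formulation. The states, SER table, efficiencies, and rewards below are invented for illustration:

```python
import random

# Hypothetical setup: 2 channel states, 3 modulation orders (actions).
# Reward is the spectral efficiency if the synthetic SER meets the target,
# otherwise a penalty. All values are illustrative only.
SER = {(0, 0): 0.001, (0, 1): 0.005, (0, 2): 0.2,
       (1, 0): 0.002, (1, 1): 0.15,  (1, 2): 0.4}
TARGET_SER = 0.01
EFFICIENCY = [1, 2, 3]  # bits/symbol per action

def reward(state, action):
    return EFFICIENCY[action] if SER[(state, action)] <= TARGET_SER else -1.0

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1, 2)}
    for _ in range(episodes):
        s = rng.randrange(2)
        # epsilon-greedy: explore with prob eps, otherwise exploit current Q
        a = rng.randrange(3) if rng.random() < eps else max((0, 1, 2), key=lambda a: q[(s, a)])
        # single-step update: Q <- Q + alpha * (r - Q)
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
best = {s: max((0, 1, 2), key=lambda a: q[(s, a)]) for s in (0, 1)}
```

With these synthetic tables, the learned policy picks the highest-order modulation whose SER still meets the target in each state.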
To our knowledge, this is the first study combining ray-traced channel impulse response (CIR) modeling with ML techniques in hospital scenarios. The simulation results demonstrate that the Q-learning method consistently achieves the target SERs, with a spectral efficiency (SE) lower than the optimum near the threshold. Furthermore, LSTM estimation shows that D1 has the highest Root Mean Square Error (RMSE) for path loss (1.6797 dB) and RMS delay spread (1.0567 ns) in the Intensive Care Unit (ICU) ward, whereas D3 exhibits the highest RMSE for path loss (1.0652 dB) and RMS delay spread (0.7657 ns) in the Family-Type Patient Rooms (FTPRs) scenario, demonstrating high estimation accuracy under realistic conditions.

Publication: Supervised decision making in forex investment using ML and DL classification methods (Işık Üniversitesi, 2023-07-20). Jiroudi, Abdullah; Eskil, Mustafa Taner. Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, Bilgisayar Mühendisliği Yüksek Lisans Programı; Işık University, School of Graduate Studies, Master’s Program in Computer Engineering.

The suggested trading system offers an approach that takes into account the complexity and high trading volume of the foreign exchange (FX) market. Its main objective is to address the challenges faced by traders in the GBP/JPY currency pair and assist them in making quick decisions. To achieve this, machine learning and deep learning techniques are integrated to propose a trading algorithm. The proposed algorithm works by combining data from different time intervals. The Long Short-Term Memory (LSTM) model is used to predict indicator values, while the XGBoost classifier is employed to determine trading decisions. This method aims to adapt to rapidly changing patterns in the forex market and enables the detection of subtle changes in price dynamics through a sliding window training approach. Experiments conducted have shown promising results for the suggested trading system.
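The sliding-window training mentioned above reduces to generating (input window, target) pairs from a series, so the model is repeatedly refit on the most recent data; a minimal sketch with a toy series:

```python
def sliding_windows(series, window, horizon=1):
    """Split a price/indicator series into (input window, target) pairs:
    each window of `window` points predicts the value `horizon` steps later."""
    pairs = []
    for start in range(len(series) - window - horizon + 1):
        x = series[start:start + window]
        y = series[start + window + horizon - 1]
        pairs.append((x, y))
    return pairs

pairs = sliding_windows([1, 2, 3, 4, 5, 6], window=3)
```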
Positive outcomes have been obtained in terms of capital growth and prediction accuracy. However, since the method is highly risky, future work targets the inclusion of risk management techniques and algorithm optimization. This study contributes to the improvement of trading strategies while bridging the gap between researchers and traders. It also demonstrates the potential of machine learning and deep learning techniques to enhance decision-making processes in financial markets. This trading system offers traders a range of advantages. The utilization of machine learning and deep learning techniques enables rapid analysis of large amounts of data and supports fast decision-making. Additionally, by combining data from different time intervals, it becomes possible to evaluate long-term trends and short-term fluctuations more effectively. In conclusion, the suggested trading system empowers traders to be competitive in the forex market and achieve better outcomes. Furthermore, it contributes to the increased utilization of machine learning and deep learning techniques in financial markets and encourages further research in the field.

Publication: Analyst-aware incident assignment in security operations centers: a multi-factor prioritization and optimization framework (Uğur Şen, 2025-07-15). Kılınçdemir, Eyüp Can; Çeliktaş, Barış.

In this paper, we propose a comprehensive and scalable framework for incident assignment and prioritization in Security Operations Centers (SOCs). The proposed model aims to optimize SOC workflows by addressing key operational challenges such as analyst fatigue, alert overload, and inconsistent incident handling.
Our framework evaluates each incident using a multi-factor scoring model that incorporates incident severity, service-level agreement (SLA) urgency, incident type, asset criticality, threat intelligence indicators, frequency of repetition, and a correlation score derived from historical incident data. We formalize this evaluation through a set of mathematical functions that compute a dynamic incident score and derive incident complexity. In parallel, analyst profiles are quantified using Analyst Load Factor (ALF) and Experience Match Factor (EMF), two novel metrics that account for both workload distribution and expertise alignment. The incident–analyst matching process is expressed as a constrained optimization problem, where the final assignment score is computed by balancing incident priority with analyst suitability. This formulation enables automated, real-time assignment of incidents to the most appropriate analysts, while ensuring both operational fairness and triage precision. The model is validated using algorithmic pseudocode, scoring tables, and a simplified case study, which illustrates the real-world applicability and decision logic of the framework in large-scale SOC environments. To validate the framework under real-world conditions, an empirical case study was conducted using 10 attack scenarios from the CICIDS2017 benchmark dataset. Overall, our contributions lie in the formalization of a dual-factor analyst scoring scheme and the integration of contextual incident features into an adaptive, rule-based assignment framework. To further strengthen operational value, future work will explore adaptive weighting mechanisms and integration with real-time SIEM pipelines.
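A toy sketch of the dual-factor idea, balancing a weighted incident score against analyst load (ALF) and experience match (EMF). The weights, the linear form, and the load penalty are illustrative stand-ins for the paper's actual scoring functions:

```python
# Illustrative weights over normalized (0-1) incident factors; the paper's
# scoring model uses more factors and different functional forms.
INCIDENT_WEIGHTS = {"severity": 0.4, "sla_urgency": 0.3,
                    "asset_criticality": 0.2, "threat_intel": 0.1}

def incident_score(incident):
    """Weighted sum of normalized incident factors."""
    return sum(INCIDENT_WEIGHTS[k] * incident[k] for k in INCIDENT_WEIGHTS)

def assignment_score(incident, analyst, load_penalty=0.5):
    """Balance incident priority with analyst suitability:
    higher experience match (EMF) helps, higher load (ALF) hurts."""
    return incident_score(incident) * analyst["emf"] - load_penalty * analyst["alf"]

def assign(incident, analysts):
    """Pick the analyst maximizing the assignment score."""
    return max(analysts, key=lambda a: assignment_score(incident, a))["name"]

incident = {"severity": 0.9, "sla_urgency": 0.8,
            "asset_criticality": 0.6, "threat_intel": 0.5}
analysts = [
    {"name": "A", "emf": 0.9, "alf": 0.8},  # expert but heavily loaded
    {"name": "B", "emf": 0.7, "alf": 0.1},  # decent match, nearly idle
]
chosen = assign(incident, analysts)
```

Even with a better expertise match, analyst A loses the assignment here because the load penalty captures the fatigue/fairness concern.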
Additionally, feedback loops and supervised learning models will be incorporated to continuously refine analyst–incident matching and prioritization.

Publication: Electrophysiological signatures of developmental dyslexia: towards EEG-based biomarker identification and neurogenetic correlates (MDPI, 2025-06-30). Eroğlu, Günet; Harb, Mhd Raja Abou.

Dyslexia is a neurodevelopmental disorder characterized by altered hemispheric specialization and disrupted phonological processing. In this study, we applied Principal Component Analysis (PCA) to high-dimensional electroencephalographic (EEG) recordings from 200 children (100 dyslexic, 100 controls) to extract latent neurophysiological features associated with reading impairment. Our findings revealed significant right-hemisphere dominance in dyslexic individuals, particularly in the P8 electrode within the alpha band, consistent with compensatory neural strategies. Despite the absence of clinical comorbidities or medication use, distinct clustering emerged, supporting the utility of PCA for early screening. Future directions include correlating EEG-derived features with known dyslexia-related gene expression profiles (e.g., DCDC2, KIAA0319), neurotransmitter imbalances, and neuroinflammatory markers. These integrative analyses may establish EEG signals as reliable, non-invasive biomarkers for molecular-level screening in developmental learning disorders.

Publication: Geopolitical parallax: beyond Walter Lippmann just after large language models (Cornell Univ, 2025-08-27). Yavuz, Mehmet Can; Kabir, Humza Gohar; Özkan, Aylin.

Objectivity in journalism has long been contested, oscillating between ideals of neutral, fact-based reporting and the inevitability of subjective framing. With the advent of large language models (LLMs), these tensions are now mediated by algorithmic systems whose training data and design choices may themselves embed cultural or ideological biases.
This study investigates geopolitical parallax (systematic divergence in news quality and subjectivity assessments) by comparing article-level embeddings from Chinese-origin (Qwen, BGE, Jina) and Western-origin (Snowflake, Granite) model families. We evaluate both on a human-annotated news quality benchmark spanning fifteen stylistic, informational, and affective dimensions, and on parallel corpora covering politically sensitive topics, including Palestine and reciprocal China–United States coverage. Using logistic regression probes and matched-topic evaluation, we quantify per-metric differences in predicted positive-class probabilities between model families. Our findings reveal consistent, non-random divergences aligned with model origin. In Palestine-related coverage, Western models assign higher subjectivity and positive emotion scores, while Chinese models emphasize novelty and descriptiveness. Cross-topic analysis shows asymmetries in structural quality metrics, with Chinese-on-US coverage scoring notably lower in fluency, conciseness, technicality, and overall quality, contrasted by higher negative emotion scores. These patterns align with media bias theory and our distinction between semantic, emotional, and relational subjectivity, and extend the LLM bias literature by showing that geopolitical framing effects persist in downstream quality assessment tasks. We conclude that LLM-based media evaluation pipelines require cultural calibration to avoid conflating content differences with model-induced bias.
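The probe methodology above, scoring each family's article embeddings with a logistic-regression probe and differencing the mean positive-class probabilities per metric, can be sketched as follows. The probe weights and the tiny embeddings are invented for illustration; a real probe is fit on the annotated benchmark:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def probe_probability(embedding, weights, bias=0.0):
    """Positive-class probability from a logistic-regression probe
    (weights here are made up; in practice they are fit on labeled articles)."""
    return sigmoid(sum(w * x for w, x in zip(weights, embedding)) + bias)

def mean_family_divergence(family_a, family_b, weights):
    """Per-metric parallax: difference in mean predicted probability between
    matched-topic articles embedded by two model families."""
    pa = sum(probe_probability(e, weights) for e in family_a) / len(family_a)
    pb = sum(probe_probability(e, weights) for e in family_b) / len(family_b)
    return pa - pb

weights = [0.8, -0.5, 0.3]                      # hypothetical probe for one metric
family_a = [[0.9, 0.1, 0.4], [0.7, 0.2, 0.5]]   # toy embeddings, family A
family_b = [[0.2, 0.6, 0.1], [0.1, 0.7, 0.2]]   # toy embeddings, family B
gap = mean_family_divergence(family_a, family_b, weights)
```

A nonzero gap on matched topics is the kind of per-metric divergence the study attributes to model origin rather than content.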