7 results
Search Results
Showing 1 - 7 / 7
Publication: Speech command recognition with an attention-based deep network using an adaptive locally connected layer (Institute of Electrical and Electronics Engineers Inc., 2020-10-05) Turkan, Yasemin; Tek, Faik Boray
Speech command recognition is an active research topic related to human-machine interfaces. Attention-based deep networks can solve such problems successfully. In this study, an existing attention-based deep network method was further improved by using an adaptive locally connected (focusing) layer. In comparative experiments on the Google and Kaggle speech command datasets, on which the original method was evaluated, we observed that the proposed attention-based network with the adaptive locally connected layer improved recognition accuracy by 2.6%.

Publication: Segmentation based classification of retinal diseases in OCT images (Institute of Electrical and Electronics Engineers Inc., 2024) Eren, Öykü; Tek, Faik Boray; Turkan, Yasemin
Volumetric optical coherence tomography (OCT) scans offer detailed visualization of the retinal layers, where any deformation can indicate potential abnormalities. This study introduces a method for classifying ocular diseases in OCT images through transfer learning. Applying transfer learning from natural images to OCT scans presents challenges, particularly when target-domain examples are limited. Our approach aims to enhance OCT-based retinal disease classification by leveraging transfer learning more effectively. We hypothesize that providing an explicit layer structure can improve classification accuracy. Using the OCTA-500 dataset, we explored various configurations by segmenting the retinal layers and integrating these segmentations with OCT scans.
By combining horizontal and vertical cross-sectional middle slices and their blends with segmentation outputs, we achieved a classification accuracy of 91.47% and an Area Under the Curve (AUC) of 0.96, significantly outperforming the classification of plain OCT slice images.

Publication: Retinal disease classification using optical coherence tomography angiography images (Institute of Electrical and Electronics Engineers Inc., 2024) Aydın, Ömer Faruk; Nazlı, Muhammet Serdar; Tek, Faik Boray; Turkan, Yasemin
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging modality widely used for the detailed visualization of retinal microvasculature, which is crucial for diagnosing and monitoring various retinal diseases. However, manual interpretation of OCTA images is labor-intensive and prone to variability, highlighting the need for automated classification methods. This study presents an approach that utilizes transfer learning to classify OCTA images into different retinal disease categories, including age-related macular degeneration (AMD) and diabetic retinopathy (DR). We used the OCTA-500 dataset [1], the largest publicly available retinal dataset, containing images from 500 subjects with diverse retinal conditions. To address class imbalance, we employed k-fold cross-validation and grouped various other conditions under an 'OTHERS' class. Additionally, we compared the performance of the ResNet50 model with OCTA inputs to that of the ResNet50 and RETFound (Vision Transformer) models with OCT inputs to assess the efficiency of OCTA in retinal condition classification. In the three-class (AMD, DR, Normal) classification, ResNet50-OCTA outperformed ResNet50-OCT but slightly underperformed RETFound-OCT, which was pretrained on a large OCT dataset. In the four-class (AMD, DR, Normal, Others) classification, ResNet50-OCTA and RETFound-OCT achieved similar classification accuracies.
This study establishes a baseline for retinal condition classification on the OCTA-500 dataset and provides a comparison between the OCT and OCTA input modalities.

Publication: Automated diagnosis of Alzheimer's Disease using OCT and OCTA: a systematic review (Institute of Electrical and Electronics Engineers Inc., 2024-08-06) Turkan, Yasemin; Tek, Faik Boray; Arpacı, Fatih; Arslan, Ozan; Toslak, Devrim; Bulut, Mehmet; Yaman, Aylin
Retinal optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) have emerged as promising, non-invasive, and cost-effective modalities for the early diagnosis of Alzheimer's disease (AD). However, a comprehensive review of automated deep learning techniques for diagnosing AD or mild cognitive impairment (MCI) using OCT/OCTA data is lacking. We addressed this gap by conducting a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We systematically searched databases including Scopus, PubMed, and Web of Science, and identified 16 relevant studies from an initial set of 4006 references. We then analyzed these studies through a structured framework, focusing on the key aspects of deep learning workflows for AD/MCI diagnosis using OCT/OCTA: dataset curation, model training, and validation methodologies. Our findings indicate a shift towards employing end-to-end deep learning models that directly analyze OCT/OCTA images for diagnosing AD/MCI, moving away from traditional machine learning approaches. However, we identified inconsistencies in data collection methods across studies, leading to varied outcomes.
We emphasize the need for longitudinal studies on early AD and MCI diagnosis, along with further research on interpretability tools, to enhance model accuracy and reliability for clinical translation.

Publication: Retinal disease classification from bimodal OCT and OCTA using a CNN-ViT hybrid architecture (Institute of Electrical and Electronics Engineers Inc., 2025-09-21) Aydın, Ömer Faruk; Tek, Faik Boray; Turkan, Yasemin
Retinal diseases are the leading cause of vision impairment and blindness worldwide. Early and accurate diagnosis is critical for effective treatment, and recent advances in imaging technologies such as Optical Coherence Tomography (OCT) and OCT Angiography (OCTA) have enabled detailed visualization of the retinal structure and vasculature. Leveraging these modalities, this study proposes a deep learning architecture called MultiModalNet for automated multi-class retinal disease classification. MultiModalNet employs a dual-branch design in which OCTA projection maps are processed through a ResNet101 encoder, while cross-sectional slices from the OCT volume (B-scans) are analyzed by a Vision Transformer (ViT-Large). The features extracted from both branches are fused and passed through fully connected layers for the final classification. Evaluated on the three-class OCTA-500 dataset, which includes Age-related Macular Degeneration (AMD), Diabetic Retinopathy (DR), and Normal cases, the proposed model achieved a state-of-the-art classification accuracy of 94.59%, significantly outperforming single-modality baselines. This result highlights the effectiveness of integrating vascular and structural information to improve diagnostic performance.
The findings suggest that hybrid multi-modal deep learning approaches can play a transformative role in computer-aided ophthalmology, enhancing both clinical decision-making and screening workflows.

Publication: Self-supervised learning of 3D structure from 2D OCT slices for retinal disease diagnosis on UK Biobank scans (Institute of Electrical and Electronics Engineers Inc., 2025-09-21) Nazlı, Muhammet Serdar; Turkan, Yasemin; Tek, Faik Boray
This study presents a self-supervised learning framework for retinal disease classification using Optical Coherence Tomography (OCT) scans. To balance the contextual richness of 3D volumes with the computational efficiency of 2D architectures, we introduce a quasi-3D input generation strategy. Each input is constructed by stacking three OCT slices, sampled from channel-specific Gaussian distributions centered on the volume midplane and arranged in a standard three-channel 2D format compatible with existing pre-trained models. These quasi-3D images are used to pre-train a Vision Transformer (ViT-Base) via a Masked Autoencoder (MAE) with a shared masking pattern, encouraging the model to reconstruct masked regions by encoding anatomical continuity across slices. Pre-training is conducted on 10,000 unlabeled OCT volumes from the UK Biobank. The encoder is then fine-tuned on the OCTA-500 dataset for three-class and four-class retinal disease classification tasks, including macular degeneration and diabetic retinopathy. The model achieves 92.57% accuracy on the three-class task, matching the performance of RETFound while using over 150 times less pre-training data and a smaller backbone.

Publication: Deep learning-based analysis of retinal OCT scans for detection of Alzheimer's disease (Işık Üniversitesi, Lisansüstü Eğitim Enstitüsü, 2026-01-23) Turkan, Yasemin; Tek, Faik Boray; Işık University, School of Graduate Studies, Ph.D. in Computer Engineering
Alterations in retinal layer thickness have been associated with neurodegenerative diseases such as Alzheimer's disease (AD). These structural changes can be measured using a noninvasive imaging technology called Optical Coherence Tomography (OCT). Previous research has mostly focused on statistical associations between AD and segmented retinal layer thickness derived from OCT or OCTA devices. Unlike conventional medical image classification tasks, early detection is more challenging than diagnosis because imaging precedes clinical diagnosis by several years. Deep learning (DL), particularly through convolutional neural networks (CNNs) and transfer learning, has demonstrated strong performance in image-based disease detection tasks. However, the application of DL directly to unsegmented raw OCT B-scan images for early AD detection remains underexplored. In this thesis, we address this research gap by proposing a deep learning-based approach that uses raw OCT images for early Alzheimer's disease detection. Related studies in the literature have relied heavily on private, in-situ cohorts that lack interoperability. In contrast, the UK Biobank (2022) offers a unique resource for investigating associations between retinal structure and systemic health, comprising over 85,000 OCT scans linked to cognitive and health-related data. Between the initial scan period (2010–2015) and July 2023, 539 participants in the dataset were diagnosed with AD. Although the UK Biobank is somewhat limited by the absence of OCTA scans, we utilized this dataset to detect early AD from OCT scans. After a rigorous data-exclusion process, this study used a targeted 4-year window, selecting participants diagnosed with AD within 4 years of their baseline assessments. The AD group was matched by age, sex, eye, and instance with a randomly selected balanced Healthy Control group (N = 30).
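The matched-control selection described above (pairing each AD participant with a healthy control on age, sex, eye, and instance) can be sketched in plain Python. This is a simplified illustration under assumed data structures; exact-equality matching and the tie-breaking shown here are assumptions, not the thesis's actual procedure:

```python
import random

def match_controls(ad_group, pool, keys=("age", "sex", "eye", "instance"), seed=0):
    """Pick one healthy control per AD participant, matched on the given
    attributes. Exact-equality matching is a simplification; a real
    pipeline would likely allow an age tolerance."""
    rng = random.Random(seed)
    controls, used = [], set()
    for case in ad_group:
        # candidate indices: unused pool members matching on every key
        candidates = [i for i, p in enumerate(pool)
                      if i not in used and all(p[k] == case[k] for k in keys)]
        if not candidates:
            continue  # no exact match available for this case
        i = rng.choice(candidates)
        used.add(i)
        controls.append(pool[i])
    return controls

# toy example with hypothetical records
ad = [{"age": 70, "sex": "F", "eye": "L", "instance": 0}]
pool = [{"age": 70, "sex": "F", "eye": "L", "instance": 0},
        {"age": 55, "sex": "M", "eye": "R", "instance": 0}]
print(len(match_controls(ad, pool)))  # 1
```

Sampling without replacement (the `used` set) keeps the control group balanced, mirroring the one-to-one matched design of the N = 30 cohort.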
We first evaluated the predictive value of isolated 2D B-scans using pretrained deep learning architectures. In these tests, the ResNet-34 model achieved a mean AUC of 0.624 ± 0.060. Saliency map analysis of these B-scans highlighted the critical importance of the central macular region, whereas peripheral areas showed negligible contribution to the model’s decision. To overcome the limitations of isolated B-scans and leverage 3D information, we generated a 3D-informed en-face thickness projection map from the OCT B-scans. This pipeline was optimized to focus on the diagnostically relevant 3 mm inner macular region, effectively filtering out peripheral noise. Our study of thickness maps identified the Ganglion Cell Layer (GCL) as the most significant indicator of preclinical AD. The VGG-19 model, trained on GCL thickness maps with a year-weighted loss function, achieved a peak mean AUC of 0.750 ± 0.037. Notably, the traditional clinical benchmark, the Retinal Nerve Fiber Layer (RNFL), exhibited negligible predictive value in this pre-symptomatic cohort. We also developed a Multi-Modal Soft-Voting Ensemble model to further increase predictive accuracy and emulate clinical decision-making. This model integrates structural insights from B-scans and GCIPL thickness maps with clinical and demographic data. The ensemble approach achieved the highest mean AUC of 0.85 and significantly outperformed individual modalities. Furthermore, an ablation study using only image modalities (B-scans and thickness maps) yielded an AUC of 0.84. This result highlights the strong complementary value of combined structural data. Longitudinal sensitivity analysis also established a “diagnostic horizon” for retinal biomarkers. We observed that predictive accuracy is highest between 4 and 8 years prior to clinical diagnosis. However, these signals progressively converge toward baseline by the 12-year mark. 
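The Multi-Modal Soft-Voting Ensemble described above amounts to averaging per-class probability vectors from the individual modality models before taking the argmax. A minimal NumPy sketch, where the modality inputs and any weights are illustrative assumptions rather than the thesis's actual configuration:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft voting: average (optionally weighted) class probabilities
    from several models, then predict the argmax class.

    prob_list holds one (n_samples, n_classes) array per modality,
    e.g. a B-scan CNN, a thickness-map CNN, and a clinical model."""
    probs = np.stack(prob_list)                     # (n_models, n, c)
    avg = np.average(probs, axis=0, weights=weights)
    return avg.argmax(axis=1), avg

# two toy "modality models" scoring three samples (class 0 = AD, 1 = control)
p_bscan = np.array([[0.6, 0.4], [0.2, 0.8], [0.5, 0.5]])
p_thick = np.array([[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]])
labels, avg = soft_vote([p_bscan, p_thick])
print(labels)  # [0 1 1]
```

Averaging probabilities (rather than hard votes) lets a confident modality outweigh an uncertain one, which is one plausible reason the ensemble outperforms its individual members.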
When benchmarked against the current literature, our framework outperformed existing baselines for the diagnosis of symptomatic Mild Cognitive Impairment (MCI). This demonstrates its robustness in the more challenging task of preclinical prediction. Consequently, it establishes a viable pathway for integrating retinal imaging into the early diagnostic pipeline for Alzheimer’s disease.
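The AUC values reported throughout these entries can be computed directly from raw model scores via the rank-based (Mann-Whitney U) formulation: the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counted half. A small self-contained sketch:

```python
import numpy as np

def auc_score(y_true, y_score):
    """Area under the ROC curve via pairwise comparisons.

    Quadratic in cohort size, which is fine for small matched
    cohorts like the N = 30 groups discussed above."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    pos, neg = y_score[y_true], y_score[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 1, 0, 0])
print(auc_score(labels, scores))  # 1.0
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values such as 0.624 for isolated B-scans versus 0.85 for the ensemble represent a meaningful gap.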