3 results
Search Results
Showing 1 - 3 of 3
Publication: Assessing dyslexia with machine learning: a pilot study utilizing Google ML Kit (IEEE, 2023-12-19). Eroğlu, Günet; Harb, Mhd Raja Abou.
In this study, we explore the application of Google ML Kit, a machine learning development kit, for dyslexia detection in the Turkish language. We collected face-tracking data from two groups: 49 dyslexic children and 22 typically developing children. Using Google ML Kit and other machine learning algorithms based on eye-tracking data, we compared their performance in dyslexia detection. Our findings reveal that Google ML Kit achieved the highest accuracy among the tested methods. This study underscores the potential of machine learning-based dyslexia detection and its practicality in academic and clinical settings.

Publication: Efficient estimation of Sigmoid and Tanh activation functions for homomorphically encrypted data using Artificial Neural Networks (Institute of Electrical and Electronics Engineers Inc., 2024). Harb, Mhd Raja Abou; Çeliktaş, Barış.
This paper presents a novel approach to estimating Sigmoid and Tanh activation functions using Artificial Neural Networks (ANN) optimized for homomorphic encryption. The proposed method is compared against second-degree polynomial and Piecewise Linear approximations, demonstrating a minor loss in accuracy while maintaining computational efficiency. Our results suggest that the ANN-based estimator is a viable alternative for secure machine learning models requiring privacy-preserving computation.

Publication: Assessing ChatGPT's accuracy in dyslexia inquiry (Institute of Electrical and Electronics Engineers Inc., 2024). Eroğlu, Günet; Harb, Mhd Raja Abou.
Dyslexia poses challenges in accessing reliable information, crucial for affected individuals and their families. Leveraging chatbot technology offers promise in this regard. This study evaluates the OpenAI Assistant's precision in addressing dyslexia-related inquiries. Three hundred questions commonly posed by parents were categorized and presented to the Assistant. Expert evaluation of responses, graded on accuracy and completeness, yielded consistently high scores (median = 5). Descriptive questions scored higher on average (4.9568) than yes/no questions (4.8957), suggesting that yes/no questions pose a slightly greater response challenge. Statistical analysis highlighted the significance of question specificity in response quality. Despite occasional difficulties, the Assistant demonstrated adaptability and reliability in providing accurate dyslexia-related information.
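The second entry above compares an ANN estimator against second-degree polynomial and Piecewise Linear baselines, which matter because homomorphic encryption schemes can evaluate only additions and multiplications on ciphertexts, so a nonlinear activation must be replaced by a polynomial-friendly surrogate. A minimal plaintext sketch of the two baselines for Sigmoid follows; the interval, grid density, and knot count are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Sigmoid on a bounded interval; under homomorphic encryption only
# polynomial operations are available, so exp() must be approximated.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative interval and sample grid (assumed, not from the paper).
x = np.linspace(-4.0, 4.0, 401)
y = sigmoid(x)

# Baseline 1: second-degree polynomial, fitted by least squares.
coeffs = np.polyfit(x, y, deg=2)
poly_err = np.max(np.abs(np.polyval(coeffs, x) - y))

# Baseline 2: piecewise linear interpolation between a few knots.
knots = np.linspace(-4.0, 4.0, 9)
pwl_err = np.max(np.abs(np.interp(x, knots, sigmoid(knots)) - y))

print(f"max |sigmoid - poly2| on [-4, 4]: {poly_err:.4f}")
print(f"max |sigmoid - PWL|   on [-4, 4]: {pwl_err:.4f}")
```

The piecewise-linear fit is more accurate on this grid, but evaluating it homomorphically requires comparisons, which most HE schemes support only approximately; the polynomial form needs nothing beyond ciphertext additions and multiplications, which is the trade-off the paper's ANN-based estimator targets.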