Search Results

Listing 1 - 2 of 2
  • Publication
    An observation based muscle model for simulation of facial expressions
    (Elsevier Science BV, 2018-05) Erkoç, Tuğba; Ağdoğan, Didem; Eskil, Mustafa Taner
    This study presents a novel facial muscle model for coding facial expressions. We derive this model from non-intrusive observation of human subjects during the progress of the surprise expression. We use a generic, single-layered face model that embeds the major muscles of the human face. This model is customized onto the human subject's face in the first frame of the video. A set of manually marked feature points in the last frame of the video is projected to estimate the three-dimensional displacements of the vertices due to the facial expression. The vertex displacements are fed into a mass-spring model to estimate the external forces, i.e. the muscle forces acting on the skin. We observed that the distribution of muscle forces resembles a sigmoid or hyperbolic tangent function. We chose the hyperbolic tangent function as our base model and parameterized it using least squares. We compared the proposed muscle model with frequently used models in the literature.
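The fitting step the abstract describes, parameterizing a hyperbolic tangent to match an observed force profile by least squares, could be sketched as below. This is an illustrative sketch only: the model form f(x) = a·tanh(b·(x − c)), the brute-force grid search, and all parameter names are assumptions, not the paper's actual parameterization.

```python
import numpy as np

def tanh_model(x, a, b, c):
    # Assumed illustrative form of the muscle-force model:
    # amplitude a, slope b, horizontal shift c.
    return a * np.tanh(b * (x - c))

def fit_tanh(x, y, a_grid, b_grid, c_grid):
    """Least-squares fit of the tanh model by exhaustive grid search
    (a simple stand-in for a proper nonlinear least-squares solver)."""
    best, best_err = None, np.inf
    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                err = np.sum((tanh_model(x, a, b, c) - y) ** 2)
                if err < best_err:
                    best, best_err = (a, b, c), err
    return best, best_err

# Synthetic "force profile" generated from known parameters,
# standing in for the estimated muscle forces on the skin.
x = np.linspace(-3.0, 3.0, 61)
y = tanh_model(x, 2.0, 1.5, 0.5)

grid = np.linspace(-3.0, 3.0, 13)  # step 0.5; contains the true values
params, err = fit_tanh(x, y, grid, grid, grid)
```

On this noiseless synthetic profile the search recovers a parameterization whose curve matches the data; note that (a, b) and (−a, −b) produce the same curve, so the recovered signs may differ from the generating ones.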
  • Publication
    A novel similarity based unsupervised technique for training convolutional filters
    (IEEE, 2023-05-17) Erkoç, Tuğba; Eskil, Mustafa Taner
    Achieving satisfactory results with Convolutional Neural Networks (CNNs) depends on how effectively the filters are trained. Conventionally, an appropriate number of filters is carefully selected, the filters are initialized with a suitable scheme, and they are trained with backpropagation over several epochs. This training scheme requires a large labeled dataset, which is costly and time-consuming to obtain. In this study, we propose an unsupervised approach that extracts convolutional filters from a given dataset in a self-organized manner, processing the training set only once and without backpropagation training. The proposed method allows filters to be extracted from a given dataset in the absence of labels. In contrast to previous studies, we no longer need to select the best number of filters or a suitable filter weight initialization scheme. Applying this method to the MNIST, EMNIST-Digits, Kuzushiji-MNIST, and Fashion-MNIST datasets yields high test performances of 99.19%, 99.39%, 95.03%, and 90.11%, respectively, without backpropagation training or any preprocessed or augmented data.
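As a purely hypothetical illustration of the kind of single-pass, label-free filter extraction the abstract describes: one could grow a filter bank by letting each image patch either merge into the most cosine-similar existing filter or found a new one. The paper's actual similarity-based algorithm is not reproduced here; the threshold value and the averaging rule are assumptions.

```python
import numpy as np

def extract_filters(patches, threshold=0.9):
    """Single-pass, label-free filter extraction (illustrative only).

    Each L2-normalized patch joins (is averaged into) the most similar
    existing filter, or founds a new filter when no existing one is
    similar enough -- so the number of filters emerges from the data
    instead of being chosen in advance.
    """
    filters = []
    for p in patches:
        v = np.asarray(p, dtype=float).ravel()
        n = np.linalg.norm(v)
        if n == 0.0:
            continue  # skip empty patches
        v /= n
        if filters:
            sims = [f @ v for f in filters]     # cosine similarity (unit vectors)
            i = int(np.argmax(sims))
            if sims[i] >= threshold:
                merged = filters[i] + v          # running mean direction,
                filters[i] = merged / np.linalg.norm(merged)  # renormalized
                continue
        filters.append(v)                        # found a new filter
    return np.array(filters)

rng = np.random.default_rng(0)
# Synthetic 3x3 patches drawn around two clearly distinct prototypes.
protos = np.array([[1.0] * 9,
                   [1.0, -1.0] * 4 + [1.0]])
patches = [p + rng.normal(scale=0.05, size=9) for p in protos for _ in range(20)]
bank = extract_filters(patches, threshold=0.9)
```

With two well-separated prototypes, the bank self-organizes into exactly two unit-norm filters, illustrating the abstract's point that neither the filter count nor an initialization scheme is chosen in advance.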