Search Results

Now showing 1 - 2 of 2
  • Publication
    Driver recognition and driver verification using data mining techniques
    (Işık Üniversitesi, 2007-09-25) Benli, Kristin Surpuhi; Eskil, Mustafa Taner; Işık Üniversitesi, Institute of Science, Master's Program in Computer Engineering
    In this thesis we present our research in driver recognition and driver verification. The goal of this study is to investigate the effect of different classifier fusion techniques on the performance of driver recognition and driver verification. We use five different driving behavior signals to identify drivers. Driving features were extracted from these signals, and Gaussian Mixture Models were used to model driver behavior. Gaussian Mixture Model training was performed using the well-known EM algorithm. In the recognition study, posterior probabilities of identities, called scores, were obtained for the given test data. These scores were combined using different fixed and trainable (adaptive) combination methods. In the verification study, we compared posterior probabilities with fixed threshold values for each classifier. For different thresholds, the false-accept rate versus false-reject rate was plotted as a receiver operating characteristic curve. We observed lower error rates when we used trainable combiners. We conclude that combined multi-modal signal and classifier methods are very successful in biometric recognition and verification of a person in a car environment.
  • Publication
    Facial expression recognition based on facial anatomy
    (Işık Üniversitesi, 2013-06-06) Benli, Kristin Surpuhi; Eskil, Mustafa Taner; Işık Üniversitesi, Institute of Science, Doctoral Program in Computer Engineering
    In this thesis we propose to determine the underlying muscle forces that compose a facial expression under the constraint of facial anatomy. Muscular activities are novel features that are highly representative of facial expressions. We model the human face with a generic 3D wireframe model that embeds all major muscles. The input to our expression recognition system is a video with a set of landmark points marked on the first frame. We use these points and a semi-automatic fitting algorithm to register the 3D face model to the subject's face. The influence regions of the facial muscles are estimated and projected onto the image plane to determine feature points. These points are tracked on the image plane using an optical flow algorithm. We estimate the rigid body transformation of the head through a greedy search algorithm. This stage enables us to align the 3D face model with the subject's head in consecutive frames of the video. We use ray tracing from the perspective reference point through the image plane to estimate the new coordinates of the model vertices. The estimated vertex coordinates indicate how the subject's face is deformed as an expression progresses. The relative motion of the model vertices yields an over-determined linear system of equations in which the unknown parameters are the muscle activation levels. This system of equations is solved using constrained least squares optimization. Muscle-activity-based features are evaluated in a classification problem of seven basic facial expressions. We demonstrate the representative power of muscle-force-based features on four classifiers: Linear Discriminant Analysis, Naive Bayes, k-Nearest Neighbor, and Support Vector Machine. The best performance on the classification problem of seven expressions, including neutral, was 87.1%, obtained with the Support Vector Machine. The results attained in this study are close to the human recognition ceiling of 87-91.7% and comparable with state-of-the-art algorithms in the literature.
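The GMM-based recognition step described in the first abstract (one Gaussian Mixture Model per driver, trained with EM, with the identity chosen by the highest score on test data) can be sketched as follows. This is a minimal illustration using scikit-learn's `GaussianMixture` on synthetic 2-D features; the number of drivers, the feature dimensions, and the data are assumptions for illustration, not the thesis's actual driving-behavior signals or fusion schemes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-driver training features (the thesis extracts features
# from five driving-behavior signals; here we draw synthetic 2-D samples,
# one cluster per driver).
train = {d: rng.normal(loc=d * 3.0, scale=1.0, size=(200, 2)) for d in range(3)}

# One GMM per driver; sklearn's fit() runs the EM algorithm internally.
models = {d: GaussianMixture(n_components=2, random_state=0).fit(X)
          for d, X in train.items()}

def recognize(sample, models):
    """Return the driver whose GMM assigns the highest total log-likelihood."""
    scores = {d: m.score_samples(sample).sum() for d, m in models.items()}
    return max(scores, key=scores.get)

# Test data drawn from driver 1's distribution should be recognized as driver 1.
test = rng.normal(loc=3.0, scale=1.0, size=(20, 2))
print(recognize(test, models))
```

In the thesis, the per-classifier scores would additionally be combined with fixed or trainable fusion rules before the final decision; the sketch above shows only the single-classifier maximum-likelihood decision.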
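The core numerical step of the second abstract, solving an over-determined linear system for muscle activation levels by constrained least squares, can be sketched with SciPy's `lsq_linear`. The system matrix, the non-negativity bound on activations, and the synthetic displacement data are assumptions for illustration; the thesis builds its system from tracked vertex motions of a 3D face model.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)

# Hypothetical linear system A x = b: each row relates one observed vertex
# displacement to the unknown muscle activation levels x. With far more
# vertices than muscles, the system is over-determined.
n_vertices, n_muscles = 60, 5
A = rng.normal(size=(n_vertices, n_muscles))
true_activation = np.array([0.0, 0.8, 0.0, 0.3, 0.5])
b = A @ true_activation + 0.01 * rng.normal(size=n_vertices)  # noisy observations

# Constrained least squares: here the assumed constraint is that muscle
# activations cannot be negative.
res = lsq_linear(A, b, bounds=(0.0, np.inf))
print(np.round(res.x, 2))
```

With low noise and many more equations than unknowns, the recovered activations closely match the true ones, which is what makes the activation levels usable as classification features.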