14 Hits in 0.81 sec

3D Face Recognition: Technology and Applications [chapter]

Berk Gökberk, Albert Ali Salah, Neşe Alyüz, Lale Akarun
<span title="">2009</span> <i title="Springer London"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2zwb6qalljesljeooadcc6uqrq" style="color: black;">Advances in Pattern Recognition</a> </i> &nbsp;
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-1-84882-385-3_9">doi:10.1007/978-1-84882-385-3_9</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/is4cpqk32bdf7ffonocls2vjsm">fatcat:is4cpqk32bdf7ffonocls2vjsm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180721005331/https://ir.cwi.nl/pub/13756/13756B.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/21/f2/21f2f1693fdb6478a8f3306f377f1ce7df6e036e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-1-84882-385-3_9"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students [article]

Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, Asli Arslan Esme
<span title="2019-01-16">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a multimodal approach for detection of students' behavioral engagement states (i.e., On-Task vs. Off-Task), based on three unobtrusive modalities: Appearance, Context-Performance, and Mouse. Final behavioral engagement states are achieved by fusing modality-specific classifiers at the decision level. Various experiments were conducted on a student dataset collected in an authentic classroom.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.05835v1">arXiv:1901.05835v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/73njb4q24vd4zlbhesyjxsbziq">fatcat:73njb4q24vd4zlbhesyjxsbziq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200824145158/https://arxiv.org/pdf/1901.05835v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/95/0f/950fefef8e0708622908d91810ceadafcfc82f7a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.05835v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Alternative face models for 3D face registration

Albert Ali Salah, Neşe Alyüz, Lale Akarun, Longin Jan Latecki, David M. Mount, Angela Y. Wu
<span title="2007-01-28">2007</span> <i title="SPIE"> Vision Geometry XV </i> &nbsp;
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade off computation time with accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/12.705860">doi:10.1117/12.705860</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a3muhvowhjgoxn3saziozwmlwm">fatcat:a3muhvowhjgoxn3saziozwmlwm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170809143146/https://www.cmpe.boun.edu.tr/~salah/spie07alternative.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c9/0b/c90b109301244e59771fec431a8d50a78e395956.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/12.705860"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Registration of three-dimensional face scans with average face models

Neşe Alyüz
<span title="2008-01-01">2008</span> <i title="SPIE-Intl Soc Optical Eng"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2emxxxtvbjco7e2f75dok676dy" style="color: black;">Journal of Electronic Imaging (JEI)</a> </i> &nbsp;
Salah, Alyüz, and Akarun: Registration of three-dimensional face scans… Journal of Electronic Imaging, Jan-Mar 2008, Vol. 17(1), 011006-6. Fig. 6: Shape space clustering distributions on the enriched training ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/1.2896291">doi:10.1117/1.2896291</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/giochutabbd37e3cueo7tqte6q">fatcat:giochutabbd37e3cueo7tqte6q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170808102946/https://www.cmpe.boun.edu.tr/~salah/salah08jei.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/82/e7/82e7247ce1ab80845ce605cb9e27cedb9ff6145e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/1.2896291"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Bosphorus Database for 3D Face Analysis [chapter]

Arman Savran, Neşe Alyüz, Hamdi Dibeklioğlu, Oya Çeliktutan, Berk Gökberk, Bülent Sankur, Lale Akarun
<span title="">2008</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
A new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions is presented in this paper. This database is unique from three aspects: i) the facial expressions are composed of a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations are available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for development and evaluation of algorithms on face recognition under adverse conditions and facial expression analysis as well as for facial expression synthesis.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-540-89991-4_6">doi:10.1007/978-3-540-89991-4_6</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ftnnakscgjhdbalxtuguut2cdq">fatcat:ftnnakscgjhdbalxtuguut2cdq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190222192331/http://pdfs.semanticscholar.org/4254/fbba3846008f50671edc9cf70b99d7304543.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/42/54/4254fbba3846008f50671edc9cf70b99d7304543.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-540-89991-4_6"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Detecting Behavioral Engagement of Students in the Wild Based on Contextual and Visual Data [article]

Eda Okur, Nese Alyuz, Sinem Aslan, Utku Genc, Cagri Tanriover, Asli Arslan Esme
<span title="2019-01-15">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To investigate the detection of students' behavioral engagement (On-Task vs. Off-Task), we propose a two-phase approach in this study. In Phase 1, contextual logs (URLs) are utilized to assess active usage of the content platform. If there is active use, the appearance information is utilized in Phase 2 to infer behavioral engagement. Incorporating the contextual information improved the overall F1-scores from 0.77 to 0.82. Our cross-classroom and cross-platform experiments showed the proposed generic and multi-modal behavioral engagement models' applicability to a different set of students or different subject areas.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.06291v1">arXiv:1901.06291v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ka5s2ufxt5ak3jfd7gurk52jai">fatcat:ka5s2ufxt5ak3jfd7gurk52jai</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200914043058/https://arxiv.org/pdf/1901.06291v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b6/99/b699702063819f7ac5752a2ec381fffc3ff19705.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.06291v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Regional Registration for Expression Resistant 3-D Face Recognition

Neşe Alyuz, Berk Gokberk, Lale Akarun
<span title="">2010</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xqa4uhvxwvgsbdpllabnuanz6e" style="color: black;">IEEE Transactions on Information Forensics and Security</a> </i> &nbsp;
Biometric identification from three-dimensional (3-D) facial surface characteristics has become popular, especially in high security applications. In this paper, we propose a fully automatic expression insensitive 3-D face recognition system. Surface deformations due to facial expressions are a major problem in 3-D face recognition. The proposed approach deals with such challenging conditions in several aspects. First, we employ a fast and accurate region-based registration scheme that uses common region models. These common models make it possible to establish correspondence to all the gallery samples in a single registration pass. Second, we utilize curvature-based 3-D shape descriptors. Last, we apply statistical feature extraction methods. Since all the 3-D facial features are regionally registered to the same generic facial component, subspace construction techniques may be employed. We show that linear discriminant analysis significantly boosts the identification accuracy. We demonstrate the recognition ability of our system using the multiexpression Bosphorus and the most commonly used 3-D face database, Face Recognition Grand Challenge (FRGCv2). Our experimental results show that in both databases we obtain comparable performance to the best rank-1 correct classification rates reported in the literature so far: 98.19% for the Bosphorus and 97.51% for the FRGCv2 database. We have also carried out the standard receiver operating characteristics (ROC III) experiment for the FRGCv2 database. At an FAR of 0.1%, the verification performance was 86.09%. This shows that model-based registration is beneficial in identification scenarios where speed-up is important, whereas for verification one-to-one registration can be more beneficial.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tifs.2010.2054081">doi:10.1109/tifs.2010.2054081</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dll45jvugfgaflyjzkbwyxq6rq">fatcat:dll45jvugfgaflyjzkbwyxq6rq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20131209104549/http://vanderberk.com/docs/2010regionalregistration.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/84/5d/845d7e0f5b477e62b484a7fd628e80651167d500.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tifs.2010.2054081"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

The Importance of Socio-Cultural Differences for Annotating and Detecting the Affective States of Students [article]

Eda Okur, Sinem Aslan, Nese Alyuz, Asli Arslan Esme, Ryan S. Baker
<span title="2019-01-12">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The development of real-time affect detection models often depends upon obtaining annotated data for supervised learning by employing human experts to label the student data. One open question in annotating affective data for affect detection is whether the labelers (i.e., human experts) need to be socio-culturally similar to the students being labeled, as this impacts the cost feasibility of obtaining the labels. In this study, we investigate the following research questions: For affective state annotation, how does the socio-cultural background of human expert labelers, compared to the subjects, impact the degree of consensus and distribution of affective states obtained? Secondly, how do differences in labeler background impact the performance of affect detection models that are trained using these labels?
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.03793v1">arXiv:1901.03793v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xako7q6idzcbfdyg3a7f3la3e4">fatcat:xako7q6idzcbfdyg3a7f3la3e4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200922230453/https://arxiv.org/pdf/1901.03793v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4d/b3/4db3088e77ef8c208832b65660952fbfd5eb10cb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.03793v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Role of Socio-cultural Differences in Labeling Students' Affective States [chapter]

Eda Okur, Sinem Aslan, Nese Alyuz, Asli Arslan Esme, Ryan S. Baker
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
The development of real-time affect detection models often depends upon obtaining annotated data for supervised learning by employing human experts to label the student data. One open question in labeling affective data for affect detection is whether the labelers (i.e., human experts) need to be socio-culturally similar to the students being labeled, as this impacts the cost and feasibility of obtaining the labels. In this study, we investigate the following research questions: For affective state labeling, how does the socio-cultural background of human expert labelers, compared to the subjects (i.e., students), impact the degree of consensus and distribution of affective states obtained? Secondly, how do differences in labeler background impact the performance of affect detection models that are trained using these labels? To address these questions, we employed experts from Turkey and the United States to label the same data collected through authentic classroom pilots with students in Turkey. We analyzed within-country and cross-country inter-rater agreements, finding that experts from Turkey obtained moderately better inter-rater agreement than the experts from the U.S., and the two groups did not agree with each other. In addition, we observed differences between the distributions of affective states provided by experts in the U.S. versus Turkey, and between the performances of the resulting affect detectors. These results suggest that there are indeed implications to using human experts who do not belong to the same population as the research subjects.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-93843-1_27">doi:10.1007/978-3-319-93843-1_27</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/chsdwlmdb5cw3akwtyjlcb64ly">fatcat:chsdwlmdb5cw3akwtyjlcb64ly</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190226012707/http://pdfs.semanticscholar.org/6a9f/1135730a6c576b3647cc5327b3ced8104735.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6a/9f/6a9f1135730a6c576b3647cc5327b3ced8104735.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-93843-1_27"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Towards Human Affect Modeling: A Comparative Analysis of Discrete Affect and Valence-Arousal Labeling [chapter]

Sinem Aslan, Eda Okur, Nese Alyuz, Asli Arslan Esme, Ryan S. Baker
<span title="">2018</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jyopc6cf5ze5vipjlm4aztcffi" style="color: black;">Communications in Computer and Information Science</a> </i> &nbsp;
There is still considerable disagreement on key aspects of affective computing, including even how affect itself is conceptualized. Using a multimodal student dataset collected while students were watching instructional videos and answering questions on a learning platform, we investigated the two key paradigms of how affect is represented through a comparative approach: (1) affect as a set of discrete states and (2) affect as a combination of a two-dimensional space of attributes. We specifically examined a set of discrete learning-related affects (Satisfied, Confused, and Bored) that are hypothesized to map to specific locations within the Valence-Arousal dimensions of the Circumplex Model of Emotion. For each of the key paradigms, we had five human experts label student affect on the dataset. We investigated two major research questions using their labels: (1) whether the hypothesized mappings between discrete affects and Valence-Arousal are valid and (2) whether affect labeling is more reliable with discrete affect or Valence-Arousal. Contrary to expectations, the results show that discrete labels did not directly map to Valence-Arousal quadrants in the Circumplex Model of Emotion. This indicates that the experts perceived and labeled these two relatively differently. On the other hand, the inter-rater agreement results show that the experts moderately agreed with each other within both paradigms. These results imply that researchers and practitioners should consider how affect information would operationally be used in an intelligent system when choosing from the two key paradigms of affect.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-92279-9_50">doi:10.1007/978-3-319-92279-9_50</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2sgvn2aq3bay5eo76os4wivrri">fatcat:2sgvn2aq3bay5eo76os4wivrri</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190221034547/http://pdfs.semanticscholar.org/2c70/576186229f6fa3c64fe2bd85e8b283e312e9.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2c/70/2c70576186229f6fa3c64fe2bd85e8b283e312e9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-319-92279-9_50"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions [chapter]

Neşe Alyüz, Berk Gökberk, Hamdi Dibeklioğlu, Arman Savran, Albert Ali Salah, Lale Akarun, Bülent Sankur
<span title="">2008</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
This paper presents an evaluation of several 3D face recognizers on the Bosphorus database, which was gathered for studies on expression and pose invariant face analysis. We provide identification results of three 3D face recognition algorithms, namely the generic face template based ICP approach, the one-to-all ICP approach, and the depth image-based Principal Component Analysis (PCA) method. All of these techniques treat faces globally and are usually accepted as baseline approaches. In addition, 2D texture classifiers are also incorporated in a fusion setting. Experimental results reveal that even though global shape classifiers achieve almost perfect identification in neutral-to-neutral comparisons, they are sub-optimal under extreme expression variations. We show that it is possible to boost the identification accuracy by focusing on the rigid facial regions and by fusing complementary information coming from shape and texture modalities.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-540-89991-4_7">doi:10.1007/978-3-540-89991-4_7</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/q7ywuye77bfvvorqtiiq6wdduy">fatcat:q7ywuye77bfvvorqtiiq6wdduy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20141016185208/https://staff.fnwi.uva.nl/h.dibeklioglu/documents/bioid2008_exp.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d6/da/d6da110352db471119446bbb2ef8cdce3b24659a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-540-89991-4_7"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

The ASC-Inclusion Perceptual Serious Gaming Platform for Autistic Children

Erik Marchi, Bjoern Schuller, Alice Baird, Simon Baron-Cohen, Amandine Lassalle, Helen O'Reilly, Delia Pigat, Peter Robinson, Ian Davies, Tadas Baltrusaitis, Andra Adams, Marwa Mahmoud (+12 others)
<span title="">2018</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/tvmhgvndondmjauwfc7sn4izxy" style="color: black;">IEEE Transactions on Games</a> </i> &nbsp;
'Serious games' are becoming extremely relevant to individuals who have specific needs, such as children with an Autism Spectrum Condition (ASC). Often, individuals with an ASC have difficulties in interpreting verbal and non-verbal communication cues during social interactions. The ASC-Inclusion EU-FP7 funded project aims to provide children who have an ASC with a platform to learn emotion expression and recognition through play in the virtual world. In particular, the ASC-Inclusion platform focuses on the expression of emotion via facial, vocal, and bodily gestures. The platform combines multiple analysis tools, using on-board microphone and web-cam capabilities. The platform utilises these capabilities via training games, text-based communication, animations, video and audio clips. This paper introduces current findings and evaluations of the ASC-Inclusion platform and provides a detailed description of the different modalities.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tg.2018.2864640">doi:10.1109/tg.2018.2864640</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ojg7t7h2bzftxgxhtenkerm7qm">fatcat:ojg7t7h2bzftxgxhtenkerm7qm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190307215627/http://pdfs.semanticscholar.org/dd79/40143d65eb6242a434ed3bd14bc41c7fe66d.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/dd/79/dd7940143d65eb6242a434ed3bd14bc41c7fe66d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tg.2018.2864640"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Metilen Mavisinin Doğal Kil Üzerine Adsorpsiyonu [Adsorption of Methylene Blue onto Natural Clay]

Serkan Bayar
<span title="2018-07-31">2018</span> <i title="Gumushane University Journal of Science and Technology Institute"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/qpvzhpk6ovclpffifx7svf424y" style="color: black;">Gümüşhane Üniversitesi Fen Bilimleri Enstitüsü Dergisi</a> </i> &nbsp;
I thank Zeynep Neşe KURT. Almeida, C.A.P., Debacher, N.A., Downs, A.J., Cottet, L. and Mello, C.A.D., 2009. ... ..., 2006), clay (Veli and Alyüz, 2007), bentonite (Bulut et al., 2008) and activated carbon (Khaled et al., 2009; Bangash and Alam, 2009; Schimmel et al., 2010) are among the most widely used adsorbents. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.17714/gumusfenbil.344748">doi:10.17714/gumusfenbil.344748</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/utxuvqfcrnexpmvrkvh4mczqga">fatcat:utxuvqfcrnexpmvrkvh4mczqga</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200313235003/https://dergipark.org.tr/tr/download/article-file/512400" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7a/aa/7aaa15784de62cc93b6b82948b134d49ed2eae68.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.17714/gumusfenbil.344748"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

İki Şair: Hâfız-Fuzûli Karşılaştırması [Two Poets: A Comparison of Hâfız and Fuzûlî]

Hamdi BİRGÖREN
<span title="2013-01-01">2013</span> <i title="Journal of Turkish Studies"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/kfdbgfcq7bhupdvvkgqmj4ufzq" style="color: black;">Turkish Studies</a> </i> &nbsp;
Kenan Alyüz et al., Fuzûlî Divanı, Akçağ Yayınları, Ankara 1990. Zindeginame-i Hafız, Zindeginame-i Şuarâ ve Dânişmendân, Şiir ve Hüner. ... Hâfızâ mey hor ve rindî kon ve hoş bâş velî / Dâm-ı tezvîr me-kon çûn digerân Kur'ân-râ (G. 9/10) "O Hâfız, drink wine, live as a rind and be merry, but do not, like the others, stoop to deceiving people by taking up the Qur'an ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.7827/turkishstudies.4693">doi:10.7827/turkishstudies.4693</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lsv7p6ksnzaefnrbjeph5vdxku">fatcat:lsv7p6ksnzaefnrbjeph5vdxku</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200207225102/http://www.turkishstudies.net/files/turkishstudies/1982665994_020Birg%C3%B6renHamdi-299-323.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f0/78/f07879647f8b466718c30243fffbe66402fa1234.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.7827/turkishstudies.4693"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>