Identifying User-Specific Facial Affects from Spontaneous Expressions with Minimal Annotation

Michael Xuelin Huang, Grace Ngai, Kien A. Hua, Stephen C.F. Chan, Hong Va Leong
2016 · IEEE Transactions on Affective Computing (Institute of Electrical and Electronics Engineers, IEEE)
This paper presents PADMA (Personalized Affect Detection with Minimal Annotation), a user-dependent approach for identifying affective states from spontaneous facial expressions without the need for expert annotation. The conventional approach relies on the use of key frames in recorded affect sequences and requires an expert observer to identify and annotate the frames. It is susceptible to user variability, and accommodating individual differences is difficult. The alternative is a user-dependent approach, but it would be prohibitively expensive to collect and annotate data for each user. PADMA uses a novel Association-based Multiple Instance Learning (AMIL) method, which learns a personal facial affect model through expression frequency analysis and does not need expert input or frame-based annotation. PADMA involves a training/calibration phase in which the user watches short video segments and reports the affect that best describes his/her overall feeling throughout the segment. The most indicative facial gestures are identified and extracted from the facial response video, and the association between gestures and affect labels is determined by the distribution of each gesture over all reported affects. Hence both the geometric deformation and the distribution of key facial gestures are specially adapted for each user. We show results that demonstrate the feasibility, effectiveness and extensibility of our approach.
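The following is a minimal sketch, not the paper's AMIL implementation, of the segment-level association idea described in the abstract: gestures extracted from a calibration segment inherit that segment's single self-reported affect label, and a gesture's association with each affect is estimated from how its occurrences are distributed over all reported affects. The function names, the toy gesture identifiers, and the sum-of-scores prediction rule are illustrative assumptions.

```python
from collections import Counter, defaultdict

def learn_gesture_affect_associations(segments):
    """Estimate per-gesture affect associations from segment-level labels.

    segments: list of (gesture_ids, affect_label) pairs, where gesture_ids is
    the sequence of facial-gesture identifiers observed in one video segment
    and affect_label is the single affect the user reported for that segment.
    (Hypothetical data layout for illustration only.)
    """
    counts = defaultdict(Counter)   # gesture -> Counter of affects it co-occurred with
    totals = Counter()              # gesture -> total number of occurrences
    for gesture_ids, affect in segments:
        for g in gesture_ids:
            counts[g][affect] += 1
            totals[g] += 1
    # Association score: fraction of a gesture's occurrences seen under each affect.
    return {g: {a: c / totals[g] for a, c in affect_counts.items()}
            for g, affect_counts in counts.items()}

def predict_affect(gesture_ids, associations, affects):
    """Score an unseen gesture sequence by accumulating association scores per affect."""
    scores = {a: 0.0 for a in affects}
    for g in gesture_ids:
        for a, s in associations.get(g, {}).items():
            scores[a] += s
    return max(scores, key=scores.get)

# Toy usage: three calibration segments, then a prediction for a new sequence.
train = [
    (["brow_raise", "smile", "smile"], "amused"),
    (["brow_furrow", "lip_press"], "frustrated"),
    (["smile", "brow_raise"], "amused"),
]
assoc = learn_gesture_affect_associations(train)
print(predict_affect(["smile", "brow_raise"], assoc, ["amused", "frustrated"]))
```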
doi:10.1109/taffc.2015.2495222 · fatcat:6h4d72q3ybcldgate4mmin7obu
Fulltext PDF (Web Archive): https://web.archive.org/web/20160429121930/http://www.cs.ucf.edu:80/~kienhua/classes/COP6731/Reading/AffectiveComputing.pdf · Publisher: https://doi.org/10.1109/taffc.2015.2495222