A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task
2018
International Journal of Electrical and Computer Engineering (IJECE)
Researchers have long used images, audio, and video to develop tasks in human facial recognition and emotion detection. Most available datasets focus on static expressions, short videos of an emotion changing from neutral to its peak, or differences in sound used to detect a person's current emotion. Moreover, the common datasets were collected and processed in the United States (US) or Europe, and only a few datasets originated
doi:10.11591/ijece.v8i5.pp4042-4046
fatcat:j2yccyzw7zdhjiuaf2mwkhnrae