A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Improved Deep Feature Learning by Synchronization Measurements for Multi-Channel EEG Emotion Recognition
2020
Complexity
Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) for
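The abstract describes mapping pairwise synchronization between EEG channels into a gray image. As a rough illustration of that idea (not the paper's exact method), the sketch below builds a channel-by-channel synchronization matrix and scales it to 8-bit grayscale; absolute Pearson correlation is used here as a simple stand-in for the maximal information coefficient (MIC) named in the abstract.

```python
import numpy as np

def sync_gray_image(eeg, levels=256):
    """Map a multichannel EEG window (channels x samples) to a gray image.

    Illustrative sketch only: the paper uses MIC between channel pairs;
    absolute Pearson correlation is substituted here as a simpler
    synchronization measure.
    """
    corr = np.abs(np.corrcoef(eeg))                       # (C, C) pairwise sync
    img = np.round(corr * (levels - 1)).astype(np.uint8)  # scale to 0..255
    return img

# Hypothetical example: 32 channels, 128 samples per window.
rng = np.random.default_rng(0)
window = rng.standard_normal((32, 128))
img = sync_gray_image(window)
```

The resulting `img` is a symmetric 32x32 grayscale matrix whose bright pixels mark strongly synchronized channel pairs; such images can then be fed to a convolutional feature learner.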
doi:10.1155/2020/6816502
fatcat:ejjirqbapbd5hjpmgmb6dfxqoe