Boosting feature extraction performance on the aspect of representation learning efficiency
Machine learning is known for its ability to process data automatically. Yet while the performance of state-of-the-art models in recent well-known learning frameworks grows only slowly, their parameter counts and training complexity rise almost unnoticed. Motivated by this situation, we propose two efficient methods: one that increases the automation of tasks that normally require manual effort, and one that uses data more efficiently.

Emotion is one of the main psychological factors that affect human behaviour. Neural network models trained on Electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, exploiting EEG-based spatial information with the popular two-dimensional kernels of convolutional neural networks (CNNs) has rarely been explored in the extant literature. We address this gap by proposing an EEG-based spatial-frequency framework for recognizing human emotion that requires fewer manually set parameters and generalizes better. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained on frequency-domain features and the other on spatial-domain features. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The experiments directly support our motivation: combining the two feature domains significantly improves final recognition performance, and the proposed spatial feature extraction method obtains informative spatial features with less human intervention.

Image classification is a classic problem in deep learning. As state-of-the-art models have become deeper and wider, fewer studies have been devoted to using data efficiently. Inspired by contrastive self-supervised learning frameworks, we propose a supervised multi-label contrastive learning framework that further improves the backbone model's performance.
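To make the supervised contrastive idea concrete, the sketch below implements a standard batch-wise supervised contrastive loss in NumPy, where positives for each anchor are the other samples sharing its label. This is a minimal illustration of the general objective, not the exact loss used in this thesis; the function name, temperature value, and batch construction are all hypothetical.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    z: (N, D) array of embeddings; labels: (N,) integer class labels.
    Positives for anchor i are all other samples with the same label.
    (Illustrative sketch; temperature=0.1 is an assumed default.)
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise embeddings
    sim = z @ z.T / temperature                       # scaled pairwise similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                 # exclude self-pairs

    # Log-softmax over all non-self pairs for each anchor (numerically stable).
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # Average the log-probabilities over each anchor's positives.
    pos = (labels[:, None] == labels[None, :]) & not_self
    valid = pos.sum(axis=1) > 0                       # skip anchors with no positive
    loss_i = -(log_prob * pos).sum(axis=1)[valid] / pos.sum(axis=1)[valid]
    return loss_i.mean()
```

When same-class embeddings are already tightly aligned, the loss approaches zero; for arbitrary embeddings it is strictly positive, which is what drives same-class samples together in the representation space.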
We verified our procedure on the CIFAR-10 and CIFAR-100 datasets. With similar hyperparameters and a similar number of parameters, our approach outperformed both the backbone and self-supervised learning models.
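Returning to the EEG framework above, the two-stream design can be sketched as two parallel feature extractors whose outputs are concatenated before classification. All dimensions and weights here are illustrative placeholders (62 electrodes and 5 frequency bands echo SEED-style features), and the dense spatial stream merely stands in for the two-dimensional convolutional kernels the framework actually uses.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative sizes: 62 electrodes x 5 frequency bands, 3 emotion classes.
N_CH, N_BAND, N_HID, N_CLS = 62, 5, 32, 3

W_freq = rng.normal(scale=0.1, size=(N_BAND, N_HID))    # frequency-stream weights
W_spat = rng.normal(scale=0.1, size=(N_CH, N_HID))      # spatial-stream weights
W_out = rng.normal(scale=0.1, size=(2 * N_HID, N_CLS))  # fusion classifier

def two_stream_forward(x):
    """x: (batch, channels, bands) EEG features -> (batch, classes) logits."""
    f_in = x.mean(axis=1)                   # per-band profile (frequency view)
    s_in = x.mean(axis=2)                   # per-electrode profile (spatial view)
    h_f = relu(f_in @ W_freq)               # frequency-stream features
    h_s = relu(s_in @ W_spat)               # spatial-stream features
    fused = np.concatenate([h_f, h_s], axis=1)  # two-stream fusion
    return fused @ W_out
```

The key structural point is the late fusion: each stream specialises in one view of the signal, and only the concatenated representation feeds the final classifier.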