
dc.contributor.advisor: Yang, Yimin
dc.contributor.advisor: Wei, Ruizhong
dc.contributor.author: Deng, Haojin
dc.date.accessioned: 2023-03-09T18:57:37Z
dc.date.available: 2023-03-09T18:57:37Z
dc.date.created: 2022
dc.date.issued: 2022
dc.identifier.uri: https://knowledgecommons.lakeheadu.ca/handle/2453/5092
dc.description.abstract: Machine learning is known for its ability to process data automatically. While the performance of state-of-the-art models in recent well-known learning frameworks has grown only slowly, their parameter counts and training complexity have risen largely unnoticed. Motivated by this situation, we propose two efficient methods: one to automate tasks that currently require manual effort, and one to handle data more efficiently.

Emotion is one of the main psychological factors affecting human behaviour. Neural network models trained on Electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, exploiting EEG-based spatial information with the popular two-dimensional kernels of convolutional neural networks (CNNs) has rarely been explored in the literature. We address this gap by proposing an EEG-based spatial-frequency framework for recognizing human emotion, which requires fewer manually tuned parameters while achieving better generalization performance. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained in the frequency domain and the other in the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The experiments directly support our motivation: utilizing the two-stream domain features significantly improves the final recognition performance. The results also show that the proposed spatial feature extraction method obtains valuable spatial features with less human interaction.

Image classification is a classic problem in deep learning. As state-of-the-art models have become deeper and wider, fewer studies have been devoted to utilizing data efficiently. Inspired by contrastive self-supervised learning frameworks, we propose a supervised multi-label contrastive learning framework to further improve the backbone model's performance. We verified our approach on the CIFAR-10 and CIFAR-100 datasets. With similar hyperparameters and parameter counts, our approach outperformed both the backbone and self-supervised learning models. en_US
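The two-stream idea described in the abstract (one branch for frequency-domain EEG features, one for spatial-domain features, fused before classification) can be sketched in a few lines. This is a minimal illustrative sketch only: all layer sizes, names, and the fusion-by-concatenation choice are assumptions for the example, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(x @ w + b, 0.0)

def two_stream_logits(freq_feats, spat_feats, params):
    """Encode each stream separately, concatenate, then classify."""
    h_f = dense(freq_feats, params["wf"], params["bf"])  # frequency branch
    h_s = dense(spat_feats, params["ws"], params["bs"])  # spatial branch
    fused = np.concatenate([h_f, h_s], axis=-1)          # late fusion
    return fused @ params["wo"] + params["bo"]           # class logits

# Toy shapes (assumptions): 310 frequency features (e.g. 62 channels x 5
# bands, a SEED-style setup), an 8x9 spatial map flattened to 72 values,
# and 3 emotion classes.
params = {
    "wf": rng.normal(size=(310, 64)), "bf": np.zeros(64),
    "ws": rng.normal(size=(72, 64)),  "bs": np.zeros(64),
    "wo": rng.normal(size=(128, 3)),  "bo": np.zeros(3),
}

batch_freq = rng.normal(size=(4, 310))
batch_spat = rng.normal(size=(4, 72))
logits = two_stream_logits(batch_freq, batch_spat, params)
print(logits.shape)  # (4, 3): one score per emotion class per sample
```

Late fusion by concatenation is only one option; the hierarchical framework in the thesis may combine the streams differently.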
dc.language.iso: en_US en_US
dc.subject: Machine learning en_US
dc.subject: Electroencephalogram en_US
dc.subject: EEG emotional classification en_US
dc.title: Boosting feature extraction performance on the aspect of representation learning efficiency en_US
dc.type: Thesis en_US
etd.degree.name: Master of Science en_US
etd.degree.level: Master en_US
etd.degree.discipline: Computer Science en_US
etd.degree.grantor: Lakehead University en_US

