Show simple item record

dc.contributor.advisor: Fadlullah, Zubair
dc.contributor.advisor: Fouda, Mostafa
dc.contributor.author: Gad, Gad
dc.date.accessioned: 2023-06-14T14:28:40Z
dc.date.available: 2023-06-14T14:28:40Z
dc.date.created: 2023
dc.date.issued: 2023
dc.identifier.uri: https://knowledgecommons.lakeheadu.ca/handle/2453/5175
dc.description.abstract: The field of deep learning has grown rapidly in recent years across domains where data can be collected and processed. Because data plays a central role in this revolution, however, moving it from where it is produced to central servers and data centers for processing carries risks. Federated Learning (FL) was introduced to address this issue: a framework for collaboratively training a global model on distributed data. Deploying FL nevertheless poses several unique challenges, including communication overhead and system and statistical heterogeneity. And although FL is private by design, since clients do not share their local data, privacy remains a concern because sensitive information can be leaked from the exchanged gradients. To address these challenges, this thesis proposes incorporating techniques such as Knowledge Distillation (KD) and Differential Privacy (DP) into FL. Specifically, a model-agnostic FL algorithm based on KD is proposed, called the Federated Learning algorithm based on Knowledge Distillation (FedAKD). FedAKD uses a shared proxy dataset to compute and transfer knowledge in the form of soft labels, which are sent to the server for aggregation and broadcast back to the clients, which train on them in addition to performing local training. We further elaborate on applying Local Differential Privacy (LDP), where clients apply gradient clipping and noise injection following Differentially Private Stochastic Gradient Descent (DP-SGD). FedAKD is evaluated on Human Activity Recognition (HAR) datasets in terms of accuracy and communication efficiency. (An illustrative sketch of the soft-label exchange and DP-SGD noising follows this record.) [en_US]
dc.language.iso: en_US [en_US]
dc.subject: Deep Learning (DL) [en_US]
dc.subject: Federated Learning (FL) [en_US]
dc.subject: Empirical Loss Minimization (ELM) [en_US]
dc.subject: Federated Learning algorithm based on Knowledge Distillation (FedAKD) [en_US]
dc.subject: Local Differential Privacy (LDP) [en_US]
dc.subject: Human Activity Recognition (HAR) [en_US]
dc.title: Light-weight federated learning with augmented knowledge distillation for human activity recognition [en_US]
dc.type: Thesis [en_US]
etd.degree.name: Master of Science [en_US]
etd.degree.level: Master [en_US]
etd.degree.discipline: Computer Science [en_US]
etd.degree.grantor: Lakehead University [en_US]
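
The abstract describes two mechanisms concretely enough to sketch: clients exchange soft labels computed on a shared proxy dataset (rather than model weights), and they apply DP-SGD-style gradient clipping and noise injection locally. The following is a minimal PyTorch sketch under stated assumptions, not the thesis's actual implementation: every function name, the temperature T, the clip bound CLIP_NORM, and the noise multiplier SIGMA are illustrative choices.

    # Minimal illustrative sketch (assumptions, not the thesis code): FedAKD-style
    # soft-label exchange over a shared proxy dataset, plus simplified DP-SGD
    # clipping and noising applied on the client side.
    import torch
    import torch.nn.functional as F

    T = 2.0          # distillation temperature (assumed value)
    CLIP_NORM = 1.0  # gradient clipping bound C (assumed value)
    SIGMA = 1.0      # Gaussian noise multiplier (assumed value)

    def client_soft_labels(model, proxy_loader):
        # Knowledge each client shares: temperature-softened predictions on the proxy set.
        model.eval()
        with torch.no_grad():
            return torch.cat([F.softmax(model(x) / T, dim=1) for x, _ in proxy_loader])

    def server_aggregate(soft_labels_from_clients):
        # Server aggregates knowledge by averaging clients' soft labels, then broadcasts.
        return torch.stack(soft_labels_from_clients).mean(dim=0)

    def dp_clip_and_noise_(model):
        # Simplified DP-SGD step: clip the total gradient norm to CLIP_NORM, then add
        # Gaussian noise. (True DP-SGD clips per-sample gradients before averaging.)
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, CLIP_NORM / (total_norm + 1e-12))
        for g in grads:
            g.mul_(scale).add_(torch.randn_like(g) * SIGMA * CLIP_NORM)

    def distill_step(model, optimizer, x_proxy, global_soft):
        # One client-side distillation step toward the aggregated global soft labels.
        model.train()
        optimizer.zero_grad()
        log_p = F.log_softmax(model(x_proxy) / T, dim=1)
        loss = F.kl_div(log_p, global_soft, reduction="batchmean") * (T * T)
        loss.backward()
        dp_clip_and_noise_(model)  # apply LDP clipping and noise before the update
        optimizer.step()
        return loss.item()

In a full round under this sketch, each client would call client_soft_labels on the shared proxy data, the server would call server_aggregate and broadcast the result, and each client would run distill_step interleaved with ordinary local training on its private data; only soft labels, never raw data or full weight tensors, cross the network.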

