Robust Collaborative Learning in the Presence of Malicious Clients
Author: Alkhunaizi, Naif
Department: Machine Learning
Type: Thesis
Date: 2022
Language: English
Abstract
We look at another collaborative learning paradigm, Split Learning (SL), and explore it with Vision Transformers (ViTs) to develop a new architecture. Specifically, we address model extraction attacks, in which a malicious client attempts to steal the components of the model that are not accessible to it. We propose a Secure Split Learning-based Vision Transformer (SSLViT) for image classification tasks, which enables collaboration between multiple clients without leaking clients' private data while simultaneously guarding against malicious clients attempting to infer the model parameters of the server. This is achieved by employing an ensemble of projection networks on the server and randomly projecting the intermediate features learned by the ViT blocks before they are sent to the clients. While these random projections remain useful to the client and do not significantly impede the split learning process, they prevent a malicious client from learning the server's model parameters. We evaluate the proposed method on two publicly available datasets in the natural and medical imaging domains, CIFAR-100 and HAM10000, under IID and non-IID settings. Experiments demonstrate the effectiveness of the SSLViT framework in protecting the server's model parameters against extraction attacks while still achieving a positive collaborative performance gain, even when the malicious client has partial knowledge of the server's model architecture. All experiments were implemented in PyTorch using real-world datasets. The code can be found at https://github.com/Naiftt/SPAFD.
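The defense described above hinges on the server applying a randomly chosen projection to intermediate ViT features before releasing them. The thesis does not spell out the projection networks here; the following is a minimal NumPy sketch of the core idea only, using fixed random orthonormal matrices as hypothetical stand-ins for the server's ensemble of projection networks (all dimensions and names below are illustrative, not from the thesis).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: token embeddings of size d from a ViT block,
# projected to the same size so the client-side layers are unchanged.
d, num_tokens, ensemble_size = 64, 16, 4

# Server-side ensemble: random orthonormal matrices as stand-ins for
# small projection networks (QR of a Gaussian matrix yields orthonormal Q).
projections = [np.linalg.qr(rng.standard_normal((d, d)))[0]
               for _ in range(ensemble_size)]

def serve_features(intermediate, rng):
    """Project intermediate ViT features with a randomly drawn member of
    the ensemble before sending them to a client."""
    P = projections[rng.integers(ensemble_size)]
    return intermediate @ P

features = rng.standard_normal((num_tokens, d))  # stand-in for a ViT block output
sent = serve_features(features, rng)
```

Because each projection here is orthonormal, it preserves norms and pairwise distances, so the transmitted features stay informative for the client's task, while the client never observes the raw server-side activations and sees a different random view across rounds.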
Citation
N.T.B. Alkhunaizi, "Robust Collaborative Learning in the Presence of Malicious Clients", M.S. Thesis, Machine Learning, MBZUAI, Abu Dhabi, UAE, 2022.
