Marich aims to extract models using public data, with three objectives:
- Distributional Equivalence of Extracted Prediction Distribution
- Max-Information Extraction of the Target Model
- Query Efficiency
To achieve these goals, Marich uses an active learning algorithm to query and extract the target models.
The attack framework is shown below:
Paper: https://arxiv.org/abs/2302.08466
Talk at PPAI Workshop at AAAI, 2023: Slides
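For intuition, here is a minimal sketch of a generic active-learning extraction loop. This is not the exact Marich algorithm: the helper `select_queries` and all hyperparameters below are placeholders, and Marich's actual query selection criteria are described in the paper and implemented in the notebooks.

```python
import torch
import torch.nn.functional as F

def extract_model(target_model, extracted_model, public_pool, select_queries,
                  query_budget=5000, batch_size=500, epochs=5, lr=1e-3):
    """Generic active-learning extraction loop (illustrative sketch only).

    target_model:    black-box model; only its predictions are used.
    extracted_model: local model trained to imitate the target (returns logits).
    public_pool:     tensor of unlabeled public-domain inputs.
    select_queries:  pluggable strategy that picks the next batch of query
                     indices (Marich uses its own selection criteria).
    """
    optimizer = torch.optim.Adam(extracted_model.parameters(), lr=lr)
    queried_x, queried_y = [], []

    for _ in range(query_budget // batch_size):
        # 1. Choose which public points to send to the target model.
        idx = select_queries(extracted_model, public_pool, batch_size)
        x = public_pool[idx]

        # 2. Query the black-box target model (label-only access).
        with torch.no_grad():
            y = target_model(x).argmax(dim=1)
        queried_x.append(x)
        queried_y.append(y)

        # 3. Retrain the extracted model on all queries collected so far.
        X = torch.cat(queried_x)
        Y = torch.cat(queried_y)
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = F.cross_entropy(extracted_model(X), Y)
            loss.backward()
            optimizer.step()

    return extracted_model
```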
The accuracies of competing active learning methods are shown alongside Marich's for comparison:
The accuracy curves shown above correspond, in order, to:
- A logistic regression model trained on the MNIST dataset, extracted using another logistic regression model with EMNIST queries.
- A logistic regression model trained on the MNIST dataset, extracted using another logistic regression model with CIFAR10 queries.
- A BERT model trained on the BBC News dataset, extracted using another BERT model with AG News queries.
- A ResNet trained on the CIFAR10 dataset, extracted using a CNN with ImageNet queries.
Next, we present the KL divergence between the outputs of the extracted models and the target models, computed on a separate subset of the training-domain data, to compare the distributional equivalence of the models extracted by different algorithms.
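As an illustration, the following minimal sketch shows how such a KL divergence between the target and extracted models' prediction distributions can be computed on held-out data. The function below is illustrative and not the repository's evaluation script.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_kl_divergence(target_model, extracted_model, eval_loader):
    """Average KL( target || extracted ) over a held-out subset of the
    target's training domain (illustrative evaluation sketch)."""
    total_kl, n = 0.0, 0
    for x, _ in eval_loader:
        p = F.softmax(target_model(x), dim=1)             # target prediction distribution
        log_q = F.log_softmax(extracted_model(x), dim=1)  # extracted model log-probabilities
        # KL(p || q) summed over classes and over the batch
        total_kl += F.kl_div(log_q, p, reduction="sum").item()
        n += x.size(0)
    return total_kl / n
```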
The order of the extraction setups is the same as for the accuracies above. The table below shows a portion of the results obtained during our experiments:
The repository contains four folders:
- bert_al: Contains K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling code for the BERT experiments
- lr_cnn_res_al: Contains K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling code for the Logistic Regression, CNN, and ResNet experiments
- bert_marich: Contains Marich code for the BERT experiments
- lr_cnn_res_marich: Contains Marich code for the Logistic Regression, CNN, and ResNet experiments
The Jupyter notebooks provided in the folders serve as demos for users.
To experiment with new data, one needs to:
- In the data.py file, add a compatible get_DATA function, following the structure of the existing get_DATA functions (a minimal sketch is given after this list).
- In the handlers.py file, add a compatible Handler class, following the structure of the existing Handler classes.
- For Marich, new data should be supplied following the Jupyter notebooks.
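As a rough illustration of the two files involved, here is a minimal sketch assuming a DeepAL-style interface. The names `NewData_Handler` and `get_NEWDATA`, the random placeholder data, and the exact return format are assumptions; copy the signatures from the existing functions and classes in the repository.

```python
import numpy as np
import torch
from torch.utils.data import Dataset

# handlers.py: a Handler wraps raw arrays as a torch Dataset and returns the
# sample index along with each item, mirroring the existing Handler classes.
class NewData_Handler(Dataset):
    def __init__(self, X, Y, transform=None):
        self.X, self.Y, self.transform = X, Y, transform

    def __getitem__(self, index):
        x, y = self.X[index], self.Y[index]
        if self.transform is not None:
            x = self.transform(x)
        return x, y, index

    def __len__(self):
        return len(self.X)

# data.py: a get_DATA function loads the train/test splits for the new dataset.
# The return format here is an assumption; mirror the existing get_DATA functions.
def get_NEWDATA(handler):
    # Replace the random arrays below with real loading code for the new dataset.
    X_train = np.random.rand(1000, 28, 28).astype(np.float32)
    Y_train = torch.randint(0, 10, (1000,))
    X_test = np.random.rand(200, 28, 28).astype(np.float32)
    Y_test = torch.randint(0, 10, (200,))
    return X_train, Y_train, X_test, Y_test, handler
```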
To experiment with new models, one needs to:
- Add the corresponding model to the nets.py file. For the active learning algorithms other than Marich, the model must be modified so that its forward method returns both the output and a preferred embedding, and it must provide a method that returns the embedding dimension (a minimal sketch is given below).
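For example, a new entry in nets.py might look like the following minimal sketch. The class name, layer sizes, and input dimensions are placeholders; the essential parts are the forward method returning (output, embedding) and the get_embedding_dim method.

```python
import torch.nn as nn
import torch.nn.functional as F

class NewNet(nn.Module):
    """Placeholder model for nets.py (illustrative only)."""
    def __init__(self, dim_in=784, dim_emb=128, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, dim_emb)
        self.fc2 = nn.Linear(dim_emb, n_classes)
        self.dim_emb = dim_emb

    def forward(self, x):
        x = x.view(x.size(0), -1)
        emb = F.relu(self.fc1(x))   # embedding used by strategies such as K-Center
        out = self.fc2(emb)         # class scores used for the loss
        return out, emb             # the active learning strategies expect both

    def get_embedding_dim(self):
        return self.dim_emb
```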
For the K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling experiments, we have modified and used code from Huang, Kuan-Hao. "DeepAL: Deep Active Learning in Python." 2021. (Link: https://arxiv.org/pdf/2111.15258.pdf)
If you use or study any part of this repository, please cite it as:
@article{karmakar2023marich,
  title={Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data},
  author={Karmakar, Pratik and Basu, Debabrota},
  journal={arXiv preprint arXiv:2302.08466},
  year={2023}
}