
Supervised Baseline / SSL-Method Possible Number of Epochs mismatch #75

Open
LouisW2202 opened this issue Nov 7, 2024 · 0 comments

If I use 10% of the data as labeled, the supervised loader contains 1/10 of the training data, while the unsupervised loader contains the remaining 9/10. In the code, both loaders are zipped together, and the supervised loader is repeated 9 times so that their lengths match. Consequently, over the course of one epoch, the supervised loader is iterated through 9 times.
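The behavior described above can be sketched as follows. This is a minimal illustration with hypothetical toy loaders (plain lists standing in for batch loaders), not the repository's actual code; it only demonstrates how zipping a cycled short loader against a long one replays each labeled item 9 times per epoch at a 10% label fraction.

```python
# Sketch of the zip-and-repeat pattern: the supervised loader is 1/9
# the length of the unsupervised one, so it is cycled to match.
from itertools import cycle

labeled = list(range(10))    # 10% of the data is labeled
unlabeled = list(range(90))  # remaining 90% is unlabeled

# Pair each unlabeled batch with a labeled batch, restarting the
# labeled loader whenever it is exhausted.
pairs = list(zip(cycle(labeled), unlabeled))

# The epoch length is set by the unsupervised loader...
assert len(pairs) == len(unlabeled)
# ...so each labeled item is consumed 9 times within a single epoch.
print(len(pairs) / len(labeled))
```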

Therefore, when training the supervised baseline, the number of epochs should be adjusted accordingly: to match the number of supervised gradient updates, the supervised baseline should be trained for 9 times the number of epochs used in the SSL setting. Since nothing in the code enforces this adjustment, and your work only states, "For optimization, we train for 50 epochs," I would like to know whether you considered this issue. Additionally, in the experiments comparing the SSL-trained model with the supervised baseline, did you ensure that the effective number of supervised epochs matched?

Very interesting work in any case; I look forward to your insights on this matter.
