About the loss. Sincerely, I would like to ask: #116
Comments
I think this line does it:
`log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))`

What confuses me is that this feels like it only minimizes the positive-sample distance. The term that maximizes the negative-sample distance does not appear in the final loss. I feel like I'm not understanding something; can you help me?
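For context, here is a minimal runnable sketch (not the repository's code; the similarity values and the assumption that index 0 is the positive are mine) showing that the `log_prob` form above already penalizes high similarity to negatives through its denominator:

```python
import torch

# One anchor compared against 4 contrast samples; assume index 0 is the
# positive and the rest are negatives (values are illustrative only).
logits = torch.tensor([[5.0, 2.0, 1.0, 0.5]])

exp_logits = torch.exp(logits)
# Same form as the quoted line: a log-softmax over all contrast samples.
log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))

# Only the positive's term is kept in the final loss (the mask selection
# in the repository plays this role).
loss = -log_prob[:, 0]
print(loss)  # ~0.076

# If a negative becomes more similar to the anchor, the denominator grows
# and the loss increases, so gradient descent pushes negatives apart.
logits = torch.tensor([[5.0, 4.5, 1.0, 0.5]])
log_prob = logits - torch.log(torch.exp(logits).sum(1, keepdim=True))
print(-log_prob[:, 0])  # ~0.492, larger: the closer negative is penalized
```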
I also wonder about that. At that line, aren't the positive pairs also included in the denominator, because of the `exp_logits.sum`?
The purpose of contrastive loss is to minimize the distance between positive samples while maximizing the distance between negative samples. However, I only find the minimization of the positive-sample distance in this loss, and I don't see the maximization of the negative-sample distance. Can you tell me which code achieves the maximization of the negative-sample distance?
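One way to see the answer on paper (notation is mine, not the paper's: $s_p$ is the anchor-positive similarity and $s_n$ are the anchor-negative similarities, both already divided by the temperature):

$$
\mathcal{L} = -\log \frac{\exp(s_p)}{\sum_{k} \exp(s_k)} = -s_p + \log\Bigl(\exp(s_p) + \sum_{n} \exp(s_n)\Bigr)
$$

Minimizing the first term increases $s_p$, pulling the positive closer; minimizing the log-sum-exp term decreases every $s_n$, pushing the negatives away. Both objectives therefore live in the single cross-entropy-style expression, even though only the positive index appears in the numerator.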