I have a slight problem understanding the unsupervised loss function given in the paper "Inductive Representation Learning on Large Graphs" (p. 5, Equation 1):

J(z_u) = -log(σ(z_u^T z_v)) - Q · E_{v_n ~ P_n(v)} log(σ(-z_u^T z_{v_n}))

I understand that the first term encourages nodes that co-occur on a random walk to have similar embeddings, and the second term pushes the embeddings of negative samples apart. However, the symbols used in the equation confuse me a little.
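For concreteness, here is a minimal NumPy sketch of that loss for a single node u. All names are my own, and the expectation over the negative-sampling distribution P_n(v) is approximated by averaging over Q drawn negatives, so Q · mean reduces to a sum over the samples:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unsupervised_loss(z_u, z_v, z_neg, Q=5):
    """Sketch of GraphSAGE's unsupervised loss for one node u.

    z_u:   embedding of node u, shape (d,)
    z_v:   embedding of a node v that co-occurs with u on a random walk, shape (d,)
    z_neg: embeddings of Q negative samples drawn from P_n(v), shape (Q, d)
    """
    # First term: pull the random-walk co-occurring pair (u, v) together.
    pos = -np.log(sigmoid(z_u @ z_v))
    # Second term: push the Q negative samples away from u;
    # Q * mean approximates Q * E_{v_n ~ P_n(v)}[...].
    neg = -Q * np.mean(np.log(sigmoid(-z_neg @ z_u)))
    return pos + neg

rng = np.random.default_rng(0)
z_u = rng.normal(size=16)
z_v = rng.normal(size=16)
z_neg = rng.normal(size=(5, 16))
loss = unsupervised_loss(z_u, z_v, z_neg)
```

Since -log(σ(x)) is strictly positive for any finite x, both terms (and hence the loss) are always positive; training drives the first term down by increasing z_u^T z_v and the second down by decreasing z_u^T z_{v_n}.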
Replies: 1 comment 1 reply
I understand the equation now after studying Noise Contrastive Estimation.