I would like to generate a picture akin to the one displayed in the README. However, even though I train my model to convergence beyond the point shown there, I do not get the distinct striations; instead I get a more spread-out plot, still with some striations.

Has anyone been able to replicate the image as displayed in the paper and README? Was it generated using an actual 2D latent dimension, or a higher-dimensional latent space projected down to 2D with PCA? I have tried both and neither has worked. Any help would be greatly appreciated.
Same problem here. I've trained the model on the 500k ChEMBL data set using a 292-dimensional latent space, and after 30 epochs I got loss: 0.4956 and acc: 0.955. However, I'm far from that performance when using a 2D latent space (loss: 2.8712, acc: 0.7075 after 30 epochs). This is how the data looks in the 2D latent space:
```python
from pylab import figure, scatter, show

# Encode the training data into the 2D latent space and scatter-plot it
x_latent = model.encoder.predict(data_train)
figure(figsize=(6, 6))
scatter(x_latent[:, 0], x_latent[:, 1], marker='.')
show()
```
And this is how it looks using the first two principal components of the 292-dimensional latent space:
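For reference, this is roughly how I project the 292-dimensional latent codes onto their first two principal components. A minimal NumPy-only sketch (no scikit-learn needed); here `x_latent` is random placeholder data standing in for `model.encoder.predict(data_train)`:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for x_latent = model.encoder.predict(data_train)
x_latent = rng.normal(size=(1000, 292))

# PCA via SVD: center the data, then project onto the
# top-2 right singular vectors (the first two principal axes)
centered = x_latent - x_latent.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
x_2d = centered @ vt[:2].T  # shape (1000, 2), ready for the scatter plot above

print(x_2d.shape)
```

The two columns of `x_2d` can then be passed to `scatter` exactly as in the 2D-latent-space snippet.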