I read the technical report and observed the impressive performance of text-to-image (t2i) generation using the provided demo code.
However, there is an issue with the image editing process in demo_image2image.py. When I try to edit an image generated by t2i, the output is essentially unchanged; even after adjusting the CFG scale, nothing changes. Could you please explain whether there are hyperparameters that need to be tuned to perform i2i editing?
Here is the result of editing:
We do observe that, for the editing task, the model has a strong inclination to keep the original image unchanged. I think this is related to the small amount of editing data used and to the nature of the editing task itself: during training, the model is supervised to keep most regions unchanged, so the gradient from the edited region gets swamped. For now, we suggest running the edit multiple times with different seeds (usually 5 trials are enough); some of them should work.
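For reference, here is a minimal sketch of the retry-over-seeds workaround. Note that `load_editing_pipeline` and `pipeline.edit` are hypothetical placeholders, not the actual API; substitute the model-loading and editing calls used in demo_image2image.py.

```python
# Minimal sketch: re-run the same edit with several seeds and save each result.
# load_editing_pipeline() and pipeline.edit() are hypothetical placeholders --
# replace them with the actual calls from demo_image2image.py.
import torch
from PIL import Image


def try_editing_with_seeds(source_path, prompt, seeds=range(5), cfg_scale=4.0):
    pipeline = load_editing_pipeline()  # hypothetical: load the model as in demo_image2image.py
    source = Image.open(source_path).convert("RGB")
    outputs = []
    for seed in seeds:
        torch.manual_seed(seed)  # a different seed for each trial
        edited = pipeline.edit(  # hypothetical editing call
            image=source,
            prompt=prompt,
            guidance_scale=cfg_scale,
        )
        out_path = f"edited_seed{seed}.png"
        edited.save(out_path)
        outputs.append(out_path)
    return outputs
```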